* [Intel-xe] [PATCH] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs
@ 2023-03-16 18:29 Lucas De Marchi
  2023-03-16 18:32 ` [Intel-xe] ✓ CI.Patch_applied: success for " Patchwork
                   ` (2 more replies)
  0 siblings, 3 replies; 21+ messages in thread
From: Lucas De Marchi @ 2023-03-16 18:29 UTC (permalink / raw)
To: intel-xe
Cc: Lucas De Marchi, thomas.hellstrom, mauro.chehab, maarten.lankhorst

Introduced with the 6.2 rebase due to
commit 000458b5966f ("drm: Only select I2C_ALGOBIT for drivers that
actually need it"). Make a similar selection when CONFIG_DRM_XE_DISPLAY
is enabled. Also, provide this as a fixup-only commit, to be squashed in
the next rebase. With this, the following command works again:

	./tools/testing/kunit/kunit.py build \
		--kunitconfig drivers/gpu/drm/xe/.kunitconfig

Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
---
 drivers/gpu/drm/xe/Kconfig | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/xe/Kconfig b/drivers/gpu/drm/xe/Kconfig
index 4684e99549d3..aeaf3ce19c4f 100644
--- a/drivers/gpu/drm/xe/Kconfig
+++ b/drivers/gpu/drm/xe/Kconfig
@@ -44,6 +44,8 @@ config DRM_XE
 config DRM_XE_DISPLAY
 	bool "Enable display support"
 	depends on DRM_XE && EXPERT
+	select I2C
+	select I2C_ALGOBIT
 	default y
 	help
 	  Disable this option only if you want to compile out display support.
-- 
2.39.0

^ permalink raw reply related	[flat|nested] 21+ messages in thread
* [Intel-xe] ✓ CI.Patch_applied: success for fixup! drm/xe: Introduce a new DRM driver for Intel GPUs
  2023-03-16 18:29 [Intel-xe] [PATCH] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs Lucas De Marchi
@ 2023-03-16 18:32 ` Patchwork
  2023-03-16 18:33 ` [Intel-xe] ✗ CI.KUnit: failure " Patchwork
  2023-03-17  6:08 ` [Intel-xe] [PATCH] " Mauro Carvalho Chehab
  2 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2023-03-16 18:32 UTC (permalink / raw)
To: Lucas De Marchi; +Cc: intel-xe

== Series Details ==

Series: fixup! drm/xe: Introduce a new DRM driver for Intel GPUs
URL   : https://patchwork.freedesktop.org/series/115290/
State : success

== Summary ==

=== Applying kernel patches on branch 'drm-xe-next' with base: ===
commit 68c0f7421b74cf51fe442bed6d4395d28ded5d7d
Author:     Thomas Hellström <thomas.hellstrom@linux.intel.com>
AuthorDate: Tue Mar 14 15:56:44 2023 +0100
Commit:     Thomas Hellström <thomas.hellstrom@linux.intel.com>
CommitDate: Thu Mar 16 15:16:00 2023 +0100

    drm/xe/vm: Defer vm rebind until next exec if nothing to execute

    If all compute engines of a vm in compute mode are idle,
    defer a rebind to the next exec to avoid the VM unnecessarily trying
    to make memory resident and compete with other VMs for available
    memory space.

    Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
    Reviewed-by: Matthew Brost <matthew.brost@intel.com>
=== git am output follows ===
Applying: fixup! drm/xe: Introduce a new DRM driver for Intel GPUs

^ permalink raw reply	[flat|nested] 21+ messages in thread
* [Intel-xe] ✗ CI.KUnit: failure for fixup! drm/xe: Introduce a new DRM driver for Intel GPUs
  2023-03-16 18:29 [Intel-xe] [PATCH] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs Lucas De Marchi
  2023-03-16 18:32 ` [Intel-xe] ✓ CI.Patch_applied: success for " Patchwork
@ 2023-03-16 18:33 ` Patchwork
  2023-03-16 20:53   ` Lucas De Marchi
  2023-03-17  6:08 ` [Intel-xe] [PATCH] " Mauro Carvalho Chehab
  2 siblings, 1 reply; 21+ messages in thread
From: Patchwork @ 2023-03-16 18:33 UTC (permalink / raw)
To: Lucas De Marchi; +Cc: intel-xe

== Series Details ==

Series: fixup! drm/xe: Introduce a new DRM driver for Intel GPUs
URL   : https://patchwork.freedesktop.org/series/115290/
State : failure

== Summary ==

+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
ERROR:root:`.exit.text' referenced in section `.uml.exitcall.exit' of arch/um/drivers/virtio_uml.o: defined in discarded section `.exit.text' of arch/um/drivers/virtio_uml.o
collect2: error: ld returned 1 exit status
make[2]: *** [../scripts/Makefile.vmlinux:35: vmlinux] Error 1
make[1]: *** [/kernel/Makefile:1264: vmlinux] Error 2
make: *** [Makefile:242: __sub-make] Error 2
[18:32:38] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[18:32:42] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make ARCH=um O=.kunit --jobs=48
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel

^ permalink raw reply	[flat|nested] 21+ messages in thread
* Re: [Intel-xe] ✗ CI.KUnit: failure for fixup! drm/xe: Introduce a new DRM driver for Intel GPUs
  2023-03-16 18:33 ` [Intel-xe] ✗ CI.KUnit: failure " Patchwork
@ 2023-03-16 20:53   ` Lucas De Marchi
  0 siblings, 0 replies; 21+ messages in thread
From: Lucas De Marchi @ 2023-03-16 20:53 UTC (permalink / raw)
To: intel-xe

On Thu, Mar 16, 2023 at 06:33:06PM -0000, Patchwork wrote:
>== Series Details ==
>
>Series: fixup! drm/xe: Introduce a new DRM driver for Intel GPUs
>URL   : https://patchwork.freedesktop.org/series/115290/
>State : failure
>
>== Summary ==
>
>+ trap cleanup EXIT
>+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
>ERROR:root:`.exit.text' referenced in section `.uml.exitcall.exit' of arch/um/drivers/virtio_uml.o: defined in discarded section `.exit.text' of arch/um/drivers/virtio_uml.o
>collect2: error: ld returned 1 exit status
>make[2]: *** [../scripts/Makefile.vmlinux:35: vmlinux] Error 1
>make[1]: *** [/kernel/Makefile:1264: vmlinux] Error 2
>make: *** [Makefile:242: __sub-make] Error 2

passes for me... it seems to be because of the toolchain currently used:
https://lore.kernel.org/all/20230207164156.537378-1-masahiroy@kernel.org/

it's on 6.3-rc1 and 6.2.7

Lucas De Marchi

>
>[18:32:38] Configuring KUnit Kernel ...
>Generating .config ...
>Populating config with:
>$ make ARCH=um O=.kunit olddefconfig
>[18:32:42] Building KUnit Kernel ...
>Populating config with:
>$ make ARCH=um O=.kunit olddefconfig
>Building with:
>$ make ARCH=um O=.kunit --jobs=48
>+ cleanup
>++ stat -c %u:%g /kernel
>+ chown -R 1003:1003 /kernel
>

^ permalink raw reply	[flat|nested] 21+ messages in thread
* Re: [Intel-xe] [PATCH] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs
  2023-03-16 18:29 [Intel-xe] [PATCH] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs Lucas De Marchi
  2023-03-16 18:32 ` [Intel-xe] ✓ CI.Patch_applied: success for " Patchwork
  2023-03-16 18:33 ` [Intel-xe] ✗ CI.KUnit: failure " Patchwork
@ 2023-03-17  6:08 ` Mauro Carvalho Chehab
  2 siblings, 0 replies; 21+ messages in thread
From: Mauro Carvalho Chehab @ 2023-03-17 6:08 UTC (permalink / raw)
To: Lucas De Marchi, intel-xe
Cc: thomas.hellstrom, mauro.chehab, maarten.lankhorst

On 3/16/23 19:29, Lucas De Marchi wrote:
> Introduced with the 6.2 rebase due to
> commit 000458b5966f ("drm: Only select I2C_ALGOBIT for drivers that
> actually need it"). Make a similar selection when CONFIG_DRM_XE_DISPLAY
> is enabled. Also, provide this as a fixup-only commit, to be squashed in
> the next rebase. With this, the following command works again:
> 
> 	./tools/testing/kunit/kunit.py build \
> 		--kunitconfig drivers/gpu/drm/xe/.kunitconfig
> 
> Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>

Yeah, selecting I2C and I2C_ALGOBIT is something that needs to be done
on each driver. Doing such kind of things subsystem-wide doesn't sound
right.

So, LGTM.

Reviewed-by: Mauro Carvalho Chehab <mchehab@kernel.org>

> ---
>  drivers/gpu/drm/xe/Kconfig | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/drivers/gpu/drm/xe/Kconfig b/drivers/gpu/drm/xe/Kconfig
> index 4684e99549d3..aeaf3ce19c4f 100644
> --- a/drivers/gpu/drm/xe/Kconfig
> +++ b/drivers/gpu/drm/xe/Kconfig
> @@ -44,6 +44,8 @@ config DRM_XE
>  config DRM_XE_DISPLAY
>  	bool "Enable display support"
>  	depends on DRM_XE && EXPERT
> +	select I2C
> +	select I2C_ALGOBIT
>  	default y
>  	help
>  	  Disable this option only if you want to compile out display support.

^ permalink raw reply	[flat|nested] 21+ messages in thread
* [Intel-xe] [PATCH 2/2] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs
@ 2023-05-01 19:03 Rodrigo Vivi
  2023-05-01 19:37 ` [Intel-xe] [PATCH] " Rodrigo Vivi
  0 siblings, 1 reply; 21+ messages in thread
From: Rodrigo Vivi @ 2023-05-01 19:03 UTC (permalink / raw)
To: intel-xe; +Cc: Rodrigo Vivi

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/xe/Makefile      |   1 +
 drivers/gpu/drm/xe/xe_pt.c       | 110 +++++++++++++++----------------
 drivers/gpu/drm/xe/xe_pt_types.h |   4 +-
 3 files changed, 56 insertions(+), 59 deletions(-)

diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index 74a84080f242..b84e191ba14f 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -73,6 +73,7 @@ xe-y += xe_bb.o \
 	xe_pm.o \
 	xe_preempt_fence.o \
 	xe_pt.o \
+	xe_pt_walk.o \
 	xe_query.o \
 	xe_reg_sr.o \
 	xe_reg_whitelist.o \
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 4ee5ea2cabc9..f15282996c3b 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -5,14 +5,13 @@
 
 #include "xe_pt.h"
 
-#include <drm/drm_pt_walk.h>
-
 #include "xe_bo.h"
 #include "xe_device.h"
 #include "xe_gt.h"
 #include "xe_gt_tlb_invalidation.h"
 #include "xe_migrate.h"
 #include "xe_pt_types.h"
+#include "xe_pt_walk.h"
 #include "xe_res_cursor.h"
 #include "xe_trace.h"
 #include "xe_ttm_stolen_mgr.h"
@@ -20,8 +19,8 @@
 
 struct xe_pt_dir {
 	struct xe_pt pt;
 
-	/** @dir: Directory structure for the drm_pt_walk functionality */
-	struct drm_pt_dir dir;
+	/** @dir: Directory structure for the xe_pt_walk functionality */
+	struct xe_ptw_dir dir;
 };
 
 #if IS_ENABLED(CONFIG_DRM_XE_DEBUG_VM)
@@ -44,7 +43,7 @@ static struct xe_pt_dir *as_xe_pt_dir(struct xe_pt *pt)
 
 static struct xe_pt *xe_pt_entry(struct xe_pt_dir *pt_dir, unsigned int index)
 {
-	return container_of(pt_dir->dir.entries[index], struct xe_pt, drm);
+	return container_of(pt_dir->dir.entries[index], struct xe_pt, base);
 }
 
 /**
@@ -211,7 +210,7 @@ struct xe_pt *xe_pt_create(struct xe_vm *vm, struct xe_gt *gt,
 	int err;
 
 	size = !level ? sizeof(struct xe_pt) : sizeof(struct xe_pt_dir) +
-		XE_PDES * sizeof(struct drm_pt *);
+		XE_PDES * sizeof(struct xe_ptw *);
 	pt = kzalloc(size, GFP_KERNEL);
 	if (!pt)
 		return ERR_PTR(-ENOMEM);
@@ -227,7 +226,7 @@ struct xe_pt *xe_pt_create(struct xe_vm *vm, struct xe_gt *gt,
 	}
 	pt->bo = bo;
 	pt->level = level;
-	pt->drm.dir = level ? &as_xe_pt_dir(pt)->dir : NULL;
+	pt->base.dir = level ? &as_xe_pt_dir(pt)->dir : NULL;
 
 	XE_BUG_ON(level > XE_VM_MAX_LEVEL);
 
@@ -404,8 +403,8 @@ struct xe_pt_update {
 };
 
 struct xe_pt_stage_bind_walk {
-	/** drm: The base class. */
-	struct drm_pt_walk drm;
+	/** base: The base class. */
+	struct xe_pt_walk base;
 
 	/* Input parameters for the walk */
 	/** @vm: The vm we're building for. */
@@ -532,7 +531,7 @@ xe_pt_insert_entry(struct xe_pt_stage_bind_walk *xe_walk, struct xe_pt *parent,
 		struct iosys_map *map = &parent->bo->vmap;
 
 		if (unlikely(xe_child))
-			parent->drm.dir->entries[offset] = &xe_child->drm;
+			parent->base.dir->entries[offset] = &xe_child->base;
 
 		xe_pt_write(xe_walk->vm->xe, map, offset, pte);
 		parent->num_live++;
@@ -556,7 +555,7 @@ static bool xe_pt_hugepte_possible(u64 addr, u64 next, unsigned int level,
 	u64 size, dma;
 
 	/* Does the virtual range requested cover a huge pte? */
-	if (!drm_pt_covers(addr, next, level, &xe_walk->drm))
+	if (!xe_pt_covers(addr, next, level, &xe_walk->base))
 		return false;
 
 	/* Does the DMA segment cover the whole pte? */
@@ -618,15 +617,15 @@ xe_pt_is_pte_ps64K(u64 addr, u64 next, struct xe_pt_stage_bind_walk *xe_walk)
 }
 
 static int
-xe_pt_stage_bind_entry(struct drm_pt *parent, pgoff_t offset,
+xe_pt_stage_bind_entry(struct xe_ptw *parent, pgoff_t offset,
 		       unsigned int level, u64 addr, u64 next,
-		       struct drm_pt **child,
+		       struct xe_ptw **child,
 		       enum page_walk_action *action,
-		       struct drm_pt_walk *walk)
+		       struct xe_pt_walk *walk)
 {
 	struct xe_pt_stage_bind_walk *xe_walk =
-		container_of(walk, typeof(*xe_walk), drm);
-	struct xe_pt *xe_parent = container_of(parent, typeof(*xe_parent), drm);
+		container_of(walk, typeof(*xe_walk), base);
+	struct xe_pt *xe_parent = container_of(parent, typeof(*xe_parent), base);
 	struct xe_pt *xe_child;
 	bool covers;
 	int ret = 0;
@@ -675,7 +674,7 @@ xe_pt_stage_bind_entry(struct drm_pt *parent, pgoff_t offset,
 		xe_walk->l0_end_addr = next;
 	}
 
-	covers = drm_pt_covers(addr, next, level, &xe_walk->drm);
+	covers = xe_pt_covers(addr, next, level, &xe_walk->base);
 	if (covers || !*child) {
 		u64 flags = 0;
 
@@ -689,7 +688,7 @@ xe_pt_stage_bind_entry(struct drm_pt *parent, pgoff_t offset,
 		if (!covers)
 			xe_pt_populate_empty(xe_walk->gt, xe_walk->vm, xe_child);
 
-		*child = &xe_child->drm;
+		*child = &xe_child->base;
 
 		/*
 		 * Prefer the compact pagetable layout for L0 if possible.
@@ -712,7 +711,7 @@ xe_pt_stage_bind_entry(struct drm_pt *parent, pgoff_t offset,
 	return ret;
 }
 
-static const struct drm_pt_walk_ops xe_pt_stage_bind_ops = {
+static const struct xe_pt_walk_ops xe_pt_stage_bind_ops = {
 	.pt_entry = xe_pt_stage_bind_entry,
 };
 
@@ -742,7 +741,7 @@ xe_pt_stage_bind(struct xe_gt *gt, struct xe_vma *vma,
 	bool is_vram = !xe_vma_is_userptr(vma) && bo && xe_bo_is_vram(bo);
 	struct xe_res_cursor curs;
 	struct xe_pt_stage_bind_walk xe_walk = {
-		.drm = {
+		.base = {
			.ops = &xe_pt_stage_bind_ops,
			.shifts = xe_normal_pt_shifts,
			.max_level = XE_PT_HIGHEST_LEVEL,
@@ -787,8 +786,8 @@ xe_pt_stage_bind(struct xe_gt *gt, struct xe_vma *vma,
 		xe_res_first_sg(xe_bo_get_sg(bo), vma->bo_offset,
 				vma->end - vma->start + 1, &curs);
 
-	ret = drm_pt_walk_range(&pt->drm, pt->level, vma->start, vma->end + 1,
-				&xe_walk.drm);
+	ret = xe_pt_walk_range(&pt->base, pt->level, vma->start, vma->end + 1,
+			       &xe_walk.base);
 
 	*num_entries = xe_walk.wupd.num_used_entries;
 	return ret;
@@ -814,20 +813,17 @@ xe_pt_stage_bind(struct xe_gt *gt, struct xe_vma *vma,
  * be shared page tables also at lower levels, so it adjusts the walk action
  * accordingly.
  *
- * Note that the function is not device-specific so could be made a drm
- * pagewalk helper.
- *
  * Return: true if there were non-shared entries, false otherwise.
  */
 static bool xe_pt_nonshared_offsets(u64 addr, u64 end, unsigned int level,
-				    struct drm_pt_walk *walk,
+				    struct xe_pt_walk *walk,
 				    enum page_walk_action *action,
 				    pgoff_t *offset, pgoff_t *end_offset)
 {
 	u64 size = 1ull << walk->shifts[level];
 
-	*offset = drm_pt_offset(addr, level, walk);
-	*end_offset = drm_pt_num_entries(addr, end, level, walk) + *offset;
+	*offset = xe_pt_offset(addr, level, walk);
+	*end_offset = xe_pt_num_entries(addr, end, level, walk) + *offset;
 
 	if (!level)
 		return true;
@@ -851,8 +847,8 @@ static bool xe_pt_nonshared_offsets(u64 addr, u64 end, unsigned int level,
 }
 
 struct xe_pt_zap_ptes_walk {
-	/** @drm: The walk base-class */
-	struct drm_pt_walk drm;
+	/** @base: The walk base-class */
+	struct xe_pt_walk base;
 
 	/* Input parameters for the walk */
 	/** @gt: The gt we're building for */
@@ -863,15 +859,15 @@ struct xe_pt_zap_ptes_walk {
 	bool needs_invalidate;
 };
 
-static int xe_pt_zap_ptes_entry(struct drm_pt *parent, pgoff_t offset,
+static int xe_pt_zap_ptes_entry(struct xe_ptw *parent, pgoff_t offset,
 				unsigned int level, u64 addr, u64 next,
-				struct drm_pt **child,
+				struct xe_ptw **child,
 				enum page_walk_action *action,
-				struct drm_pt_walk *walk)
+				struct xe_pt_walk *walk)
 {
 	struct xe_pt_zap_ptes_walk *xe_walk =
-		container_of(walk, typeof(*xe_walk), drm);
-	struct xe_pt *xe_child = container_of(*child, typeof(*xe_child), drm);
+		container_of(walk, typeof(*xe_walk), base);
+	struct xe_pt *xe_child = container_of(*child, typeof(*xe_child), base);
 	pgoff_t end_offset;
 
 	XE_BUG_ON(!*child);
@@ -893,7 +889,7 @@ static int xe_pt_zap_ptes_entry(struct drm_pt *parent, pgoff_t offset,
 	return 0;
 }
 
-static const struct drm_pt_walk_ops xe_pt_zap_ptes_ops = {
+static const struct xe_pt_walk_ops xe_pt_zap_ptes_ops = {
 	.pt_entry = xe_pt_zap_ptes_entry,
 };
 
@@ -916,7 +912,7 @@ static const struct drm_pt_walk_ops xe_pt_zap_ptes_ops = {
 bool xe_pt_zap_ptes(struct xe_gt *gt, struct xe_vma *vma)
 {
 	struct xe_pt_zap_ptes_walk xe_walk = {
-		.drm = {
+		.base = {
			.ops = &xe_pt_zap_ptes_ops,
			.shifts = xe_normal_pt_shifts,
			.max_level = XE_PT_HIGHEST_LEVEL,
@@ -928,8 +924,8 @@ bool xe_pt_zap_ptes(struct xe_gt *gt, struct xe_vma *vma)
 	if (!(vma->gt_present & BIT(gt->info.id)))
 		return false;
 
-	(void)drm_pt_walk_shared(&pt->drm, pt->level, vma->start, vma->end + 1,
-				 &xe_walk.drm);
+	(void)xe_pt_walk_shared(&pt->base, pt->level, vma->start, vma->end + 1,
+				&xe_walk.base);
 
 	return xe_walk.needs_invalidate;
 }
@@ -1015,7 +1011,7 @@ static void xe_pt_commit_bind(struct xe_vma *vma,
 				xe_pt_destroy(xe_pt_entry(pt_dir, j_),
 					      vma->vm->flags, deferred);
 
-			pt_dir->dir.entries[j_] = &newpte->drm;
+			pt_dir->dir.entries[j_] = &newpte->base;
 		}
 		kfree(entries[i].pt_entries);
 	}
@@ -1375,8 +1371,8 @@ __xe_pt_bind_vma(struct xe_gt *gt, struct xe_vma *vma, struct xe_engine *e,
 }
 
 struct xe_pt_stage_unbind_walk {
-	/** @drm: The pagewalk base-class. */
-	struct drm_pt_walk drm;
+	/** @base: The pagewalk base-class. */
+	struct xe_pt_walk base;
 
 	/* Input parameters for the walk */
 	/** @gt: The gt we're unbinding from. */
@@ -1404,10 +1400,10 @@ struct xe_pt_stage_unbind_walk {
 static bool xe_pt_check_kill(u64 addr, u64 next, unsigned int level,
 			     const struct xe_pt *child,
 			     enum page_walk_action *action,
-			     struct drm_pt_walk *walk)
+			     struct xe_pt_walk *walk)
 {
 	struct xe_pt_stage_unbind_walk *xe_walk =
-		container_of(walk, typeof(*xe_walk), drm);
+		container_of(walk, typeof(*xe_walk), base);
 	unsigned int shift = walk->shifts[level];
 	u64 size = 1ull << shift;
 
@@ -1428,13 +1424,13 @@ static bool xe_pt_check_kill(u64 addr, u64 next, unsigned int level,
 	return false;
 }
 
-static int xe_pt_stage_unbind_entry(struct drm_pt *parent, pgoff_t offset,
+static int xe_pt_stage_unbind_entry(struct xe_ptw *parent, pgoff_t offset,
 				    unsigned int level, u64 addr, u64 next,
-				    struct drm_pt **child,
+				    struct xe_ptw **child,
 				    enum page_walk_action *action,
-				    struct drm_pt_walk *walk)
+				    struct xe_pt_walk *walk)
 {
-	struct xe_pt *xe_child = container_of(*child, typeof(*xe_child), drm);
+	struct xe_pt *xe_child = container_of(*child, typeof(*xe_child), base);
 
 	XE_BUG_ON(!*child);
 	XE_BUG_ON(!level && xe_child->is_compact);
@@ -1445,15 +1441,15 @@ static int xe_pt_stage_unbind_entry(struct drm_pt *parent, pgoff_t offset,
 }
 
 static int
-xe_pt_stage_unbind_post_descend(struct drm_pt *parent, pgoff_t offset,
+xe_pt_stage_unbind_post_descend(struct xe_ptw *parent, pgoff_t offset,
 				unsigned int level, u64 addr, u64 next,
-				struct drm_pt **child,
+				struct xe_ptw **child,
 				enum page_walk_action *action,
-				struct drm_pt_walk *walk)
+				struct xe_pt_walk *walk)
 {
 	struct xe_pt_stage_unbind_walk *xe_walk =
-		container_of(walk, typeof(*xe_walk), drm);
-	struct xe_pt *xe_child = container_of(*child, typeof(*xe_child), drm);
+		container_of(walk, typeof(*xe_walk), base);
+	struct xe_pt *xe_child = container_of(*child, typeof(*xe_child), base);
 	pgoff_t end_offset;
 	u64 size = 1ull << walk->shifts[--level];
 
@@ -1477,7 +1473,7 @@ xe_pt_stage_unbind_post_descend(struct drm_pt *parent, pgoff_t offset,
 	return 0;
 }
 
-static const struct drm_pt_walk_ops xe_pt_stage_unbind_ops = {
+static const struct xe_pt_walk_ops xe_pt_stage_unbind_ops = {
 	.pt_entry = xe_pt_stage_unbind_entry,
 	.pt_post_descend = xe_pt_stage_unbind_post_descend,
 };
@@ -1500,7 +1496,7 @@ static unsigned int xe_pt_stage_unbind(struct xe_gt *gt, struct xe_vma *vma,
 				       struct xe_vm_pgtable_update *entries)
 {
 	struct xe_pt_stage_unbind_walk xe_walk = {
-		.drm = {
+		.base = {
			.ops = &xe_pt_stage_unbind_ops,
			.shifts = xe_normal_pt_shifts,
			.max_level = XE_PT_HIGHEST_LEVEL,
@@ -1512,8 +1508,8 @@ static unsigned int xe_pt_stage_unbind(struct xe_gt *gt, struct xe_vma *vma,
 	};
 	struct xe_pt *pt = vma->vm->pt_root[gt->info.id];
 
-	(void)drm_pt_walk_shared(&pt->drm, pt->level, vma->start, vma->end + 1,
-				 &xe_walk.drm);
+	(void)xe_pt_walk_shared(&pt->base, pt->level, vma->start, vma->end + 1,
+				&xe_walk.base);
 
 	return xe_walk.wupd.num_used_entries;
 }
diff --git a/drivers/gpu/drm/xe/xe_pt_types.h b/drivers/gpu/drm/xe/xe_pt_types.h
index 2bb5d0e319b7..2ed64c0a4485 100644
--- a/drivers/gpu/drm/xe/xe_pt_types.h
+++ b/drivers/gpu/drm/xe/xe_pt_types.h
@@ -6,7 +6,7 @@
 #ifndef _XE_PT_TYPES_H_
 #define _XE_PT_TYPES_H_
 
-#include <drm/drm_pt_walk.h>
+#include "xe_pt_walk.h"
 
 enum xe_cache_level {
 	XE_CACHE_NONE,
@@ -17,7 +17,7 @@ enum xe_cache_level {
 #define XE_VM_MAX_LEVEL 4
 
 struct xe_pt {
-	struct drm_pt drm;
+	struct xe_ptw base;
 	struct xe_bo *bo;
 	unsigned int level;
 	unsigned int num_live;
-- 
2.39.2

^ permalink raw reply related	[flat|nested] 21+ messages in thread
* [Intel-xe] [PATCH] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs
  2023-05-01 19:03 [Intel-xe] [PATCH 2/2] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs Rodrigo Vivi
@ 2023-05-01 19:37 ` Rodrigo Vivi
  0 siblings, 0 replies; 21+ messages in thread
From: Rodrigo Vivi @ 2023-05-01 19:37 UTC (permalink / raw)
To: intel-xe; +Cc: Rodrigo Vivi

---
 drivers/gpu/drm/xe/Makefile      |   1 +
 drivers/gpu/drm/xe/xe_pt.c       | 110 ++++++++++-----------
 drivers/gpu/drm/xe/xe_pt_types.h |   4 +-
 drivers/gpu/drm/xe/xe_pt_walk.c  | 160 ++++++++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_pt_walk.h  | 161 +++++++++++++++++++++++++++++++
 5 files changed, 377 insertions(+), 59 deletions(-)
 create mode 100644 drivers/gpu/drm/xe/xe_pt_walk.c
 create mode 100644 drivers/gpu/drm/xe/xe_pt_walk.h

diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index 74a84080f242..b84e191ba14f 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -73,6 +73,7 @@ xe-y += xe_bb.o \
 	xe_pm.o \
 	xe_preempt_fence.o \
 	xe_pt.o \
+	xe_pt_walk.o \
 	xe_query.o \
 	xe_reg_sr.o \
 	xe_reg_whitelist.o \
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 4ee5ea2cabc9..f15282996c3b 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -5,14 +5,13 @@
 
 #include "xe_pt.h"
 
-#include <drm/drm_pt_walk.h>
-
 #include "xe_bo.h"
 #include "xe_device.h"
 #include "xe_gt.h"
 #include "xe_gt_tlb_invalidation.h"
 #include "xe_migrate.h"
 #include "xe_pt_types.h"
+#include "xe_pt_walk.h"
 #include "xe_res_cursor.h"
 #include "xe_trace.h"
 #include "xe_ttm_stolen_mgr.h"
@@ -20,8 +19,8 @@
 
 struct xe_pt_dir {
 	struct xe_pt pt;
 
-	/** @dir: Directory structure for the drm_pt_walk functionality */
-	struct drm_pt_dir dir;
+	/** @dir: Directory structure for the xe_pt_walk functionality */
+	struct xe_ptw_dir dir;
 };
 
 #if IS_ENABLED(CONFIG_DRM_XE_DEBUG_VM)
@@ -44,7 +43,7 @@ static struct xe_pt_dir *as_xe_pt_dir(struct xe_pt *pt)
 
 static struct xe_pt *xe_pt_entry(struct xe_pt_dir *pt_dir, unsigned int index)
 {
-	return container_of(pt_dir->dir.entries[index], struct xe_pt, drm);
+	return container_of(pt_dir->dir.entries[index], struct xe_pt, base);
 }
 
 /**
@@ -211,7 +210,7 @@ struct xe_pt *xe_pt_create(struct xe_vm *vm, struct xe_gt *gt,
 	int err;
 
 	size = !level ? sizeof(struct xe_pt) : sizeof(struct xe_pt_dir) +
-		XE_PDES * sizeof(struct drm_pt *);
+		XE_PDES * sizeof(struct xe_ptw *);
 	pt = kzalloc(size, GFP_KERNEL);
 	if (!pt)
 		return ERR_PTR(-ENOMEM);
@@ -227,7 +226,7 @@ struct xe_pt *xe_pt_create(struct xe_vm *vm, struct xe_gt *gt,
 	}
 	pt->bo = bo;
 	pt->level = level;
-	pt->drm.dir = level ? &as_xe_pt_dir(pt)->dir : NULL;
+	pt->base.dir = level ? &as_xe_pt_dir(pt)->dir : NULL;
 
 	XE_BUG_ON(level > XE_VM_MAX_LEVEL);
 
@@ -404,8 +403,8 @@ struct xe_pt_update {
 };
 
 struct xe_pt_stage_bind_walk {
-	/** drm: The base class. */
-	struct drm_pt_walk drm;
+	/** base: The base class. */
+	struct xe_pt_walk base;
 
 	/* Input parameters for the walk */
 	/** @vm: The vm we're building for. */
@@ -532,7 +531,7 @@ xe_pt_insert_entry(struct xe_pt_stage_bind_walk *xe_walk, struct xe_pt *parent,
 		struct iosys_map *map = &parent->bo->vmap;
 
 		if (unlikely(xe_child))
-			parent->drm.dir->entries[offset] = &xe_child->drm;
+			parent->base.dir->entries[offset] = &xe_child->base;
 
 		xe_pt_write(xe_walk->vm->xe, map, offset, pte);
 		parent->num_live++;
@@ -556,7 +555,7 @@ static bool xe_pt_hugepte_possible(u64 addr, u64 next, unsigned int level,
 	u64 size, dma;
 
 	/* Does the virtual range requested cover a huge pte? */
-	if (!drm_pt_covers(addr, next, level, &xe_walk->drm))
+	if (!xe_pt_covers(addr, next, level, &xe_walk->base))
 		return false;
 
 	/* Does the DMA segment cover the whole pte? */
@@ -618,15 +617,15 @@ xe_pt_is_pte_ps64K(u64 addr, u64 next, struct xe_pt_stage_bind_walk *xe_walk)
 }
 
 static int
-xe_pt_stage_bind_entry(struct drm_pt *parent, pgoff_t offset,
+xe_pt_stage_bind_entry(struct xe_ptw *parent, pgoff_t offset,
 		       unsigned int level, u64 addr, u64 next,
-		       struct drm_pt **child,
+		       struct xe_ptw **child,
 		       enum page_walk_action *action,
-		       struct drm_pt_walk *walk)
+		       struct xe_pt_walk *walk)
 {
 	struct xe_pt_stage_bind_walk *xe_walk =
-		container_of(walk, typeof(*xe_walk), drm);
-	struct xe_pt *xe_parent = container_of(parent, typeof(*xe_parent), drm);
+		container_of(walk, typeof(*xe_walk), base);
+	struct xe_pt *xe_parent = container_of(parent, typeof(*xe_parent), base);
 	struct xe_pt *xe_child;
 	bool covers;
 	int ret = 0;
@@ -675,7 +674,7 @@ xe_pt_stage_bind_entry(struct drm_pt *parent, pgoff_t offset,
 		xe_walk->l0_end_addr = next;
 	}
 
-	covers = drm_pt_covers(addr, next, level, &xe_walk->drm);
+	covers = xe_pt_covers(addr, next, level, &xe_walk->base);
 	if (covers || !*child) {
 		u64 flags = 0;
 
@@ -689,7 +688,7 @@ xe_pt_stage_bind_entry(struct drm_pt *parent, pgoff_t offset,
 		if (!covers)
 			xe_pt_populate_empty(xe_walk->gt, xe_walk->vm, xe_child);
 
-		*child = &xe_child->drm;
+		*child = &xe_child->base;
 
 		/*
 		 * Prefer the compact pagetable layout for L0 if possible.
@@ -712,7 +711,7 @@ xe_pt_stage_bind_entry(struct drm_pt *parent, pgoff_t offset,
 	return ret;
 }
 
-static const struct drm_pt_walk_ops xe_pt_stage_bind_ops = {
+static const struct xe_pt_walk_ops xe_pt_stage_bind_ops = {
 	.pt_entry = xe_pt_stage_bind_entry,
 };
 
@@ -742,7 +741,7 @@ xe_pt_stage_bind(struct xe_gt *gt, struct xe_vma *vma,
 	bool is_vram = !xe_vma_is_userptr(vma) && bo && xe_bo_is_vram(bo);
 	struct xe_res_cursor curs;
 	struct xe_pt_stage_bind_walk xe_walk = {
-		.drm = {
+		.base = {
			.ops = &xe_pt_stage_bind_ops,
			.shifts = xe_normal_pt_shifts,
			.max_level = XE_PT_HIGHEST_LEVEL,
@@ -787,8 +786,8 @@ xe_pt_stage_bind(struct xe_gt *gt, struct xe_vma *vma,
 		xe_res_first_sg(xe_bo_get_sg(bo), vma->bo_offset,
 				vma->end - vma->start + 1, &curs);
 
-	ret = drm_pt_walk_range(&pt->drm, pt->level, vma->start, vma->end + 1,
-				&xe_walk.drm);
+	ret = xe_pt_walk_range(&pt->base, pt->level, vma->start, vma->end + 1,
+			       &xe_walk.base);
 
 	*num_entries = xe_walk.wupd.num_used_entries;
 	return ret;
@@ -814,20 +813,17 @@ xe_pt_stage_bind(struct xe_gt *gt, struct xe_vma *vma,
  * be shared page tables also at lower levels, so it adjusts the walk action
  * accordingly.
  *
- * Note that the function is not device-specific so could be made a drm
- * pagewalk helper.
- *
 * Return: true if there were non-shared entries, false otherwise.
  */
 static bool xe_pt_nonshared_offsets(u64 addr, u64 end, unsigned int level,
-				    struct drm_pt_walk *walk,
+				    struct xe_pt_walk *walk,
 				    enum page_walk_action *action,
 				    pgoff_t *offset, pgoff_t *end_offset)
 {
 	u64 size = 1ull << walk->shifts[level];
 
-	*offset = drm_pt_offset(addr, level, walk);
-	*end_offset = drm_pt_num_entries(addr, end, level, walk) + *offset;
+	*offset = xe_pt_offset(addr, level, walk);
+	*end_offset = xe_pt_num_entries(addr, end, level, walk) + *offset;
 
 	if (!level)
 		return true;
@@ -851,8 +847,8 @@ static bool xe_pt_nonshared_offsets(u64 addr, u64 end, unsigned int level,
 }
 
 struct xe_pt_zap_ptes_walk {
-	/** @drm: The walk base-class */
-	struct drm_pt_walk drm;
+	/** @base: The walk base-class */
+	struct xe_pt_walk base;
 
 	/* Input parameters for the walk */
 	/** @gt: The gt we're building for */
@@ -863,15 +859,15 @@ struct xe_pt_zap_ptes_walk {
 	bool needs_invalidate;
 };
 
-static int xe_pt_zap_ptes_entry(struct drm_pt *parent, pgoff_t offset,
+static int xe_pt_zap_ptes_entry(struct xe_ptw *parent, pgoff_t offset,
 				unsigned int level, u64 addr, u64 next,
-				struct drm_pt **child,
+				struct xe_ptw **child,
 				enum page_walk_action *action,
-				struct drm_pt_walk *walk)
+				struct xe_pt_walk *walk)
 {
 	struct xe_pt_zap_ptes_walk *xe_walk =
-		container_of(walk, typeof(*xe_walk), drm);
-	struct xe_pt *xe_child = container_of(*child, typeof(*xe_child), drm);
+		container_of(walk, typeof(*xe_walk), base);
+	struct xe_pt *xe_child = container_of(*child, typeof(*xe_child), base);
 	pgoff_t end_offset;
 
 	XE_BUG_ON(!*child);
@@ -893,7 +889,7 @@ static int xe_pt_zap_ptes_entry(struct drm_pt *parent, pgoff_t offset,
 	return 0;
 }
 
-static const struct drm_pt_walk_ops xe_pt_zap_ptes_ops = {
+static const struct xe_pt_walk_ops xe_pt_zap_ptes_ops = {
 	.pt_entry = xe_pt_zap_ptes_entry,
 };
 
@@ -916,7 +912,7 @@ static const struct drm_pt_walk_ops xe_pt_zap_ptes_ops = {
 bool xe_pt_zap_ptes(struct xe_gt *gt, struct xe_vma *vma)
 {
 	struct xe_pt_zap_ptes_walk xe_walk = {
-		.drm = {
+		.base = {
			.ops = &xe_pt_zap_ptes_ops,
			.shifts = xe_normal_pt_shifts,
			.max_level = XE_PT_HIGHEST_LEVEL,
@@ -928,8 +924,8 @@ bool xe_pt_zap_ptes(struct xe_gt *gt, struct xe_vma *vma)
 	if (!(vma->gt_present & BIT(gt->info.id)))
 		return false;
 
-	(void)drm_pt_walk_shared(&pt->drm, pt->level, vma->start, vma->end + 1,
-				 &xe_walk.drm);
+	(void)xe_pt_walk_shared(&pt->base, pt->level, vma->start, vma->end + 1,
+				&xe_walk.base);
 
 	return xe_walk.needs_invalidate;
 }
@@ -1015,7 +1011,7 @@ static void xe_pt_commit_bind(struct xe_vma *vma,
 				xe_pt_destroy(xe_pt_entry(pt_dir, j_),
 					      vma->vm->flags, deferred);
 
-			pt_dir->dir.entries[j_] = &newpte->drm;
+			pt_dir->dir.entries[j_] = &newpte->base;
 		}
 		kfree(entries[i].pt_entries);
 	}
@@ -1375,8 +1371,8 @@ __xe_pt_bind_vma(struct xe_gt *gt, struct xe_vma *vma, struct xe_engine *e,
 }
 
 struct xe_pt_stage_unbind_walk {
-	/** @drm: The pagewalk base-class. */
-	struct drm_pt_walk drm;
+	/** @base: The pagewalk base-class. */
+	struct xe_pt_walk base;
 
 	/* Input parameters for the walk */
 	/** @gt: The gt we're unbinding from. */
@@ -1404,10 +1400,10 @@ struct xe_pt_stage_unbind_walk {
 static bool xe_pt_check_kill(u64 addr, u64 next, unsigned int level,
 			     const struct xe_pt *child,
 			     enum page_walk_action *action,
-			     struct drm_pt_walk *walk)
+			     struct xe_pt_walk *walk)
 {
 	struct xe_pt_stage_unbind_walk *xe_walk =
-		container_of(walk, typeof(*xe_walk), drm);
+		container_of(walk, typeof(*xe_walk), base);
 	unsigned int shift = walk->shifts[level];
 	u64 size = 1ull << shift;
 
@@ -1428,13 +1424,13 @@ static bool xe_pt_check_kill(u64 addr, u64 next, unsigned int level,
 	return false;
 }
 
-static int xe_pt_stage_unbind_entry(struct drm_pt *parent, pgoff_t offset,
+static int xe_pt_stage_unbind_entry(struct xe_ptw *parent, pgoff_t offset,
 				    unsigned int level, u64 addr, u64 next,
-				    struct drm_pt **child,
+				    struct xe_ptw **child,
 				    enum page_walk_action *action,
-				    struct drm_pt_walk *walk)
+				    struct xe_pt_walk *walk)
 {
-	struct xe_pt *xe_child = container_of(*child, typeof(*xe_child), drm);
+	struct xe_pt *xe_child = container_of(*child, typeof(*xe_child), base);
 
 	XE_BUG_ON(!*child);
 	XE_BUG_ON(!level && xe_child->is_compact);
@@ -1445,15 +1441,15 @@ static int xe_pt_stage_unbind_entry(struct drm_pt *parent, pgoff_t offset,
 }
 
 static int
-xe_pt_stage_unbind_post_descend(struct drm_pt *parent, pgoff_t offset,
+xe_pt_stage_unbind_post_descend(struct xe_ptw *parent, pgoff_t offset,
 				unsigned int level, u64 addr, u64 next,
-				struct drm_pt **child,
+				struct xe_ptw **child,
 				enum page_walk_action *action,
-				struct drm_pt_walk *walk)
+				struct xe_pt_walk *walk)
 {
 	struct xe_pt_stage_unbind_walk *xe_walk =
-		container_of(walk, typeof(*xe_walk), drm);
-	struct xe_pt *xe_child = container_of(*child, typeof(*xe_child), drm);
+		container_of(walk, typeof(*xe_walk), base);
+	struct xe_pt *xe_child = container_of(*child, typeof(*xe_child), base);
 	pgoff_t end_offset;
 	u64 size = 1ull << walk->shifts[--level];
 
@@ -1477,7 +1473,7 @@ xe_pt_stage_unbind_post_descend(struct drm_pt *parent, pgoff_t offset,
 	return 0;
 }
 
-static const struct drm_pt_walk_ops xe_pt_stage_unbind_ops = {
+static const struct xe_pt_walk_ops xe_pt_stage_unbind_ops = {
 	.pt_entry = xe_pt_stage_unbind_entry,
 	.pt_post_descend = xe_pt_stage_unbind_post_descend,
 };
@@ -1500,7 +1496,7 @@ static unsigned int xe_pt_stage_unbind(struct xe_gt *gt, struct xe_vma *vma,
 				       struct xe_vm_pgtable_update *entries)
 {
 	struct xe_pt_stage_unbind_walk xe_walk = {
-		.drm = {
+		.base = {
			.ops = &xe_pt_stage_unbind_ops,
			.shifts = xe_normal_pt_shifts,
			.max_level = XE_PT_HIGHEST_LEVEL,
@@ -1512,8 +1508,8 @@ static unsigned int xe_pt_stage_unbind(struct xe_gt *gt, struct xe_vma *vma,
 	};
 	struct xe_pt *pt = vma->vm->pt_root[gt->info.id];
 
-	(void)drm_pt_walk_shared(&pt->drm, pt->level, vma->start, vma->end + 1,
-				 &xe_walk.drm);
+	(void)xe_pt_walk_shared(&pt->base, pt->level, vma->start, vma->end + 1,
+				&xe_walk.base);
 
 	return xe_walk.wupd.num_used_entries;
 }
diff --git a/drivers/gpu/drm/xe/xe_pt_types.h b/drivers/gpu/drm/xe/xe_pt_types.h
index 2bb5d0e319b7..2ed64c0a4485 100644
--- a/drivers/gpu/drm/xe/xe_pt_types.h
+++ b/drivers/gpu/drm/xe/xe_pt_types.h
@@ -6,7 +6,7 @@
 #ifndef _XE_PT_TYPES_H_
 #define _XE_PT_TYPES_H_
 
-#include <drm/drm_pt_walk.h>
+#include "xe_pt_walk.h"
 
 enum xe_cache_level {
 	XE_CACHE_NONE,
@@ -17,7 +17,7 @@ enum xe_cache_level {
 #define XE_VM_MAX_LEVEL 4
 
 struct xe_pt {
-	struct drm_pt drm;
+	struct xe_ptw base;
 	struct xe_bo *bo;
 	unsigned int level;
 	unsigned int num_live;
diff --git a/drivers/gpu/drm/xe/xe_pt_walk.c b/drivers/gpu/drm/xe/xe_pt_walk.c
new file mode 100644
index 000000000000..0def89af4372
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_pt_walk.c
@@ -0,0 +1,160 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright © 2022 Intel Corporation
+ */
+#include "xe_pt_walk.h"
+
+/**
+ * DOC: GPU page-table tree walking.
+ * The utilities in this file are similar to the CPU page-table walk
+ * utilities in mm/pagewalk.c. The main difference is that we distinguish
+ * the various levels of a page-table tree with an unsigned integer rather
+ * than by name. 0 is the lowest level, and page-tables with level 0 can
+ * not be directories pointing to lower levels, whereas all other levels
+ * can. The user of the utilities determines the highest level.
+ *
+ * Nomenclature:
+ * Each struct xe_ptw, regardless of level is referred to as a page table, and
+ * multiple page tables typically form a page table tree with page tables at
+ * intermediate levels being page directories pointing at page tables at lower
+ * levels. A shared page table for a given address range is a page-table which
+ * is neither fully within nor fully outside the address range and that can
+ * thus be shared by two or more address ranges.
+ *
+ * Please keep this code generic so that it can used as a drm-wide page-
+ * table walker should other drivers find use for it.
+ */
+static u64 xe_pt_addr_end(u64 addr, u64 end, unsigned int level,
+			  const struct xe_pt_walk *walk)
+{
+	u64 size = 1ull << walk->shifts[level];
+	u64 tmp = round_up(addr + 1, size);
+
+	return min_t(u64, tmp, end);
+}
+
+static bool xe_pt_next(pgoff_t *offset, u64 *addr, u64 next, u64 end,
+		       unsigned int level, const struct xe_pt_walk *walk)
+{
+	pgoff_t step = 1;
+
+	/* Shared pt walk skips to the last pagetable */
+	if (unlikely(walk->shared_pt_mode)) {
+		unsigned int shift = walk->shifts[level];
+		u64 skip_to = round_down(end, 1ull << shift);
+
+		if (skip_to > next) {
+			step += (skip_to - next) >> shift;
+			next = skip_to;
+		}
+	}
+
+	*addr = next;
+	*offset += step;
+
+	return next != end;
+}
+
+/**
+ * xe_pt_walk_range() - Walk a range of a gpu page table tree with callbacks
+ * for each page-table entry in all levels.
+ * @parent: The root page table for walk start.
+ * @level: The root page table level.
+ * @addr: Virtual address start.
+ * @end: Virtual address end + 1.
+ * @walk: Walk info.
+ *
+ * Similar to the CPU page-table walker, this is a helper to walk
+ * a gpu page table and call a provided callback function for each entry.
+ *
+ * Return: 0 on success, negative error code on error. The error is
+ * propagated from the callback and on error the walk is terminated.
+ */
+int xe_pt_walk_range(struct xe_ptw *parent, unsigned int level,
+		     u64 addr, u64 end, struct xe_pt_walk *walk)
+{
+	pgoff_t offset = xe_pt_offset(addr, level, walk);
+	struct xe_ptw **entries = parent->dir ? parent->dir->entries : NULL;
+	const struct xe_pt_walk_ops *ops = walk->ops;
+	enum page_walk_action action;
+	struct xe_ptw *child;
+	int err = 0;
+	u64 next;
+
+	do {
+		next = xe_pt_addr_end(addr, end, level, walk);
+		if (walk->shared_pt_mode && xe_pt_covers(addr, next, level,
+							 walk))
+			continue;
+again:
+		action = ACTION_SUBTREE;
+		child = entries ? entries[offset] : NULL;
+		err = ops->pt_entry(parent, offset, level, addr, next,
+				    &child, &action, walk);
+		if (err)
+			break;
+
+		/* Probably not needed yet for gpu pagetable walk. */
+		if (unlikely(action == ACTION_AGAIN))
+			goto again;
+
+		if (likely(!level || !child || action == ACTION_CONTINUE))
+			continue;
+
+		err = xe_pt_walk_range(child, level - 1, addr, next, walk);
+
+		if (!err && ops->pt_post_descend)
+			err = ops->pt_post_descend(parent, offset, level, addr,
+						   next, &child, &action, walk);
+		if (err)
+			break;
+
+	} while (xe_pt_next(&offset, &addr, next, end, level, walk));
+
+	return err;
+}
+
+/**
+ * xe_pt_walk_shared() - Walk shared page tables of a page-table tree.
+ * @parent: Root page table directory.
+ * @level: Level of the root.
+ * @addr: Start address.
+ * @end: Last address + 1.
+ * @walk: Walk info.
+ *
+ * This function is similar to xe_pt_walk_range() but it skips page tables
+ * that are private to the range.
Since the root (or @parent) page table is + * typically also a shared page table this function is different in that it + * calls the pt_entry callback and the post_descend callback also for the + * root. The root can be detected in the callbacks by checking whether + * parent == *child. + * Walking only the shared page tables is common for unbind-type operations + * where the page-table entries for an address range are cleared or detached + * from the main page-table tree. + * + * Return: 0 on success, negative error code on error: If a callback + * returns an error, the walk will be terminated and the error returned by + * this function. + */ +int xe_pt_walk_shared(struct xe_ptw *parent, unsigned int level, + u64 addr, u64 end, struct xe_pt_walk *walk) +{ + const struct xe_pt_walk_ops *ops = walk->ops; + enum page_walk_action action = ACTION_SUBTREE; + struct xe_ptw *child = parent; + int err; + + walk->shared_pt_mode = true; + err = walk->ops->pt_entry(parent, 0, level + 1, addr, end, + &child, &action, walk); + + if (err || action != ACTION_SUBTREE) + return err; + + err = xe_pt_walk_range(parent, level, addr, end, walk); + if (!err && ops->pt_post_descend) { + err = ops->pt_post_descend(parent, 0, level + 1, addr, end, + &child, &action, walk); + } + return err; +} diff --git a/drivers/gpu/drm/xe/xe_pt_walk.h b/drivers/gpu/drm/xe/xe_pt_walk.h new file mode 100644 index 000000000000..42c51fa601ec --- /dev/null +++ b/drivers/gpu/drm/xe/xe_pt_walk.h @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright © 2022 Intel Corporation + */ +#ifndef __XE_PT_WALK__ +#define __XE_PT_WALK__ + +#include <linux/pagewalk.h> +#include <linux/types.h> + +struct xe_ptw_dir; + +/** + * struct xe_ptw - base class for driver pagetable subclassing. + * @dir: Pointer to an array of children if any. + * + * Drivers could subclass this, and if it's a page-directory, typically + * embed the xe_ptw_dir::entries array in the same allocation. 
+ */ +struct xe_ptw { + struct xe_ptw_dir *dir; +}; + +/** + * struct xe_ptw_dir - page directory structure + * @entries: Array holding page directory children. + * + * It is the responsibility of the user to ensure @entries is + * correctly sized. + */ +struct xe_ptw_dir { + struct xe_ptw *entries[0]; +}; + +/** + * struct xe_pt_walk - Embeddable struct for walk parameters + */ +struct xe_pt_walk { + /** @ops: The walk ops used for the pagewalk */ + const struct xe_pt_walk_ops *ops; + /** + * @shifts: Array of page-table entry shifts used for the + * different levels, starting out with the leaf level 0 + * page-shift as the first entry. It's legal for this pointer to be + * changed during the walk. + */ + const u64 *shifts; + /** @max_level: Highest populated level in @sizes */ + unsigned int max_level; + /** + * @shared_pt_mode: Whether to skip all entries that are private + * to the address range and called only for entries that are + * shared with other address ranges. Such entries are referred to + * as shared pagetables. + */ + bool shared_pt_mode; +}; + +/** + * typedef xe_pt_entry_fn - gpu page-table-walk callback-function + * @parent: The parent page.table. + * @offset: The offset (number of entries) into the page table. + * @level: The level of @parent. + * @addr: The virtual address. + * @next: The virtual address for the next call, or end address. + * @child: Pointer to pointer to child page-table at this @offset. The + * function may modify the value pointed to if, for example, allocating a + * child page table. + * @action: The walk action to take upon return. See <linux/pagewalk.h>. + * @walk: The walk parameters. + */ +typedef int (*xe_pt_entry_fn)(struct xe_ptw *parent, pgoff_t offset, + unsigned int level, u64 addr, u64 next, + struct xe_ptw **child, + enum page_walk_action *action, + struct xe_pt_walk *walk); + +/** + * struct xe_pt_walk_ops - Walk callbacks. 
+ */ +struct xe_pt_walk_ops { + /** + * @pt_entry: Callback to be called for each page table entry prior + * to descending to the next level. The returned value of the action + * function parameter is honored. + */ + xe_pt_entry_fn pt_entry; + /** + * @pt_post_descend: Callback to be called for each page table entry + * after return from descending to the next level. The returned value + * of the action function parameter is ignored. + */ + xe_pt_entry_fn pt_post_descend; +}; + +int xe_pt_walk_range(struct xe_ptw *parent, unsigned int level, + u64 addr, u64 end, struct xe_pt_walk *walk); + +int xe_pt_walk_shared(struct xe_ptw *parent, unsigned int level, + u64 addr, u64 end, struct xe_pt_walk *walk); + +/** + * xe_pt_covers - Whether the address range covers an entire entry in @level + * @addr: Start of the range. + * @end: End of range + 1. + * @level: Page table level. + * @walk: Page table walk info. + * + * This function is a helper to aid in determining whether a leaf page table + * entry can be inserted at this @level. + * + * Return: Whether the range provided covers exactly an entry at this level. + */ +static inline bool xe_pt_covers(u64 addr, u64 end, unsigned int level, + const struct xe_pt_walk *walk) +{ + u64 pt_size = 1ull << walk->shifts[level]; + + return end - addr == pt_size && IS_ALIGNED(addr, pt_size); +} + +/** + * xe_pt_num_entries: Number of page-table entries of a given range at this + * level + * @addr: Start address. + * @end: End address. + * @level: Page table level. + * @walk: Walk info. + * + * Return: The number of page table entries at this level between @start and + * @end. + */ +static inline pgoff_t +xe_pt_num_entries(u64 addr, u64 end, unsigned int level, + const struct xe_pt_walk *walk) +{ + u64 pt_size = 1ull << walk->shifts[level]; + + return (round_up(end, pt_size) - round_down(addr, pt_size)) >> + walk->shifts[level]; +} + +/** + * xe_pt_offset: Offset of the page-table entry for a given address. + * @addr: The address. 
+ * @level: Page table level. + * @walk: Walk info. + * + * Return: The page table entry offset for the given address in a + * page table with size indicated by @level. + */ +static inline pgoff_t +xe_pt_offset(u64 addr, unsigned int level, const struct xe_pt_walk *walk) +{ + if (level < walk->max_level) + addr &= ((1ull << walk->shifts[level + 1]) - 1); + + return addr >> walk->shifts[level]; +} + +#endif -- 2.39.2 ^ permalink raw reply related [flat|nested] 21+ messages in thread
* [Intel-xe] [PATCH] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs @ 2023-05-15 15:15 Francois Dugast 2023-05-15 15:32 ` Lucas De Marchi 0 siblings, 1 reply; 21+ messages in thread From: Francois Dugast @ 2023-05-15 15:15 UTC (permalink / raw) To: intel-xe; +Cc: Dugast From: "Dugast, Francois" <francois.dugast@intel.com> The driver contains code under GPL v2 license and code under MIT license. Link: https://www.kernel.org/doc/html/latest/process/license-rules.html Cc: Oded Gabbay <ogabbay@kernel.org> Signed-off-by: Dugast, Francois <francois.dugast@intel.com> --- drivers/gpu/drm/xe/xe_module.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/xe/xe_module.c b/drivers/gpu/drm/xe/xe_module.c index 6860586ce7f8..ae37c229a0b7 100644 --- a/drivers/gpu/drm/xe/xe_module.c +++ b/drivers/gpu/drm/xe/xe_module.c @@ -79,4 +79,4 @@ module_exit(xe_exit); MODULE_AUTHOR("Intel Corporation"); MODULE_DESCRIPTION(DRIVER_DESC); -MODULE_LICENSE("GPL and additional rights"); +MODULE_LICENSE("Dual MIT/GPL"); -- 2.34.1 ^ permalink raw reply related [flat|nested] 21+ messages in thread
* Re: [Intel-xe] [PATCH] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs 2023-05-15 15:15 Francois Dugast @ 2023-05-15 15:32 ` Lucas De Marchi 2023-05-15 21:09 ` Rodrigo Vivi 0 siblings, 1 reply; 21+ messages in thread From: Lucas De Marchi @ 2023-05-15 15:32 UTC (permalink / raw) To: Francois Dugast; +Cc: Dugast, intel-xe On Mon, May 15, 2023 at 03:15:48PM +0000, Francois Dugast wrote: >From: "Dugast, Francois" <francois.dugast@intel.com> > >The driver contains code under GPL v2 license and code under MIT license. that is the wrong reason for dual license > >Link: https://www.kernel.org/doc/html/latest/process/license-rules.html >Cc: Oded Gabbay <ogabbay@kernel.org> >Signed-off-by: Dugast, Francois <francois.dugast@intel.com> >--- > drivers/gpu/drm/xe/xe_module.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > >diff --git a/drivers/gpu/drm/xe/xe_module.c b/drivers/gpu/drm/xe/xe_module.c >index 6860586ce7f8..ae37c229a0b7 100644 >--- a/drivers/gpu/drm/xe/xe_module.c >+++ b/drivers/gpu/drm/xe/xe_module.c >@@ -79,4 +79,4 @@ module_exit(xe_exit); > MODULE_AUTHOR("Intel Corporation"); > > MODULE_DESCRIPTION(DRIVER_DESC); >-MODULE_LICENSE("GPL and additional rights"); >+MODULE_LICENSE("Dual MIT/GPL"); The module itself is GPL, like i915: $ git grep MODULE_LICENSE -- drivers/gpu/drm/i915/ drivers/gpu/drm/i915/gvt/kvmgt.c:MODULE_LICENSE("GPL and additional rights"); drivers/gpu/drm/i915/i915_module.c:MODULE_LICENSE("GPL and additional rights"); GPL can include source files licensed as MIT, but the final license of the module is still GPL, not dual license. Same thing as from the link you provided... The license described in the COPYING file applies to the kernel source as a whole, though individual source files can have a different license which is required to be compatible with the GPL-2.0 ... but applied to the module rather than the whole kernel. The inverse is not true. 
You can't license the whole (module) as MIT since it also contains GPL-licensed files. We could if all the files were dual-licensed, which is not the case. Since several parts of xe is based on i915, it'd be a very grey area and license tracking nightmare. With that, the disclaimer IANAL applies. Lucas De Marchi >-- >2.34.1 > ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [Intel-xe] [PATCH] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs 2023-05-15 15:32 ` Lucas De Marchi @ 2023-05-15 21:09 ` Rodrigo Vivi 2023-05-15 21:16 ` Rodrigo Vivi 0 siblings, 1 reply; 21+ messages in thread From: Rodrigo Vivi @ 2023-05-15 21:09 UTC (permalink / raw) To: Lucas De Marchi; +Cc: Dugast, intel-xe On Mon, May 15, 2023 at 08:32:23AM -0700, Lucas De Marchi wrote: > On Mon, May 15, 2023 at 03:15:48PM +0000, Francois Dugast wrote: > > From: "Dugast, Francois" <francois.dugast@intel.com> > > > > The driver contains code under GPL v2 license and code under MIT license. > > that is the wrong reason for dual license > > > > > Link: https://www.kernel.org/doc/html/latest/process/license-rules.html > > Cc: Oded Gabbay <ogabbay@kernel.org> > > Signed-off-by: Dugast, Francois <francois.dugast@intel.com> > > --- > > drivers/gpu/drm/xe/xe_module.c | 2 +- > > 1 file changed, 1 insertion(+), 1 deletion(-) > > > > diff --git a/drivers/gpu/drm/xe/xe_module.c b/drivers/gpu/drm/xe/xe_module.c > > index 6860586ce7f8..ae37c229a0b7 100644 > > --- a/drivers/gpu/drm/xe/xe_module.c > > +++ b/drivers/gpu/drm/xe/xe_module.c > > @@ -79,4 +79,4 @@ module_exit(xe_exit); > > MODULE_AUTHOR("Intel Corporation"); > > > > MODULE_DESCRIPTION(DRIVER_DESC); > > -MODULE_LICENSE("GPL and additional rights"); > > +MODULE_LICENSE("Dual MIT/GPL"); > > The module itself is GPL, like i915: > > $ git grep MODULE_LICENSE -- drivers/gpu/drm/i915/ > drivers/gpu/drm/i915/gvt/kvmgt.c:MODULE_LICENSE("GPL and additional rights"); > drivers/gpu/drm/i915/i915_module.c:MODULE_LICENSE("GPL and additional rights"); > > GPL can include source files licensed as MIT, but the final license of > the module is still GPL, not dual license. Same thing as from the link > you provided... > > The license described in the COPYING file applies to the kernel > source as a whole, though individual source files can have a > different license which is required to be compatible with the > GPL-2.0 > > ... 
but applied to the module rather than the whole kernel. > > The inverse is not true. You can't license the whole (module) as MIT > since it also contains GPL-licensed files. We could if all the files were > dual-licensed, which is not the case. Since several parts of xe is based > on i915, it'd be a very grey area and license tracking nightmare. > > With that, the disclaimer IANAL applies. Yeap, but according "Documentation/process/license-rules.rst" they are absolutely the same. "GPL and additional rights" or "Dual MIT/GPL" with the ask to not use the first one on new code but prefer the second one. So, Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> > > Lucas De Marchi > > > -- > > 2.34.1 > > ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [Intel-xe] [PATCH] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs 2023-05-15 21:09 ` Rodrigo Vivi @ 2023-05-15 21:16 ` Rodrigo Vivi 0 siblings, 0 replies; 21+ messages in thread From: Rodrigo Vivi @ 2023-05-15 21:16 UTC (permalink / raw) To: Lucas De Marchi; +Cc: Dugast, intel-xe On Mon, May 15, 2023 at 05:09:47PM -0400, Rodrigo Vivi wrote: > On Mon, May 15, 2023 at 08:32:23AM -0700, Lucas De Marchi wrote: > > On Mon, May 15, 2023 at 03:15:48PM +0000, Francois Dugast wrote: > > > From: "Dugast, Francois" <francois.dugast@intel.com> > > > > > > The driver contains code under GPL v2 license and code under MIT license. > > > > that is the wrong reason for dual license > > > > > > > > Link: https://www.kernel.org/doc/html/latest/process/license-rules.html > > > Cc: Oded Gabbay <ogabbay@kernel.org> > > > Signed-off-by: Dugast, Francois <francois.dugast@intel.com> > > > --- > > > drivers/gpu/drm/xe/xe_module.c | 2 +- > > > 1 file changed, 1 insertion(+), 1 deletion(-) > > > > > > diff --git a/drivers/gpu/drm/xe/xe_module.c b/drivers/gpu/drm/xe/xe_module.c > > > index 6860586ce7f8..ae37c229a0b7 100644 > > > --- a/drivers/gpu/drm/xe/xe_module.c > > > +++ b/drivers/gpu/drm/xe/xe_module.c > > > @@ -79,4 +79,4 @@ module_exit(xe_exit); > > > MODULE_AUTHOR("Intel Corporation"); > > > > > > MODULE_DESCRIPTION(DRIVER_DESC); > > > -MODULE_LICENSE("GPL and additional rights"); > > > +MODULE_LICENSE("Dual MIT/GPL"); > > > > The module itself is GPL, like i915: > > > > $ git grep MODULE_LICENSE -- drivers/gpu/drm/i915/ > > drivers/gpu/drm/i915/gvt/kvmgt.c:MODULE_LICENSE("GPL and additional rights"); > > drivers/gpu/drm/i915/i915_module.c:MODULE_LICENSE("GPL and additional rights"); > > > > GPL can include source files licensed as MIT, but the final license of > > the module is still GPL, not dual license. Same thing as from the link > > you provided... 
> > > > The license described in the COPYING file applies to the kernel > > source as a whole, though individual source files can have a > > different license which is required to be compatible with the > > GPL-2.0 > > > > ... but applied to the module rather than the whole kernel. > > > > The inverse is not true. You can't license the whole (module) as MIT > > since it also contains GPL-licensed files. We could if all the files were > > dual-licensed, which is not the case. Since several parts of xe is based > > on i915, it'd be a very grey area and license tracking nightmare. > > > > With that, the disclaimer IANAL applies. > > Yeap, but according "Documentation/process/license-rules.rst" > they are absolutely the same. > > "GPL and additional rights" > or > "Dual MIT/GPL" > > with the ask to not use the first one on new code but prefer the > second one. > > So, > > Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> so, we need to make this change one way or another. The question that would remain is if we need to completely change and really identify the whole driver as GPLv2 only, and with that I believe that we shouldn't. 1. I might be wrong, but it looks to me that if you put the whole driver as GPLv2, then the individual files as MIT wouldn't matter anymore and folks who nowadays get i915 and use as MIT but stripping the GPL files, wouldn't be able to do this any longer with Xe. 2. Having the full identification of the driver as GPLv2 only it would spread to various new files and that would get back to the point that we would be blocking the adoption for the cases I mentioned in the previous bullet. > > > > > Lucas De Marchi > > > > > -- > > > 2.34.1 > > > ^ permalink raw reply [flat|nested] 21+ messages in thread
* [Intel-xe] [PATCH] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs @ 2023-05-31 6:19 Lucas De Marchi 2023-05-31 13:00 ` Gustavo Sousa ` (2 more replies) 0 siblings, 3 replies; 21+ messages in thread From: Lucas De Marchi @ 2023-05-31 6:19 UTC (permalink / raw) To: intel-xe; +Cc: Lucas De Marchi, Matt Roper drm/xe/sr: Fix too many kfree() on reallocation When re-allocating the array, the previous location shouldn't be freed. The issue can be more easily reproduced by reducing XE_REG_SR_GROW_STEP_DEFAULT. This was crashing kunit during cleanup on semi-random places depending on the number of save-restore entries. Jointly debugged with Matt Roper. Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> --- drivers/gpu/drm/xe/xe_reg_sr.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/gpu/drm/xe/xe_reg_sr.c b/drivers/gpu/drm/xe/xe_reg_sr.c index 24d9c73ef279..434133444d74 100644 --- a/drivers/gpu/drm/xe/xe_reg_sr.c +++ b/drivers/gpu/drm/xe/xe_reg_sr.c @@ -57,7 +57,6 @@ static struct xe_reg_sr_entry *alloc_entry(struct xe_reg_sr *sr) if (!arr) return NULL; - kfree(sr->pool.arr); sr->pool.arr = arr; sr->pool.allocated += sr->pool.grow_step; } -- 2.40.1 ^ permalink raw reply related [flat|nested] 21+ messages in thread
* Re: [Intel-xe] [PATCH] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs 2023-05-31 6:19 Lucas De Marchi @ 2023-05-31 13:00 ` Gustavo Sousa 2023-05-31 14:46 ` Matt Roper 2023-05-31 16:24 ` Lucas De Marchi 2 siblings, 0 replies; 21+ messages in thread From: Gustavo Sousa @ 2023-05-31 13:00 UTC (permalink / raw) To: Lucas De Marchi, intel-xe; +Cc: Matt Roper, Lucas De Marchi Quoting Lucas De Marchi (2023-05-31 03:19:02-03:00) >drm/xe/sr: Fix too many kfree() on reallocation > >When re-allocating the array, the previous location shouldn't be freed. >The issue can be more easily reproduced by reducing >XE_REG_SR_GROW_STEP_DEFAULT. This was crashing kunit during cleanup >on semi-random places depending on the number of save-restore entries. > >Jointly debugged with Matt Roper. > >Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Checked and krealloc does free the old pointer if a new one is created, so: Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com> By the way, I would suggest to send single-patch fixup with a cover letter, just so Patchwork doesn't think this is a new revision of an existing series. For example, this patch was included as a new revision of [1]. [1] https://patchwork.freedesktop.org/series/115290/ -- Gustavo Sousa >--- > drivers/gpu/drm/xe/xe_reg_sr.c | 1 - > 1 file changed, 1 deletion(-) > >diff --git a/drivers/gpu/drm/xe/xe_reg_sr.c b/drivers/gpu/drm/xe/xe_reg_sr.c >index 24d9c73ef279..434133444d74 100644 >--- a/drivers/gpu/drm/xe/xe_reg_sr.c >+++ b/drivers/gpu/drm/xe/xe_reg_sr.c >@@ -57,7 +57,6 @@ static struct xe_reg_sr_entry *alloc_entry(struct xe_reg_sr *sr) > if (!arr) > return NULL; > >- kfree(sr->pool.arr); > sr->pool.arr = arr; > sr->pool.allocated += sr->pool.grow_step; > } >-- >2.40.1 > ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [Intel-xe] [PATCH] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs 2023-05-31 6:19 Lucas De Marchi 2023-05-31 13:00 ` Gustavo Sousa @ 2023-05-31 14:46 ` Matt Roper 2023-05-31 16:24 ` Lucas De Marchi 2 siblings, 0 replies; 21+ messages in thread From: Matt Roper @ 2023-05-31 14:46 UTC (permalink / raw) To: Lucas De Marchi; +Cc: intel-xe On Tue, May 30, 2023 at 11:19:02PM -0700, Lucas De Marchi wrote: > drm/xe/sr: Fix too many kfree() on reallocation > > When re-allocating the array, the previous location shouldn't be freed. > The issue can be more easily reproduced by reducing > XE_REG_SR_GROW_STEP_DEFAULT. This was crashing kunit during cleanup > on semi-random places depending on the number of save-restore entries. > > Jointly debugged with Matt Roper. > > Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> > --- > drivers/gpu/drm/xe/xe_reg_sr.c | 1 - > 1 file changed, 1 deletion(-) > > diff --git a/drivers/gpu/drm/xe/xe_reg_sr.c b/drivers/gpu/drm/xe/xe_reg_sr.c > index 24d9c73ef279..434133444d74 100644 > --- a/drivers/gpu/drm/xe/xe_reg_sr.c > +++ b/drivers/gpu/drm/xe/xe_reg_sr.c > @@ -57,7 +57,6 @@ static struct xe_reg_sr_entry *alloc_entry(struct xe_reg_sr *sr) > if (!arr) > return NULL; > > - kfree(sr->pool.arr); > sr->pool.arr = arr; > sr->pool.allocated += sr->pool.grow_step; > } > -- > 2.40.1 > -- Matt Roper Graphics Software Engineer VTT-OSGC Platform Enablement Intel Corporation (916) 356-2795 ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [Intel-xe] [PATCH] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs 2023-05-31 6:19 Lucas De Marchi 2023-05-31 13:00 ` Gustavo Sousa 2023-05-31 14:46 ` Matt Roper @ 2023-05-31 16:24 ` Lucas De Marchi 2 siblings, 0 replies; 21+ messages in thread From: Lucas De Marchi @ 2023-05-31 16:24 UTC (permalink / raw) To: intel-xe; +Cc: Matt Roper On Tue, May 30, 2023 at 11:19:02PM -0700, Lucas De Marchi wrote: >drm/xe/sr: Fix too many kfree() on reallocation > >When re-allocating the array, the previous location shouldn't be freed. >The issue can be more easily reproduced by reducing >XE_REG_SR_GROW_STEP_DEFAULT. This was crashing kunit during cleanup >on semi-random places depending on the number of save-restore entries. > >Jointly debugged with Matt Roper. > >Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> pushed, thanks for the reviews. Lucas De Marchi >--- > drivers/gpu/drm/xe/xe_reg_sr.c | 1 - > 1 file changed, 1 deletion(-) > >diff --git a/drivers/gpu/drm/xe/xe_reg_sr.c b/drivers/gpu/drm/xe/xe_reg_sr.c >index 24d9c73ef279..434133444d74 100644 >--- a/drivers/gpu/drm/xe/xe_reg_sr.c >+++ b/drivers/gpu/drm/xe/xe_reg_sr.c >@@ -57,7 +57,6 @@ static struct xe_reg_sr_entry *alloc_entry(struct xe_reg_sr *sr) > if (!arr) > return NULL; > >- kfree(sr->pool.arr); > sr->pool.arr = arr; > sr->pool.allocated += sr->pool.grow_step; > } >-- >2.40.1 > ^ permalink raw reply [flat|nested] 21+ messages in thread
* [Intel-xe] [PATCH] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs @ 2023-06-07 22:38 Ashutosh Dixit 2023-06-07 22:49 ` Matt Roper 0 siblings, 1 reply; 21+ messages in thread From: Ashutosh Dixit @ 2023-06-07 22:38 UTC (permalink / raw) To: intel-xe Trivial kernel-doc fix, s/vm_id/engine_id/ Signed-off-by: Ashutosh Dixit <ashutosh.dixit@intel.com> --- include/uapi/drm/xe_drm.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h index 0ebc50beb5e59..edd29e7f39eb3 100644 --- a/include/uapi/drm/xe_drm.h +++ b/include/uapi/drm/xe_drm.h @@ -656,7 +656,7 @@ struct drm_xe_exec { /** @extensions: Pointer to the first extension struct, if any */ __u64 extensions; - /** @vm_id: VM ID to run batch buffer in */ + /** @engine_id: Engine ID for the batch buffer */ __u32 engine_id; /** @num_syncs: Amount of struct drm_xe_sync in array. */ -- 2.38.0 ^ permalink raw reply related [flat|nested] 21+ messages in thread
* Re: [Intel-xe] [PATCH] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs 2023-06-07 22:38 Ashutosh Dixit @ 2023-06-07 22:49 ` Matt Roper 0 siblings, 0 replies; 21+ messages in thread From: Matt Roper @ 2023-06-07 22:49 UTC (permalink / raw) To: Ashutosh Dixit; +Cc: intel-xe On Wed, Jun 07, 2023 at 03:38:40PM -0700, Ashutosh Dixit wrote: > Trivial kernel-doc fix, s/vm_id/engine_id/ > > Signed-off-by: Ashutosh Dixit <ashutosh.dixit@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> > --- > include/uapi/drm/xe_drm.h | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h > index 0ebc50beb5e59..edd29e7f39eb3 100644 > --- a/include/uapi/drm/xe_drm.h > +++ b/include/uapi/drm/xe_drm.h > @@ -656,7 +656,7 @@ struct drm_xe_exec { > /** @extensions: Pointer to the first extension struct, if any */ > __u64 extensions; > > - /** @vm_id: VM ID to run batch buffer in */ > + /** @engine_id: Engine ID for the batch buffer */ > __u32 engine_id; > > /** @num_syncs: Amount of struct drm_xe_sync in array. */ > -- > 2.38.0 > -- Matt Roper Graphics Software Engineer Linux GPU Platform Enablement Intel Corporation ^ permalink raw reply [flat|nested] 21+ messages in thread
* [Intel-xe] [PATCH] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs @ 2023-07-07 17:10 Francois Dugast 2023-07-07 20:05 ` Matthew Brost 0 siblings, 1 reply; 21+ messages in thread From: Francois Dugast @ 2023-07-07 17:10 UTC (permalink / raw) To: intel-xe; +Cc: Francois Dugast Fix the SPDX license string so that it can be picked by tools. Signed-off-by: Francois Dugast <francois.dugast@intel.com> --- drivers/gpu/drm/xe/xe_trace.c | 2 +- drivers/gpu/drm/xe/xe_trace.h | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/xe/xe_trace.c b/drivers/gpu/drm/xe/xe_trace.c index 1026fb37f75b..2570c0b859c4 100644 --- a/drivers/gpu/drm/xe/xe_trace.c +++ b/drivers/gpu/drm/xe/xe_trace.c @@ -1,4 +1,4 @@ -// SPDX-Liense-Identifier: GPL-2.0 +// SPDX-License-Identifier: GPL-2.0 /* * Copyright © 2022 Intel Corporation */ diff --git a/drivers/gpu/drm/xe/xe_trace.h b/drivers/gpu/drm/xe/xe_trace.h index 02861c26e145..7fdbcec8c781 100644 --- a/drivers/gpu/drm/xe/xe_trace.h +++ b/drivers/gpu/drm/xe/xe_trace.h @@ -1,4 +1,4 @@ -/* SPDX-Liense-Identifier: GPL-2.0 */ +/* SPDX-License-Identifier: GPL-2.0 */ /* * Copyright © 2022 Intel Corporation */ -- 2.34.1 ^ permalink raw reply related [flat|nested] 21+ messages in thread
* Re: [Intel-xe] [PATCH] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs 2023-07-07 17:10 Francois Dugast @ 2023-07-07 20:05 ` Matthew Brost 0 siblings, 0 replies; 21+ messages in thread From: Matthew Brost @ 2023-07-07 20:05 UTC (permalink / raw) To: Francois Dugast; +Cc: intel-xe On Fri, Jul 07, 2023 at 05:10:39PM +0000, Francois Dugast wrote: > Fix the SPDX license string so that it can be picked by tools. > > Signed-off-by: Francois Dugast <francois.dugast@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> > --- > drivers/gpu/drm/xe/xe_trace.c | 2 +- > drivers/gpu/drm/xe/xe_trace.h | 2 +- > 2 files changed, 2 insertions(+), 2 deletions(-) > > diff --git a/drivers/gpu/drm/xe/xe_trace.c b/drivers/gpu/drm/xe/xe_trace.c > index 1026fb37f75b..2570c0b859c4 100644 > --- a/drivers/gpu/drm/xe/xe_trace.c > +++ b/drivers/gpu/drm/xe/xe_trace.c > @@ -1,4 +1,4 @@ > -// SPDX-Liense-Identifier: GPL-2.0 > +// SPDX-License-Identifier: GPL-2.0 > /* > * Copyright © 2022 Intel Corporation > */ > diff --git a/drivers/gpu/drm/xe/xe_trace.h b/drivers/gpu/drm/xe/xe_trace.h > index 02861c26e145..7fdbcec8c781 100644 > --- a/drivers/gpu/drm/xe/xe_trace.h > +++ b/drivers/gpu/drm/xe/xe_trace.h > @@ -1,4 +1,4 @@ > -/* SPDX-Liense-Identifier: GPL-2.0 */ > +/* SPDX-License-Identifier: GPL-2.0 */ > /* > * Copyright © 2022 Intel Corporation > */ > -- > 2.34.1 > ^ permalink raw reply [flat|nested] 21+ messages in thread
* [Intel-xe] [PATCH] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs @ 2023-07-24 8:12 Niranjana Vishwanathapura 0 siblings, 0 replies; 21+ messages in thread From: Niranjana Vishwanathapura @ 2023-07-24 8:12 UTC (permalink / raw) To: intel-xe Use kvmalloc_array() instead of kmalloc() to avoid memory allocation failure in xe_vma_userptr_pin_pages(). v2: Add fixup Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> --- drivers/gpu/drm/xe/xe_vm.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c index 62a99c393d6b..6429d6e5113d 100644 --- a/drivers/gpu/drm/xe/xe_vm.c +++ b/drivers/gpu/drm/xe/xe_vm.c @@ -72,7 +72,7 @@ int xe_vma_userptr_pin_pages(struct xe_vma *vma) if (notifier_seq == vma->userptr.notifier_seq) return 0; - pages = kmalloc(sizeof(*pages) * num_pages, GFP_KERNEL); + pages = kvmalloc_array(num_pages, sizeof(*pages), GFP_KERNEL); if (!pages) return -ENOMEM; @@ -152,7 +152,7 @@ int xe_vma_userptr_pin_pages(struct xe_vma *vma) out: release_pages(pages, pinned); - kfree(pages); + kvfree(pages); if (!(ret < 0)) { vma->userptr.notifier_seq = notifier_seq; -- 2.21.0.rc0.32.g243a4c7e27 ^ permalink raw reply related [flat|nested] 21+ messages in thread
* [Intel-xe] [PATCH] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs
@ 2023-08-03 22:00 Daniele Ceraolo Spurio
  2023-08-04  0:24 ` Matthew Brost
  0 siblings, 1 reply; 21+ messages in thread
From: Daniele Ceraolo Spurio @ 2023-08-03 22:00 UTC (permalink / raw)
  To: intel-xe

Resets can be caused by userspace (and we do so in our testing),
so we can't print at warning level when they occur.

Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_guc_submit.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 193362518a62..60c311079fcc 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -843,8 +843,8 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
 		XE_WARN_ON(q->flags & EXEC_QUEUE_FLAG_KERNEL);
 		XE_WARN_ON(q->flags & EXEC_QUEUE_FLAG_VM && !exec_queue_killed(q));

-		drm_warn(&xe->drm, "Timedout job: seqno=%u, guc_id=%d, flags=0x%lx",
-			 xe_sched_job_seqno(job), q->guc->id, q->flags);
+		drm_notice(&xe->drm, "Timedout job: seqno=%u, guc_id=%d, flags=0x%lx",
+			   xe_sched_job_seqno(job), q->guc->id, q->flags);
 		simple_error_capture(q);
 		xe_devcoredump(q);
 	} else {
@@ -1597,7 +1597,7 @@ int xe_guc_exec_queue_reset_handler(struct xe_guc *guc, u32 *msg, u32 len)
 	if (unlikely(!q))
 		return -EPROTO;

-	drm_warn(&xe->drm, "Engine reset: guc_id=%d", guc_id);
+	drm_info(&xe->drm, "Engine reset: guc_id=%d", guc_id);

 	/* FIXME: Do error capture, most likely async */

--
2.41.0

^ permalink raw reply related	[flat|nested] 21+ messages in thread
* Re: [Intel-xe] [PATCH] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs
  2023-08-03 22:00 Daniele Ceraolo Spurio
@ 2023-08-04  0:24 ` Matthew Brost
  0 siblings, 0 replies; 21+ messages in thread
From: Matthew Brost @ 2023-08-04  0:24 UTC (permalink / raw)
  To: Daniele Ceraolo Spurio; +Cc: intel-xe

On Thu, Aug 03, 2023 at 03:00:29PM -0700, Daniele Ceraolo Spurio wrote:
> Resets can be caused by userspace (and we do so in our testing),
> so we can't print at warning level when they occur.
>
> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>

Reviewed-by: Matthew Brost <matthew.brost@intel.com>

> ---
>  drivers/gpu/drm/xe/xe_guc_submit.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
> index 193362518a62..60c311079fcc 100644
> --- a/drivers/gpu/drm/xe/xe_guc_submit.c
> +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
> @@ -843,8 +843,8 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
>  		XE_WARN_ON(q->flags & EXEC_QUEUE_FLAG_KERNEL);
>  		XE_WARN_ON(q->flags & EXEC_QUEUE_FLAG_VM && !exec_queue_killed(q));
>
> -		drm_warn(&xe->drm, "Timedout job: seqno=%u, guc_id=%d, flags=0x%lx",
> -			 xe_sched_job_seqno(job), q->guc->id, q->flags);
> +		drm_notice(&xe->drm, "Timedout job: seqno=%u, guc_id=%d, flags=0x%lx",
> +			   xe_sched_job_seqno(job), q->guc->id, q->flags);
>  		simple_error_capture(q);
>  		xe_devcoredump(q);
>  	} else {
> @@ -1597,7 +1597,7 @@ int xe_guc_exec_queue_reset_handler(struct xe_guc *guc, u32 *msg, u32 len)
>  	if (unlikely(!q))
>  		return -EPROTO;
>
> -	drm_warn(&xe->drm, "Engine reset: guc_id=%d", guc_id);
> +	drm_info(&xe->drm, "Engine reset: guc_id=%d", guc_id);
>
>  	/* FIXME: Do error capture, most likely async */
>
> --
> 2.41.0
>

^ permalink raw reply	[flat|nested] 21+ messages in thread
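[Editor's sketch, not part of the thread: the policy behind the patch above is that events userspace can legitimately trigger, such as job timeouts and engine resets in testing, are logged at notice/info, while warning level is reserved for conditions that indicate a driver problem. The names below are hypothetical, using the standard syslog severity numbering, and are not taken from the xe driver.]

```c
#include <assert.h>

/* Standard syslog severities: lower number = more severe. */
enum sev { SEV_WARN = 4, SEV_NOTICE = 5, SEV_INFO = 6 };

/*
 * Pick a log level for a GPU reset event: user-triggerable resets are
 * expected during normal operation (and in CI), so they must not be
 * reported as warnings.
 */
static enum sev reset_log_level(int user_triggerable)
{
	return user_triggerable ? SEV_NOTICE : SEV_WARN;
}
```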
end of thread, other threads:[~2023-08-04  0:25 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-03-16 18:29 [Intel-xe] [PATCH] fixup! drm/xe: Introduce a new DRM driver for Intel GPUs Lucas De Marchi
2023-03-16 18:32 ` [Intel-xe] ✓ CI.Patch_applied: success for " Patchwork
2023-03-16 18:33 ` [Intel-xe] ✗ CI.KUnit: failure " Patchwork
2023-03-16 20:53   ` Lucas De Marchi
2023-03-17  6:08 ` [Intel-xe] [PATCH] " Mauro Carvalho Chehab
2023-05-01 19:03 [Intel-xe] [PATCH 2/2] " Rodrigo Vivi
2023-05-01 19:37 ` [Intel-xe] [PATCH] " Rodrigo Vivi
2023-05-15 15:15 Francois Dugast
2023-05-15 15:32 ` Lucas De Marchi
2023-05-15 21:09   ` Rodrigo Vivi
2023-05-15 21:16     ` Rodrigo Vivi
2023-05-31  6:19 Lucas De Marchi
2023-05-31 13:00 ` Gustavo Sousa
2023-05-31 14:46   ` Matt Roper
2023-05-31 16:24     ` Lucas De Marchi
2023-06-07 22:38 Ashutosh Dixit
2023-06-07 22:49 ` Matt Roper
2023-07-07 17:10 Francois Dugast
2023-07-07 20:05 ` Matthew Brost
2023-07-24  8:12 Niranjana Vishwanathapura
2023-08-03 22:00 Daniele Ceraolo Spurio
2023-08-04  0:24 ` Matthew Brost