* [PATCH 1/3] drm/ttm: rework ttm_tt page limit
@ 2020-11-17 14:06 Christian König
  2020-11-17 14:06 ` [PATCH 2/3] drm/ttm: move memory accounting into vmwgfx Christian König
                   ` (3 more replies)
  0 siblings, 4 replies; 18+ messages in thread
From: Christian König @ 2020-11-17 14:06 UTC (permalink / raw)
To: airlied, Felix.Kuehling, daniel, dri-devel, linux-graphics-maintainer, sroland

TTM implements a rather extensive accounting of allocated memory.

There are two reasons for this:
1. It tries to block userspace from allocating a huge number of very
   small BOs without accounting for the kmalloced memory.

2. It makes sure we don't over-allocate and run into an OOM situation
   during swapout while trying to handle a memory shortage.

This is only partially a good idea. First of all, it is perfectly
valid for an application to use all of system memory; limiting it to
50% is not really acceptable.

What we need to take care of is that the application is held
accountable for the memory it allocated. This is what control
mechanisms like memcg and the normal Linux page accounting already do.

Making sure that we don't run into an OOM situation while trying to
cope with a memory shortage is still a good idea, but this is also not
very well implemented since it means another opportunity for recursion
from the driver back into TTM.

So start to rework all of this by accounting the memory before
allocating the pages and making sure that we never go over a certain
limit of pages locked by TTM in the ttm_tt objects.
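The scheme the last paragraph describes — charge the pages to a global counter before allocating, swap out until the counter is back under the limit, and undo the charge on failure — can be modeled in plain userspace C. This is a sketch with illustrative names and return conventions, not the kernel implementation; `swapout` stands in for ttm_bo_swapout():

```c
#include <assert.h>

/* Userspace model of the accounting scheme: charge up front, evict
 * until under the limit, roll the charge back if eviction fails. */
static long pages_limit = 1000;
static long pages_allocated;

/* swapout() models ttm_bo_swapout(): returns pages freed, 0 on failure */
static int populate(long num_pages, long (*swapout)(void))
{
	pages_allocated += num_pages;          /* account *before* allocating */
	while (pages_allocated > pages_limit) {
		long freed = swapout();
		if (freed <= 0) {              /* nothing evictable: error path */
			pages_allocated -= num_pages;
			return -1;
		}
		pages_allocated -= freed;
	}
	return 0;
}

static long no_evict(void) { return 0; }
static long evict_200(void) { return 200; }
```

Charging before allocating means concurrent allocators immediately see the memory pressure, instead of all racing past the limit and only discovering the shortage mid-allocation.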
Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/ttm/ttm_memory.c | 17 ++++++++-
 drivers/gpu/drm/ttm/ttm_tt.c     | 64 +++++++++++++++++++++++++++++---
 include/drm/ttm/ttm_tt.h         |  2 +
 3 files changed, 77 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_memory.c b/drivers/gpu/drm/ttm/ttm_memory.c
index 5ed1fc8f2ace..c53136048630 100644
--- a/drivers/gpu/drm/ttm/ttm_memory.c
+++ b/drivers/gpu/drm/ttm/ttm_memory.c
@@ -38,6 +38,7 @@
 #include <linux/slab.h>
 #include <linux/swap.h>
 #include <drm/ttm/ttm_pool.h>
+#include <drm/ttm/ttm_tt.h>
 
 #define TTM_MEMORY_ALLOC_RETRIES 4
 
@@ -414,10 +415,11 @@ static int ttm_mem_init_dma32_zone(struct ttm_mem_global *glob,
 
 int ttm_mem_global_init(struct ttm_mem_global *glob)
 {
+	uint64_t num_pages, num_dma32_pages;
+	struct ttm_mem_zone *zone;
 	struct sysinfo si;
 	int ret;
 	int i;
-	struct ttm_mem_zone *zone;
 
 	spin_lock_init(&glob->lock);
 	glob->swap_queue = create_singlethread_workqueue("ttm_swap");
@@ -452,6 +454,19 @@ int ttm_mem_global_init(struct ttm_mem_global *glob)
 			zone->name, (unsigned long long)zone->max_mem >> 10);
 	}
 	ttm_pool_mgr_init(glob->zone_kernel->max_mem/(2*PAGE_SIZE));
+
+	/* We generally allow TTM to allocate a maximum of 50% of
+	 * the available system memory.
+	 */
+	num_pages = (u64)si.totalram * si.mem_unit;
+	num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT;
+
+	/* But for DMA32 we limit ourself to only use 2GiB maximum. */
+	num_dma32_pages = (u64)(si.totalram - si.totalhigh) * si.mem_unit;
+	num_dma32_pages = min(num_dma32_pages, 2ULL << 30) >> PAGE_SHIFT;
+
+	ttm_tt_mgr_init(num_pages, num_dma32_pages);
+
 	return 0;
 out_no_zone:
 	ttm_mem_global_release(glob);
diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index cfd633c7e764..c7b71c9861a6 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -38,6 +38,19 @@
 #include <drm/drm_cache.h>
 #include <drm/ttm/ttm_bo_driver.h>
 
+static unsigned long ttm_pages_limit;
+
+MODULE_PARM_DESC(pages_limit, "Limit for the allocated pages");
+module_param_named(pages_limit, ttm_pages_limit, ulong, 0644);
+
+static unsigned long ttm_dma32_pages_limit;
+
+MODULE_PARM_DESC(dma32_pages_limit, "Limit for the allocated DMA32 pages");
+module_param_named(dma32_pages_limit, ttm_dma32_pages_limit, ulong, 0644);
+
+static atomic_long_t ttm_pages_allocated;
+static atomic_long_t ttm_dma32_pages_allocated;
+
 /**
  * Allocates a ttm structure for the given BO.
  */
@@ -295,8 +308,8 @@ static void ttm_tt_add_mapping(struct ttm_bo_device *bdev, struct ttm_tt *ttm)
 		ttm->pages[i]->mapping = bdev->dev_mapping;
 }
 
-int ttm_tt_populate(struct ttm_bo_device *bdev,
-		    struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
+int ttm_tt_populate(struct ttm_bo_device *bdev, struct ttm_tt *ttm,
+		    struct ttm_operation_ctx *ctx)
 {
 	int ret;
 
@@ -306,12 +319,25 @@ int ttm_tt_populate(struct ttm_bo_device *bdev,
 	if (ttm_tt_is_populated(ttm))
 		return 0;
 
+	atomic_long_add(ttm->num_pages, &ttm_pages_allocated);
+	if (bdev->pool.use_dma32)
+		atomic_long_add(ttm->num_pages, &ttm_dma32_pages_allocated);
+
+	while (atomic_long_read(&ttm_pages_allocated) > ttm_pages_limit ||
+	       atomic_long_read(&ttm_dma32_pages_allocated) >
+	       ttm_dma32_pages_limit) {
+
+		ret = ttm_bo_swapout(ctx);
+		if (ret)
+			goto error;
+	}
+
 	if (bdev->driver->ttm_tt_populate)
 		ret = bdev->driver->ttm_tt_populate(bdev, ttm, ctx);
 	else
 		ret = ttm_pool_alloc(&bdev->pool, ttm, ctx);
 	if (ret)
-		return ret;
+		goto error;
 
 	ttm_tt_add_mapping(bdev, ttm);
 	ttm->page_flags |= TTM_PAGE_FLAG_PRIV_POPULATED;
@@ -324,6 +350,12 @@ int ttm_tt_populate(struct ttm_bo_device *bdev,
 	}
 
 	return 0;
+
+error:
+	atomic_long_sub(ttm->num_pages, &ttm_pages_allocated);
+	if (bdev->pool.use_dma32)
+		atomic_long_sub(ttm->num_pages, &ttm_dma32_pages_allocated);
+	return ret;
 }
 EXPORT_SYMBOL(ttm_tt_populate);
 
@@ -341,8 +373,7 @@ static void ttm_tt_clear_mapping(struct ttm_tt *ttm)
 	}
 }
 
-void ttm_tt_unpopulate(struct ttm_bo_device *bdev,
-		       struct ttm_tt *ttm)
+void ttm_tt_unpopulate(struct ttm_bo_device *bdev, struct ttm_tt *ttm)
 {
 	if (!ttm_tt_is_populated(ttm))
 		return;
@@ -352,5 +383,28 @@ void ttm_tt_unpopulate(struct ttm_bo_device *bdev,
 		bdev->driver->ttm_tt_unpopulate(bdev, ttm);
 	else
 		ttm_pool_free(&bdev->pool, ttm);
+
+	atomic_long_sub(ttm->num_pages, &ttm_pages_allocated);
+	if (bdev->pool.use_dma32)
+		atomic_long_sub(ttm->num_pages, &ttm_dma32_pages_allocated);
+
 	ttm->page_flags &= ~TTM_PAGE_FLAG_PRIV_POPULATED;
 }
+
+/**
+ * ttm_tt_mgr_init - set defaults for the number of pages
+ *
+ * @num_pages: global limit of system pages to use
+ * @num_dma32_pages: global limit of DMA32 pages to use
+ *
+ * Set reasonable defaults for the number of allocated pages.
+ */
+void ttm_tt_mgr_init(unsigned long num_pages, unsigned long num_dma32_pages)
+{
+	if (!ttm_pages_limit)
+		ttm_pages_limit = num_pages;
+
+	if (!ttm_dma32_pages_limit)
+		ttm_dma32_pages_limit = num_dma32_pages;
+}
diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h
index da27e9d8fa64..05ca43f13884 100644
--- a/include/drm/ttm/ttm_tt.h
+++ b/include/drm/ttm/ttm_tt.h
@@ -157,6 +157,8 @@ int ttm_tt_populate(struct ttm_bo_device *bdev, struct ttm_tt *ttm, struct ttm_operation_ctx *ctx);
  */
 void ttm_tt_unpopulate(struct ttm_bo_device *bdev, struct ttm_tt *ttm);
 
+void ttm_tt_mgr_init(unsigned long num_pages, unsigned long num_dma32_pages);
+
 #if IS_ENABLED(CONFIG_AGP)
 #include <linux/agp_backend.h>
-- 
2.25.1

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
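The default-limit arithmetic in ttm_mem_global_init() — 50% of total system memory overall, and at most 2 GiB worth of DMA32 pages — can be checked in userspace. A sketch; PAGE_SHIFT of 12 (4 KiB pages) is an assumption, and the parameters mirror the fields of struct sysinfo:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12	/* 4 KiB pages assumed, as on x86 */

/* 50% of total system memory, expressed in pages. */
static uint64_t limit_pages(uint64_t totalram, uint64_t mem_unit)
{
	return (totalram * mem_unit * 50 / 100) >> PAGE_SHIFT;
}

/* Low memory capped at 2 GiB, expressed in pages. */
static uint64_t limit_dma32_pages(uint64_t totalram, uint64_t totalhigh,
				  uint64_t mem_unit)
{
	uint64_t bytes = (totalram - totalhigh) * mem_unit;

	if (bytes > (2ULL << 30))
		bytes = 2ULL << 30;
	return bytes >> PAGE_SHIFT;
}
```

For a 16 GiB machine with 4 KiB si.mem_unit and no highmem, this yields a 2 M-page (8 GiB) overall limit and a 512 K-page (2 GiB) DMA32 limit.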
* [PATCH 2/3] drm/ttm: move memory accounting into vmwgfx
  2020-11-17 14:06 [PATCH 1/3] drm/ttm: rework ttm_tt page limit Christian König
@ 2020-11-17 14:06 ` Christian König
  2020-11-18 20:58   ` kernel test robot
  2020-11-19  7:09   ` kernel test robot
  2020-11-17 14:06 ` [PATCH 3/3] drm/ttm: make up to 90% of system memory available Christian König
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 18+ messages in thread
From: Christian König @ 2020-11-17 14:06 UTC (permalink / raw)
To: airlied, Felix.Kuehling, daniel, dri-devel, linux-graphics-maintainer, sroland

This is just another feature that is only used by VMWGFX, so move
it into the driver instead.

I've tried to add the accounting sysfs file to the kobject of the drm
minor, but I'm not 100% sure if this works as expected.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  | 16 +++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  8 +--
 drivers/gpu/drm/drm_gem_vram_helper.c         |  6 +-
 drivers/gpu/drm/nouveau/nouveau_bo.c          |  7 +-
 drivers/gpu/drm/nouveau/nouveau_drv.h         |  1 -
 drivers/gpu/drm/qxl/qxl_object.c              |  4 +-
 drivers/gpu/drm/radeon/radeon_object.c        |  8 +--
 drivers/gpu/drm/ttm/Makefile                  |  6 +-
 drivers/gpu/drm/ttm/ttm_bo.c                  | 65 ++++++++-----------
 drivers/gpu/drm/ttm/ttm_bo_util.c             |  1 -
 drivers/gpu/drm/ttm/ttm_pool.c                | 13 +---
 drivers/gpu/drm/vmwgfx/Makefile               |  2 +-
 drivers/gpu/drm/{ttm => vmwgfx}/ttm_memory.c  | 31 +++------
 .../gpu/drm/vmwgfx}/ttm_memory.h              |  5 +-
 drivers/gpu/drm/vmwgfx/ttm_object.h           |  3 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_bo.c            | 22 +++++--
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c           |  5 ++
 drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c    | 28 +++++++-
 include/drm/ttm/ttm_bo_api.h                  | 11 +---
 include/drm/ttm/ttm_bo_driver.h               |  1 -
 include/drm/ttm/ttm_tt.h                      |  1 +
 21 files changed, 118 insertions(+), 126 deletions(-)
 rename drivers/gpu/drm/{ttm => vmwgfx}/ttm_memory.c (94%)
 rename {include/drm/ttm => drivers/gpu/drm/vmwgfx}/ttm_memory.h (97%)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 1755386470e6..4eec73da0a7b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -118,6 +118,16 @@ void amdgpu_amdkfd_gpuvm_init_mem_limits(void)
  */
 #define ESTIMATE_PT_SIZE(mem_size) ((mem_size) >> 14)
 
+static size_t amdgpu_amdkfd_acc_size(uint64_t size)
+{
+	size >>= PAGE_SHIFT;
+	size *= sizeof(dma_addr_t) + sizeof(void *);
+
+	return __roundup_pow_of_two(sizeof(struct amdgpu_bo)) +
+		__roundup_pow_of_two(sizeof(struct ttm_tt)) +
+		PAGE_ALIGN(size);
+}
+
 static int amdgpu_amdkfd_reserve_mem_limit(struct amdgpu_device *adev,
 		uint64_t size, u32 domain, bool sg)
 {
@@ -126,8 +136,7 @@ static int amdgpu_amdkfd_reserve_mem_limit(struct amdgpu_device *adev,
 	size_t acc_size, system_mem_needed, ttm_mem_needed, vram_needed;
 	int ret = 0;
 
-	acc_size = ttm_bo_dma_acc_size(&adev->mman.bdev, size,
-				       sizeof(struct amdgpu_bo));
+	acc_size = amdgpu_amdkfd_acc_size(size);
 
 	vram_needed = 0;
 	if (domain == AMDGPU_GEM_DOMAIN_GTT) {
@@ -174,8 +183,7 @@ static void unreserve_mem_limit(struct amdgpu_device *adev,
 {
 	size_t acc_size;
 
-	acc_size = ttm_bo_dma_acc_size(&adev->mman.bdev, size,
-				       sizeof(struct amdgpu_bo));
+	acc_size = amdgpu_amdkfd_acc_size(size);
 
 	spin_lock(&kfd_mem_limit.mem_limit_lock);
 	if (domain == AMDGPU_GEM_DOMAIN_GTT) {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index c6c9723d3d8a..b4ed0662ec5b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -523,7 +523,6 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
 	};
 	struct amdgpu_bo *bo;
 	unsigned long page_align, size = bp->size;
-	size_t acc_size;
 	int r;
 
 	/* Note that GDS/GWS/OA allocates 1 page per byte/resource. */
@@ -546,9 +545,6 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
 
 	*bo_ptr = NULL;
 
-	acc_size = ttm_bo_dma_acc_size(&adev->mman.bdev, size,
-				       sizeof(struct amdgpu_bo));
-
 	bo = kzalloc(sizeof(struct amdgpu_bo), GFP_KERNEL);
 	if (bo == NULL)
 		return -ENOMEM;
@@ -577,8 +573,8 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
 	bo->tbo.priority = 1;
 
 	r = ttm_bo_init_reserved(&adev->mman.bdev, &bo->tbo, size, bp->type,
-				 &bo->placement, page_align, &ctx, acc_size,
-				 NULL, bp->resv, &amdgpu_bo_destroy);
+				 &bo->placement, page_align, &ctx, NULL,
+				 bp->resv, &amdgpu_bo_destroy);
 	if (unlikely(r != 0))
 		return r;
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 02ca22e90290..5cf7797048e1 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -189,7 +189,6 @@ struct drm_gem_vram_object *drm_gem_vram_create(struct drm_device *dev,
 	struct drm_vram_mm *vmm = dev->vram_mm;
 	struct ttm_bo_device *bdev;
 	int ret;
-	size_t acc_size;
 
 	if (WARN_ONCE(!vmm, "VRAM MM not initialized"))
 		return ERR_PTR(-EINVAL);
@@ -216,7 +215,6 @@ struct drm_gem_vram_object *drm_gem_vram_create(struct drm_device *dev,
 	}
 
 	bdev = &vmm->bdev;
-	acc_size = ttm_bo_dma_acc_size(bdev, size, sizeof(*gbo));
 
 	gbo->bo.bdev = bdev;
 	drm_gem_vram_placement(gbo, DRM_GEM_VRAM_PL_FLAG_SYSTEM);
@@ -226,8 +224,8 @@ struct drm_gem_vram_object *drm_gem_vram_create(struct drm_device *dev,
 	 * to release gbo->bo.base and kfree gbo.
 	 */
 	ret = ttm_bo_init(bdev, &gbo->bo, size, ttm_bo_type_device,
-			  &gbo->placement, pg_align, false, acc_size,
-			  NULL, NULL, ttm_buffer_object_destroy);
+			  &gbo->placement, pg_align, false, NULL, NULL,
+			  ttm_buffer_object_destroy);
 	if (ret)
 		return ERR_PTR(ret);
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 7aa4286784ae..3ed5bac70852 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -300,18 +300,15 @@ nouveau_bo_init(struct nouveau_bo *nvbo, u64 size, int align, u32 domain,
 		struct sg_table *sg, struct dma_resv *robj)
 {
 	int type = sg ? ttm_bo_type_sg : ttm_bo_type_device;
-	size_t acc_size;
 	int ret;
 
-	acc_size = ttm_bo_dma_acc_size(nvbo->bo.bdev, size, sizeof(*nvbo));
-
 	nvbo->bo.mem.num_pages = size >> PAGE_SHIFT;
 	nouveau_bo_placement_set(nvbo, domain, 0);
 	INIT_LIST_HEAD(&nvbo->io_reserve_lru);
 
 	ret = ttm_bo_init(nvbo->bo.bdev, &nvbo->bo, size, type,
-			  &nvbo->placement, align >> PAGE_SHIFT, false,
-			  acc_size, sg, robj, nouveau_bo_del_ttm);
+			  &nvbo->placement, align >> PAGE_SHIFT, false, sg,
+			  robj, nouveau_bo_del_ttm);
 	if (ret) {
 		/* ttm will call nouveau_bo_del_ttm if it fails.. */
 		return ret;
diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.h b/drivers/gpu/drm/nouveau/nouveau_drv.h
index 9d04d1b36434..8e90b3e47bbe 100644
--- a/drivers/gpu/drm/nouveau/nouveau_drv.h
+++ b/drivers/gpu/drm/nouveau/nouveau_drv.h
@@ -54,7 +54,6 @@
 #include <drm/ttm/ttm_bo_api.h>
 #include <drm/ttm/ttm_bo_driver.h>
 #include <drm/ttm/ttm_placement.h>
-#include <drm/ttm/ttm_memory.h>
 #include <drm/ttm/ttm_module.h>
 
 #include <drm/drm_audio_component.h>
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index ceebc5881f68..705b51535492 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -138,8 +138,8 @@ int qxl_bo_create(struct qxl_device *qdev,
 	qxl_ttm_placement_from_domain(bo, domain);
 
 	r = ttm_bo_init_reserved(&qdev->mman.bdev, &bo->tbo, size, type,
-				 &bo->placement, 0, &ctx, size,
-				 NULL, NULL, &qxl_ttm_bo_destroy);
+				 &bo->placement, 0, &ctx, NULL, NULL,
+				 &qxl_ttm_bo_destroy);
 	if (unlikely(r != 0)) {
 		if (r != -ERESTARTSYS)
 			dev_err(qdev->ddev.dev,
diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c
index ab81e35cb060..934109634ee6 100644
--- a/drivers/gpu/drm/radeon/radeon_object.c
+++ b/drivers/gpu/drm/radeon/radeon_object.c
@@ -161,7 +161,6 @@ int radeon_bo_create(struct radeon_device *rdev,
 	struct radeon_bo *bo;
 	enum ttm_bo_type type;
 	unsigned long page_align = roundup(byte_align, PAGE_SIZE) >> PAGE_SHIFT;
-	size_t acc_size;
 	int r;
 
 	size = ALIGN(size, PAGE_SIZE);
@@ -175,9 +174,6 @@ int radeon_bo_create(struct radeon_device *rdev,
 	}
 	*bo_ptr = NULL;
 
-	acc_size = ttm_bo_dma_acc_size(&rdev->mman.bdev, size,
-				       sizeof(struct radeon_bo));
-
 	bo = kzalloc(sizeof(struct radeon_bo), GFP_KERNEL);
 	if (bo == NULL)
 		return -ENOMEM;
@@ -232,8 +228,8 @@ int radeon_bo_create(struct radeon_device *rdev,
 	/* Kernel allocation are uninterruptible */
 	down_read(&rdev->pm.mclk_lock);
 	r = ttm_bo_init(&rdev->mman.bdev, &bo->tbo, size, type,
-			&bo->placement, page_align, !kernel, acc_size,
-			sg, resv, &radeon_ttm_bo_destroy);
+			&bo->placement, page_align, !kernel, sg, resv,
+			&radeon_ttm_bo_destroy);
 	up_read(&rdev->pm.mclk_lock);
 	if (unlikely(r != 0)) {
 		return r;
diff --git a/drivers/gpu/drm/ttm/Makefile b/drivers/gpu/drm/ttm/Makefile
index b6f5f87b270f..7e8dc85caac3 100644
--- a/drivers/gpu/drm/ttm/Makefile
+++ b/drivers/gpu/drm/ttm/Makefile
@@ -2,10 +2,8 @@
 #
 # Makefile for the drm device driver. This driver provides support for the
 
-ttm-y := ttm_memory.o ttm_tt.o ttm_bo.o \
-	ttm_bo_util.o ttm_bo_vm.o ttm_module.o \
-	ttm_execbuf_util.o ttm_range_manager.o \
-	ttm_resource.o ttm_pool.o
+ttm-y := ttm_tt.o ttm_bo.o ttm_bo_util.o ttm_bo_vm.o ttm_module.o \
+	ttm_execbuf_util.o ttm_range_manager.o ttm_resource.o ttm_pool.o
 ttm-$(CONFIG_AGP) += ttm_agp_backend.o
 
 obj-$(CONFIG_DRM_TTM) += ttm.o
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index e6bcbfe530ec..a958135cb3fe 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -480,7 +480,6 @@ static void ttm_bo_release(struct kref *kref)
 	struct ttm_buffer_object *bo =
 	    container_of(kref, struct ttm_buffer_object, kref);
 	struct ttm_bo_device *bdev = bo->bdev;
-	size_t acc_size = bo->acc_size;
 	int ret;
 
 	if (!bo->deleted) {
@@ -541,7 +540,6 @@ static void ttm_bo_release(struct kref *kref)
 	if (!ttm_bo_uses_embedded_gem_object(bo))
 		dma_resv_fini(&bo->base._resv);
 	bo->destroy(bo);
-	ttm_mem_global_free(&ttm_mem_glob, acc_size);
 }
 
 void ttm_bo_put(struct ttm_buffer_object *bo)
@@ -1105,25 +1103,13 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev,
 			 struct ttm_placement *placement,
 			 uint32_t page_alignment,
 			 struct ttm_operation_ctx *ctx,
-			 size_t acc_size,
 			 struct sg_table *sg,
 			 struct dma_resv *resv,
 			 void (*destroy) (struct ttm_buffer_object *))
 {
-	struct ttm_mem_global *mem_glob = &ttm_mem_glob;
-	int ret = 0;
 	unsigned long num_pages;
 	bool locked;
-
-	ret = ttm_mem_global_alloc(mem_glob, acc_size, ctx);
-	if (ret) {
-		pr_err("Out of kernel memory\n");
-		if (destroy)
-			(*destroy)(bo);
-		else
-			kfree(bo);
-		return -ENOMEM;
-	}
+	int ret = 0;
 
 	num_pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	if (num_pages == 0) {
@@ -1132,7 +1118,6 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev,
 			(*destroy)(bo);
 		else
 			kfree(bo);
-		ttm_mem_global_free(mem_glob, acc_size);
 		return -EINVAL;
 	}
 	bo->destroy = destroy ? destroy : ttm_bo_default_destroy;
@@ -1153,7 +1138,6 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev,
 	bo->mem.bus.addr = NULL;
 	bo->moving = NULL;
 	bo->mem.placement = 0;
-	bo->acc_size = acc_size;
 	bo->pin_count = 0;
 	bo->sg = sg;
 	if (resv) {
@@ -1213,7 +1197,6 @@ int ttm_bo_init(struct ttm_bo_device *bdev,
 		struct ttm_placement *placement,
 		uint32_t page_alignment,
 		bool interruptible,
-		size_t acc_size,
 		struct sg_table *sg,
 		struct dma_resv *resv,
 		void (*destroy) (struct ttm_buffer_object *))
@@ -1222,8 +1205,7 @@ int ttm_bo_init(struct ttm_bo_device *bdev,
 	int ret;
 
 	ret = ttm_bo_init_reserved(bdev, bo, size, type, placement,
-				   page_alignment, &ctx, acc_size,
-				   sg, resv, destroy);
+				   page_alignment, &ctx, sg, resv, destroy);
 	if (ret)
 		return ret;
 
@@ -1234,20 +1216,6 @@ int ttm_bo_init(struct ttm_bo_device *bdev,
 }
 EXPORT_SYMBOL(ttm_bo_init);
 
-size_t ttm_bo_dma_acc_size(struct ttm_bo_device *bdev,
-			   unsigned long bo_size,
-			   unsigned struct_size)
-{
-	unsigned npages = (PAGE_ALIGN(bo_size)) >> PAGE_SHIFT;
-	size_t size = 0;
-
-	size += ttm_round_pot(struct_size);
-	size += ttm_round_pot(npages * (2*sizeof(void *) + sizeof(dma_addr_t)));
-	size += ttm_round_pot(sizeof(struct ttm_tt));
-	return size;
-}
-EXPORT_SYMBOL(ttm_bo_dma_acc_size);
-
 static void ttm_bo_global_kobj_release(struct kobject *kobj)
 {
 	struct ttm_bo_global *glob =
@@ -1264,9 +1232,10 @@ static void ttm_bo_global_release(void)
 	if (--ttm_bo_glob_use_count > 0)
 		goto out;
 
+	ttm_pool_mgr_fini();
+
 	kobject_del(&glob->kobj);
 	kobject_put(&glob->kobj);
-	ttm_mem_global_release(&ttm_mem_glob);
 	memset(glob, 0, sizeof(*glob));
 out:
 	mutex_unlock(&ttm_global_mutex);
@@ -1275,6 +1244,8 @@ static void ttm_bo_global_release(void)
 static int ttm_bo_global_init(void)
 {
 	struct ttm_bo_global *glob = &ttm_bo_glob;
+	uint64_t num_pages, num_dma32_pages;
+	struct sysinfo si;
 	int ret = 0;
 	unsigned i;
 
@@ -1282,9 +1253,27 @@ static int ttm_bo_global_init(void)
 	if (++ttm_bo_glob_use_count > 1)
 		goto out;
 
-	ret = ttm_mem_global_init(&ttm_mem_glob);
-	if (ret)
-		goto out;
+	si_meminfo(&si);
+
+	/* Limit the number of pages in the pool to about 50% of the total
+	 * system memory.
+	 */
+	num_pages = (u64)si.totalram * si.mem_unit;
+	num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT;
+
+	ttm_pool_mgr_init(num_pages);
+
+	/* We generally allow TTM to allocate a maximum of 50% of
+	 * the available system memory.
+	 */
+	num_pages = (u64)si.totalram * si.mem_unit;
+	num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT;
+
+	/* But for DMA32 we limit ourself to only use 2GiB maximum. */
+	num_dma32_pages = (u64)(si.totalram - si.totalhigh) * si.mem_unit;
+	num_dma32_pages = min(num_dma32_pages, 2ULL << 30) >> PAGE_SHIFT;
+
+	ttm_tt_mgr_init(num_pages, num_dma32_pages);
 
 	spin_lock_init(&glob->lru_lock);
 	glob->dummy_read_page = alloc_page(__GFP_ZERO | GFP_DMA32);
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index 7ccb2295cac1..1b6195771011 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -309,7 +309,6 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
 
 	kref_init(&fbo->base.kref);
 	fbo->base.destroy = &ttm_transfered_destroy;
-	fbo->base.acc_size = 0;
 	fbo->base.pin_count = 1;
 	if (bo->type != ttm_bo_type_sg)
 		fbo->base.base.resv = &fbo->base.base._resv;
diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index 1b96780b4989..6a4e093e7a22 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -407,16 +407,10 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 			caching = pages + (1 << order);
 		}
 
-		r = ttm_mem_global_alloc_page(&ttm_mem_glob, p,
-					      (1 << order) * PAGE_SIZE,
-					      ctx);
-		if (r)
-			goto error_free_page;
-
 		if (dma_addr) {
 			r = ttm_pool_map(pool, order, p, &dma_addr);
 			if (r)
-				goto error_global_free;
+				goto error_free_page;
 		}
 
 		num_pages -= 1 << order;
@@ -430,9 +424,6 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 
 	return 0;
 
-error_global_free:
-	ttm_mem_global_free_page(&ttm_mem_glob, p, (1 << order) * PAGE_SIZE);
-
 error_free_page:
 	ttm_pool_free_page(pool, tt->caching, order, p);
 
@@ -467,8 +458,6 @@ void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt)
 		order = ttm_pool_page_order(pool, p);
 		num_pages = 1ULL << order;
-		ttm_mem_global_free_page(&ttm_mem_glob, p,
-					 num_pages * PAGE_SIZE);
 		if (tt->dma_address)
 			ttm_pool_unmap(pool, tt->dma_address[i], num_pages);
diff --git a/drivers/gpu/drm/vmwgfx/Makefile b/drivers/gpu/drm/vmwgfx/Makefile
index 31f85f09f1fc..86af5d7b0242 100644
--- a/drivers/gpu/drm/vmwgfx/Makefile
+++ b/drivers/gpu/drm/vmwgfx/Makefile
@@ -9,7 +9,7 @@ vmwgfx-y := vmwgfx_execbuf.o vmwgfx_gmr.o vmwgfx_kms.o vmwgfx_drv.o \
 	    vmwgfx_cotable.o vmwgfx_so.o vmwgfx_binding.o vmwgfx_msg.o \
 	    vmwgfx_simple_resource.o vmwgfx_va.o vmwgfx_blit.o \
 	    vmwgfx_validation.o vmwgfx_page_dirty.o vmwgfx_streamoutput.o \
-	    ttm_object.o ttm_lock.o
+	    ttm_object.o ttm_lock.o ttm_memory.o
 
 vmwgfx-$(CONFIG_TRANSPARENT_HUGEPAGE) += vmwgfx_thp.o
 obj-$(CONFIG_DRM_VMWGFX) := vmwgfx.o
diff --git a/drivers/gpu/drm/ttm/ttm_memory.c b/drivers/gpu/drm/vmwgfx/ttm_memory.c
similarity index 94%
rename from drivers/gpu/drm/ttm/ttm_memory.c
rename to drivers/gpu/drm/vmwgfx/ttm_memory.c
index c53136048630..fa75110d7fab 100644
--- a/drivers/gpu/drm/ttm/ttm_memory.c
+++ b/drivers/gpu/drm/vmwgfx/ttm_memory.c
@@ -28,8 +28,6 @@
 
 #define pr_fmt(fmt) "[TTM] " fmt
 
-#include <drm/ttm/ttm_memory.h>
-#include <drm/ttm/ttm_module.h>
 #include <linux/spinlock.h>
 #include <linux/sched.h>
 #include <linux/wait.h>
@@ -37,8 +35,11 @@
 #include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/swap.h>
-#include <drm/ttm/ttm_pool.h>
-#include <drm/ttm/ttm_tt.h>
+
+#include <drm/drm_device.h>
+#include <drm/drm_file.h>
+
+#include "ttm_memory.h"
 
 #define TTM_MEMORY_ALLOC_RETRIES 4
 
@@ -413,9 +414,8 @@ static int ttm_mem_init_dma32_zone(struct ttm_mem_global *glob,
 }
 #endif
 
-int ttm_mem_global_init(struct ttm_mem_global *glob)
+int ttm_mem_global_init(struct ttm_mem_global *glob, struct drm_device *dev)
 {
-	uint64_t num_pages, num_dma32_pages;
 	struct ttm_mem_zone *zone;
 	struct sysinfo si;
 	int ret;
@@ -425,7 +425,8 @@ int ttm_mem_global_init(struct ttm_mem_global *glob)
 	glob->swap_queue = create_singlethread_workqueue("ttm_swap");
 	INIT_WORK(&glob->work, ttm_shrink_work);
 	ret = kobject_init_and_add(
-		&glob->kobj, &ttm_mem_glob_kobj_type, ttm_get_kobj(), "memory_accounting");
+		&glob->kobj, &ttm_mem_glob_kobj_type, &dev->primary->kdev->kobj,
+		"memory_accounting");
 	if (unlikely(ret != 0)) {
 		kobject_put(&glob->kobj);
 		return ret;
@@ -453,19 +454,6 @@ int ttm_mem_global_init(struct ttm_mem_global *glob)
 		pr_info("Zone %7s: Available graphics memory: %llu KiB\n",
 			zone->name, (unsigned long long)zone->max_mem >> 10);
 	}
-	ttm_pool_mgr_init(glob->zone_kernel->max_mem/(2*PAGE_SIZE));
-
-	/* We generally allow TTM to allocate a maximum of 50% of
-	 * the available system memory.
-	 */
-	num_pages = (u64)si.totalram * si.mem_unit;
-	num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT;
-
-	/* But for DMA32 we limit ourself to only use 2GiB maximum. */
-	num_dma32_pages = (u64)(si.totalram - si.totalhigh) * si.mem_unit;
-	num_dma32_pages = min(num_dma32_pages, 2ULL << 30) >> PAGE_SHIFT;
-
-	ttm_tt_mgr_init(num_pages, num_dma32_pages);
 
 	return 0;
 out_no_zone:
@@ -478,9 +466,6 @@ void ttm_mem_global_release(struct ttm_mem_global *glob)
 	struct ttm_mem_zone *zone;
 	unsigned int i;
 
-	/* let the page allocator first stop the shrink work. */
-	ttm_pool_mgr_fini();
-
 	flush_workqueue(glob->swap_queue);
 	destroy_workqueue(glob->swap_queue);
 	glob->swap_queue = NULL;
diff --git a/include/drm/ttm/ttm_memory.h b/drivers/gpu/drm/vmwgfx/ttm_memory.h
similarity index 97%
rename from include/drm/ttm/ttm_memory.h
rename to drivers/gpu/drm/vmwgfx/ttm_memory.h
index c1f167881e33..850ee6c867da 100644
--- a/include/drm/ttm/ttm_memory.h
+++ b/drivers/gpu/drm/vmwgfx/ttm_memory.h
@@ -35,7 +35,8 @@
 #include <linux/errno.h>
 #include <linux/kobject.h>
 #include <linux/mm.h>
-#include "ttm_bo_api.h"
+
+#include <drm/ttm/ttm_bo_api.h>
 
 /**
  * struct ttm_mem_global - Global memory accounting structure.
@@ -79,7 +80,7 @@ extern struct ttm_mem_global {
 #endif
 } ttm_mem_glob;
 
-int ttm_mem_global_init(struct ttm_mem_global *glob);
+int ttm_mem_global_init(struct ttm_mem_global *glob, struct drm_device *dev);
 void ttm_mem_global_release(struct ttm_mem_global *glob);
 int ttm_mem_global_alloc(struct ttm_mem_global *glob, uint64_t memory,
 			 struct ttm_operation_ctx *ctx);
diff --git a/drivers/gpu/drm/vmwgfx/ttm_object.h b/drivers/gpu/drm/vmwgfx/ttm_object.h
index ede26df87c93..49b064f0cb19 100644
--- a/drivers/gpu/drm/vmwgfx/ttm_object.h
+++ b/drivers/gpu/drm/vmwgfx/ttm_object.h
@@ -43,7 +43,8 @@
 #include <linux/rcupdate.h>
 
 #include <drm/drm_hashtab.h>
-#include <drm/ttm/ttm_memory.h>
+
+#include "ttm_memory.h"
 
 /**
  * enum ttm_ref_type
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
index 263d76ae43f0..5878f0e9199f 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
@@ -508,11 +508,16 @@ int vmw_bo_create_kernel(struct vmw_private *dev_priv, unsigned long size,
 	acc_size = ttm_round_pot(sizeof(*bo));
 	acc_size += ttm_round_pot(npages * sizeof(void *));
 	acc_size += ttm_round_pot(sizeof(struct ttm_tt));
+
+	ret = ttm_mem_global_alloc(&ttm_mem_glob, acc_size, &ctx);
+	if (unlikely(ret))
+		goto error_free;
+
 	ret = ttm_bo_init_reserved(&dev_priv->bdev, bo, size,
 				   ttm_bo_type_device, placement, 0,
-				   &ctx, acc_size, NULL, NULL, NULL);
+				   &ctx, NULL, NULL, NULL);
 	if (unlikely(ret))
-		goto error_free;
+		goto error_account;
 
 	ttm_bo_pin(bo);
 	ttm_bo_unreserve(bo);
@@ -520,6 +525,9 @@ int vmw_bo_create_kernel(struct vmw_private *dev_priv, unsigned long size,
 
 	return 0;
 
+error_account:
+	ttm_mem_global_free(&ttm_mem_glob, acc_size);
+
 error_free:
 	kfree(bo);
 	return ret;
@@ -559,11 +567,17 @@ int vmw_bo_init(struct vmw_private *dev_priv,
 	vmw_bo->base.priority = 3;
 	vmw_bo->res_tree = RB_ROOT;
 
+	ret = ttm_mem_global_alloc(&ttm_mem_glob, acc_size, &ctx);
+	if (unlikely(ret))
+		return ret;
+
 	ret = ttm_bo_init_reserved(bdev, &vmw_bo->base, size,
 				   ttm_bo_type_device, placement,
-				   0, &ctx, acc_size, NULL, NULL, bo_free);
-	if (unlikely(ret))
+				   0, &ctx, NULL, NULL, bo_free);
+	if (unlikely(ret)) {
+		ttm_mem_global_free(&ttm_mem_glob, acc_size);
 		return ret;
+	}
 
 	if (pin)
 		ttm_bo_pin(&vmw_bo->base);
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
index 216daf93022c..1310857879e7 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
@@ -1273,6 +1273,7 @@ static void vmw_remove(struct pci_dev *pdev)
 {
 	struct drm_device *dev = pci_get_drvdata(pdev);
 
+	ttm_mem_global_release(&ttm_mem_glob);
 	drm_dev_unregister(dev);
 	vmw_driver_unload(dev);
 	drm_dev_put(dev);
@@ -1520,6 +1521,10 @@ static int vmw_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		goto err_pci_disable_device;
 	}
 
+	ret = ttm_mem_global_init(&ttm_mem_glob, dev);
+	if (ret)
+		return ret;
+
 	dev->pdev = pdev;
 	pci_set_drvdata(pdev, dev);
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
index 6a04261ce760..128e977939c3 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
@@ -575,11 +575,31 @@ static void vmw_ttm_destroy(struct ttm_bo_device *bdev, struct ttm_tt *ttm)
 static int vmw_ttm_populate(struct ttm_bo_device *bdev,
 			    struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
 {
+	unsigned int i;
+	int ret;
+
 	/* TODO: maybe completely drop this ? */
 	if (ttm_tt_is_populated(ttm))
 		return 0;
 
-	return ttm_pool_alloc(&bdev->pool, ttm, ctx);
+	ret = ttm_pool_alloc(&bdev->pool, ttm, ctx);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < ttm->num_pages; ++i) {
+		ret = ttm_mem_global_alloc_page(&ttm_mem_glob, ttm->pages[i],
+						PAGE_SIZE, ctx);
+		if (ret)
+			goto error;
+	}
+	return 0;
+
+error:
+	while (i--)
+		ttm_mem_global_free_page(&ttm_mem_glob, ttm->pages[i],
+					 PAGE_SIZE);
+	ttm_pool_free(&bdev->pool, ttm);
+	return ret;
 }
 
 static void vmw_ttm_unpopulate(struct ttm_bo_device *bdev,
@@ -587,6 +607,7 @@ static void vmw_ttm_unpopulate(struct ttm_bo_device *bdev,
 {
 	struct vmw_ttm_tt *vmw_tt = container_of(ttm, struct vmw_ttm_tt,
 						 dma_ttm);
+	unsigned int i;
 
 	if (vmw_tt->mob) {
 		vmw_mob_destroy(vmw_tt->mob);
@@ -594,6 +615,11 @@ static void vmw_ttm_unpopulate(struct ttm_bo_device *bdev,
 	}
 
 	vmw_ttm_unmap_dma(vmw_tt);
+
+	for (i = 0; i < ttm->num_pages; ++i)
+		ttm_mem_global_free_page(&ttm_mem_glob, ttm->pages[i],
+					 PAGE_SIZE);
+
 	ttm_pool_free(&bdev->pool, ttm);
 }
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index 2564e66e67d7..e5027f7cdd68 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -88,7 +88,6 @@ struct ttm_tt;
  * @type: The bo type.
  * @destroy: Destruction function. If NULL, kfree is used.
  * @num_pages: Actual number of pages.
- * @acc_size: Accounted size for this object.
  * @kref: Reference count of this buffer object. When this refcount reaches
  * zero, the object is destroyed or put on the delayed delete list.
  * @mem: structure describing current placement.
@@ -126,7 +125,6 @@ struct ttm_buffer_object {
 	enum ttm_bo_type type;
 	void (*destroy) (struct ttm_buffer_object *);
 	unsigned long num_pages;
-	size_t acc_size;
 
 	/**
 	 * Members not needing protection.
@@ -356,10 +354,6 @@ void ttm_bo_unlock_delayed_workqueue(struct ttm_bo_device *bdev, int resched);
 bool ttm_bo_eviction_valuable(struct ttm_buffer_object *bo,
 			      const struct ttm_place *place);
 
-size_t ttm_bo_dma_acc_size(struct ttm_bo_device *bdev,
-			   unsigned long bo_size,
-			   unsigned struct_size);
-
 /**
  * ttm_bo_init_reserved
  *
@@ -370,7 +364,6 @@ size_t ttm_bo_dma_acc_size(struct ttm_bo_device *bdev,
 * @flags: Initial placement flags.
 * @page_alignment: Data alignment in pages.
 * @ctx: TTM operation context for memory allocation.
- * @acc_size: Accounted size for this object.
 * @resv: Pointer to a dma_resv, or NULL to let ttm allocate one.
 * @destroy: Destroy function. Use NULL for kfree().
 *
@@ -402,7 +395,6 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev,
 			 struct ttm_placement *placement,
 			 uint32_t page_alignment,
 			 struct ttm_operation_ctx *ctx,
-			 size_t acc_size,
 			 struct sg_table *sg,
 			 struct dma_resv *resv,
 			 void (*destroy) (struct ttm_buffer_object *));
@@ -422,7 +414,6 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev,
 * holds a pointer to a persistent shmem object. Typically, this would
 * point to the shmem object backing a GEM object if TTM is used to back a
 * GEM user interface.
- * @acc_size: Accounted size for this object.
 * @resv: Pointer to a dma_resv, or NULL to let ttm allocate one.
 * @destroy: Destroy function. Use NULL for kfree().
 *
@@ -447,7 +438,7 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev,
 int ttm_bo_init(struct ttm_bo_device *bdev, struct ttm_buffer_object *bo,
 		unsigned long size, enum ttm_bo_type type,
 		struct ttm_placement *placement,
-		uint32_t page_alignment, bool interrubtible, size_t acc_size,
+		uint32_t page_alignment, bool interrubtible,
 		struct sg_table *sg, struct dma_resv *resv,
 		void (*destroy) (struct ttm_buffer_object *));
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index f02f7cf9ae90..54788a5160a0 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -38,7 +38,6 @@
 #include <linux/dma-resv.h>
 
 #include "ttm_bo_api.h"
-#include "ttm_memory.h"
 #include "ttm_module.h"
 #include "ttm_placement.h"
 #include "ttm_tt.h"
diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h
index 05ca43f13884..4acb4d5b5eef 100644
--- a/include/drm/ttm/ttm_tt.h
+++ b/include/drm/ttm/ttm_tt.h
@@ -30,6 +30,7 @@
 #include <linux/types.h>
 #include <drm/ttm/ttm_caching.h>
 
+struct ttm_bo_device;
 struct ttm_tt;
 struct ttm_resource;
 struct ttm_buffer_object;
-- 
2.25.1
* Re: [PATCH 2/3] drm/ttm: move memory accounting into vmwgfx 2020-11-17 14:06 ` [PATCH 2/3] drm/ttm: move memory accounting into vmwgfx Christian König @ 2020-11-18 20:58 ` kernel test robot 2020-11-19 7:09 ` kernel test robot 1 sibling, 0 replies; 18+ messages in thread From: kernel test robot @ 2020-11-18 20:58 UTC (permalink / raw) To: Christian König, airlied, Felix.Kuehling, daniel, dri-devel, linux-graphics-maintainer, sroland Cc: kbuild-all [-- Attachment #1: Type: text/plain, Size: 1751 bytes --] Hi "Christian, I love your patch! Yet something to improve: [auto build test ERROR on next-20201117] [cannot apply to linus/master v5.10-rc4 v5.10-rc3 v5.10-rc2 v5.10-rc4] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch] url: https://github.com/0day-ci/linux/commits/Christian-K-nig/drm-ttm-rework-ttm_tt-page-limit/20201117-220814 base: 7c8ca8129ee9724cb1527895fe6dec942ef07f19 config: arm-allyesconfig (attached as .config) compiler: arm-linux-gnueabi-gcc (GCC) 9.3.0 reproduce (this is a W=1 build): wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # https://github.com/0day-ci/linux/commit/78c57dc48dc2612e7a6a9d265622f3f459a92c56 git remote add linux-review https://github.com/0day-ci/linux git fetch --no-tags linux-review Christian-K-nig/drm-ttm-rework-ttm_tt-page-limit/20201117-220814 git checkout 78c57dc48dc2612e7a6a9d265622f3f459a92c56 # save the attached .config to linux build tree COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=arm If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot <lkp@intel.com> All errors (new ones prefixed by >>): arm-linux-gnueabi-ld: drivers/gpu/drm/ttm/ttm_bo.o: in function `ttm_bo_device_init': (.text+0x1bb0): undefined reference to `__aeabi_uldivmod' >> 
arm-linux-gnueabi-ld: (.text+0x1c1c): undefined reference to `__aeabi_uldivmod' --- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org [-- Attachment #2: .config.gz --] [-- Type: application/gzip, Size: 77225 bytes --] [-- Attachment #3: Type: text/plain, Size: 160 bytes --] _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 2/3] drm/ttm: move memory accounting into vmwgfx 2020-11-17 14:06 ` [PATCH 2/3] drm/ttm: move memory accounting into vmwgfx Christian König 2020-11-18 20:58 ` kernel test robot @ 2020-11-19 7:09 ` kernel test robot 1 sibling, 0 replies; 18+ messages in thread From: kernel test robot @ 2020-11-19 7:09 UTC (permalink / raw) To: Christian König, airlied, Felix.Kuehling, daniel, dri-devel, linux-graphics-maintainer, sroland Cc: kbuild-all [-- Attachment #1: Type: text/plain, Size: 3019 bytes --] Hi "Christian, I love your patch! Yet something to improve: [auto build test ERROR on next-20201117] [cannot apply to linus/master v5.10-rc4 v5.10-rc3 v5.10-rc2 v5.10-rc4] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch] url: https://github.com/0day-ci/linux/commits/Christian-K-nig/drm-ttm-rework-ttm_tt-page-limit/20201117-220814 base: 7c8ca8129ee9724cb1527895fe6dec942ef07f19 config: mips-allyesconfig (attached as .config) compiler: mips-linux-gcc (GCC) 9.3.0 reproduce (this is a W=1 build): wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # https://github.com/0day-ci/linux/commit/78c57dc48dc2612e7a6a9d265622f3f459a92c56 git remote add linux-review https://github.com/0day-ci/linux git fetch --no-tags linux-review Christian-K-nig/drm-ttm-rework-ttm_tt-page-limit/20201117-220814 git checkout 78c57dc48dc2612e7a6a9d265622f3f459a92c56 # save the attached .config to linux build tree COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=mips If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot <lkp@intel.com> All errors (new ones prefixed by >>): arch/mips/kernel/head.o: in function `dtb_found': (.ref.text+0xe0): relocation truncated to fit: R_MIPS_26 against `start_kernel' init/main.o: in function 
`set_reset_devices': main.c:(.init.text+0x20): relocation truncated to fit: R_MIPS_26 against `_mcount' main.c:(.init.text+0x30): relocation truncated to fit: R_MIPS_26 against `__sanitizer_cov_trace_pc' init/main.o: in function `debug_kernel': main.c:(.init.text+0x9c): relocation truncated to fit: R_MIPS_26 against `_mcount' main.c:(.init.text+0xac): relocation truncated to fit: R_MIPS_26 against `__sanitizer_cov_trace_pc' init/main.o: in function `quiet_kernel': main.c:(.init.text+0x118): relocation truncated to fit: R_MIPS_26 against `_mcount' main.c:(.init.text+0x128): relocation truncated to fit: R_MIPS_26 against `__sanitizer_cov_trace_pc' init/main.o: in function `init_setup': main.c:(.init.text+0x1a4): relocation truncated to fit: R_MIPS_26 against `_mcount' main.c:(.init.text+0x1c8): relocation truncated to fit: R_MIPS_26 against `__sanitizer_cov_trace_pc' main.c:(.init.text+0x1e8): relocation truncated to fit: R_MIPS_26 against `__sanitizer_cov_trace_pc' main.c:(.init.text+0x1fc): additional relocation overflows omitted from the output mips-linux-ld: drivers/gpu/drm/ttm/ttm_bo.o: in function `ttm_bo_device_init': >> (.text.ttm_bo_device_init+0x1f0): undefined reference to `__udivdi3' >> mips-linux-ld: (.text.ttm_bo_device_init+0x238): undefined reference to `__udivdi3' --- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org [-- Attachment #2: .config.gz --] [-- Type: application/gzip, Size: 69306 bytes --] [-- Attachment #3: Type: text/plain, Size: 160 bytes --] _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH 3/3] drm/ttm: make up to 90% of system memory available 2020-11-17 14:06 [PATCH 1/3] drm/ttm: rework ttm_tt page limit Christian König 2020-11-17 14:06 ` [PATCH 2/3] drm/ttm: move memory accounting into vmwgfx Christian König @ 2020-11-17 14:06 ` Christian König 2020-11-17 16:38 ` Michel Dänzer 2020-11-17 17:19 ` Daniel Vetter 2020-11-18 7:55 ` [PATCH 1/3] drm/ttm: rework ttm_tt page limit kernel test robot 2020-11-19 7:14 ` kernel test robot 3 siblings, 2 replies; 18+ messages in thread From: Christian König @ 2020-11-17 14:06 UTC (permalink / raw) To: airlied, Felix.Kuehling, daniel, dri-devel, linux-graphics-maintainer, sroland Increase the amount of system memory drivers can use to about 90% of the total available. Signed-off-by: Christian König <christian.koenig@amd.com> --- drivers/gpu/drm/ttm/ttm_bo.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index a958135cb3fe..0a93df93dba4 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -1267,7 +1267,7 @@ static int ttm_bo_global_init(void) * the available system memory. */ num_pages = (u64)si.totalram * si.mem_unit; - num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT; + num_pages = (num_pages * 90 / 100) >> PAGE_SHIFT; /* But for DMA32 we limit ourself to only use 2GiB maximum. */ num_dma32_pages = (u64)(si.totalram - si.totalhigh) * si.mem_unit; -- 2.25.1 _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH 3/3] drm/ttm: make up to 90% of system memory available 2020-11-17 14:06 ` [PATCH 3/3] drm/ttm: make up to 90% of system memory available Christian König @ 2020-11-17 16:38 ` Michel Dänzer 2020-11-18 12:09 ` Christian König 2020-11-17 17:19 ` Daniel Vetter 1 sibling, 1 reply; 18+ messages in thread From: Michel Dänzer @ 2020-11-17 16:38 UTC (permalink / raw) To: Christian König Cc: airlied, Felix.Kuehling, sroland, dri-devel, linux-graphics-maintainer On 2020-11-17 3:06 p.m., Christian König wrote: > Increase the ammount of system memory drivers can use to about 90% of > the total available. > > Signed-off-by: Christian König <christian.koenig@amd.com> > --- > drivers/gpu/drm/ttm/ttm_bo.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c > index a958135cb3fe..0a93df93dba4 100644 > --- a/drivers/gpu/drm/ttm/ttm_bo.c > +++ b/drivers/gpu/drm/ttm/ttm_bo.c > @@ -1267,7 +1267,7 @@ static int ttm_bo_global_init(void) > * the available system memory. > */ > num_pages = (u64)si.totalram * si.mem_unit; > - num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT; > + num_pages = (num_pages * 90 / 100) >> PAGE_SHIFT; > > /* But for DMA32 we limit ourself to only use 2GiB maximum. */ > num_dma32_pages = (u64)(si.totalram - si.totalhigh) * si.mem_unit; > This should update the comment added in patch 1. -- Earthling Michel Dänzer | https://redhat.com Libre software enthusiast | Mesa and X developer _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 3/3] drm/ttm: make up to 90% of system memory available 2020-11-17 16:38 ` Michel Dänzer @ 2020-11-18 12:09 ` Christian König 0 siblings, 0 replies; 18+ messages in thread From: Christian König @ 2020-11-18 12:09 UTC (permalink / raw) To: Michel Dänzer Cc: airlied, Felix.Kuehling, sroland, dri-devel, linux-graphics-maintainer Am 17.11.20 um 17:38 schrieb Michel Dänzer: > On 2020-11-17 3:06 p.m., Christian König wrote: >> Increase the ammount of system memory drivers can use to about 90% of >> the total available. >> >> Signed-off-by: Christian König <christian.koenig@amd.com> >> --- >> drivers/gpu/drm/ttm/ttm_bo.c | 2 +- >> 1 file changed, 1 insertion(+), 1 deletion(-) >> >> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c >> index a958135cb3fe..0a93df93dba4 100644 >> --- a/drivers/gpu/drm/ttm/ttm_bo.c >> +++ b/drivers/gpu/drm/ttm/ttm_bo.c >> @@ -1267,7 +1267,7 @@ static int ttm_bo_global_init(void) >> * the available system memory. >> */ >> num_pages = (u64)si.totalram * si.mem_unit; >> - num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT; >> + num_pages = (num_pages * 90 / 100) >> PAGE_SHIFT; >> /* But for DMA32 we limit ourself to only use 2GiB maximum. */ >> num_dma32_pages = (u64)(si.totalram - si.totalhigh) * si.mem_unit; >> > > This should update the comment added in patch 1. Yeah indeed, I've tested this with 70, 75, 80, 85 and finally 90 and it looks like I've forgot to update the comment after a while. Thanks, Christian. _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 3/3] drm/ttm: make up to 90% of system memory available 2020-11-17 14:06 ` [PATCH 3/3] drm/ttm: make up to 90% of system memory available Christian König 2020-11-17 16:38 ` Michel Dänzer @ 2020-11-17 17:19 ` Daniel Vetter 2020-11-18 13:35 ` Christian König 1 sibling, 1 reply; 18+ messages in thread From: Daniel Vetter @ 2020-11-17 17:19 UTC (permalink / raw) To: Christian König Cc: airlied, Felix.Kuehling, sroland, dri-devel, linux-graphics-maintainer On Tue, Nov 17, 2020 at 03:06:15PM +0100, Christian König wrote: > Increase the ammount of system memory drivers can use to about 90% of > the total available. > > Signed-off-by: Christian König <christian.koenig@amd.com> > --- > drivers/gpu/drm/ttm/ttm_bo.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c > index a958135cb3fe..0a93df93dba4 100644 > --- a/drivers/gpu/drm/ttm/ttm_bo.c > +++ b/drivers/gpu/drm/ttm/ttm_bo.c > @@ -1267,7 +1267,7 @@ static int ttm_bo_global_init(void) > * the available system memory. > */ > num_pages = (u64)si.totalram * si.mem_unit; > - num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT; > + num_pages = (num_pages * 90 / 100) >> PAGE_SHIFT; I don't think this is the design we want. As long as it was set at "half of system memory" it was clear that a) it's a hack b) precision didn't matter. But if you go to the limit and still want to keep the "we make sure there's no OOM", then precision starts to matter: - memory hotplug and hotunplug is a thing - userspace can mlock, and it's configureable - drivers can pin_user_pages for IO and random other stuff. 
Some of it is accounted as some subsystem specific mlock (like rdma does iirc), some is just yolo or short term enough (like) - none of what we do here takes into consideration any interactions with core mm tracking (like cgroups or numa or anything like that) If we want to drop the "half of system ram" limit (and yes that makes sense) I think the right design is: - we give up on the "no OOM" guarantee. - This means if you want real isolation of tasks, we need cgroups, and we need to integrate ttm cgroups with system memory cgroups somehow. Unlike randomly picked hardcoded limits this should work a lot more reliably and be a lot more useful in practical use in the field. - This also means that drivers start to fail in interesting ways. I think locking headaches are covered with all the lockdep annotations I've pushed, plus some of the things I still have in-flight (I have a might_alloc() annotation somewhere). That leaves validation of error paths for when allocations fail. Ime a very effective way we used in i915 is (ab)using EINTR restarting, which per drmIoctl uapi spec is required. We could put a debug mode into ttm_tt which randomly fails with -EINTR to make sure it's all working correctly (plus anything else that allocates memory), and unlike real out-of-memory injection piglit and any other cts will complete without failure. Which gives us an excellent metric for "does it work". Actually OOM, even injected one, tends to make stuff blow up in a way that's very hard to track and make sure you've got good coverage, since all your usual tests die pretty quickly. - ttm_tt needs to play fair with every other system memory user. We need to register a shrinker for each ttm_tt (so usually one per device I guess), which walks the lru (in shrink_count) and uses dma_resv_trylock for actual shrinking. We probably want to move it to SYSTEM first for that shrinker to pick up, so that there's some global fairness across all ttm_tt.
- for GFP_DMA32 that means zone aware shrinkers. We've never used those, because thus far i915 didn't have any big need for low memory, so we haven't used this in practice. But it's supposed to be a thing. It's a bit more code than the oneliner above, but I also think it's a lot more solid. Plus it would resolve the last big issue where i915 gem is fairly fundamentally different compared to ttm. For that question I think once Maarten's locking rework for i915 has landed and all the other ttm rework from you and Dave is in, we've resolved them all. > /* But for DMA32 we limit ourself to only use 2GiB maximum. */ > num_dma32_pages = (u64)(si.totalram - si.totalhigh) * si.mem_unit; > -- > 2.25.1 > -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 3/3] drm/ttm: make up to 90% of system memory available 2020-11-17 17:19 ` Daniel Vetter @ 2020-11-18 13:35 ` Christian König 2020-11-18 21:15 ` Daniel Vetter 0 siblings, 1 reply; 18+ messages in thread From: Christian König @ 2020-11-18 13:35 UTC (permalink / raw) To: Daniel Vetter Cc: airlied, sroland, Felix.Kuehling, linux-graphics-maintainer, dri-devel Am 17.11.20 um 18:19 schrieb Daniel Vetter: > On Tue, Nov 17, 2020 at 03:06:15PM +0100, Christian König wrote: >> Increase the ammount of system memory drivers can use to about 90% of >> the total available. >> >> Signed-off-by: Christian König <christian.koenig@amd.com> >> --- >> drivers/gpu/drm/ttm/ttm_bo.c | 2 +- >> 1 file changed, 1 insertion(+), 1 deletion(-) >> >> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c >> index a958135cb3fe..0a93df93dba4 100644 >> --- a/drivers/gpu/drm/ttm/ttm_bo.c >> +++ b/drivers/gpu/drm/ttm/ttm_bo.c >> @@ -1267,7 +1267,7 @@ static int ttm_bo_global_init(void) >> * the available system memory. >> */ >> num_pages = (u64)si.totalram * si.mem_unit; >> - num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT; >> + num_pages = (num_pages * 90 / 100) >> PAGE_SHIFT; > I don't think this is the design we want. As long as it was set at "half > of system memory" it was clear that a) it's a hack b) precision didn't > matter. Yeah, I'm also wondering what to do here exactly. In generally I would completely nuke that limit if possible. > But if you go to the limit and still want to keep the "we make sure > there's no OOM", then precision starts to matter: > - memory hotplug and hotunplug is a thing > - userspace can mlock, and it's configureable > - drivers can pin_user_pages for IO and random other stuff. 
Some of it is > accounted as some subsystem specific mlock (like rdma does iirc), some > is just yolo or short term enough (like) > - none of what we do here takes into considerations any interactions with > core mm tracking (like cgroups or numa or anything like that) OOM is perfectly fine with me, we should just not run into an OOM killer situation because we call shmem_read_mapping_page_gfp() in the shrinker callback. Any idea how we could guarantee that? > If we want to drop the "half of system ram" limit (and yes that makes > sense) I think the right design is: > > - we give up on the "no OOM" guarantee. > > - This means if you want real isolation of tasks, we need cgroups, and we > need to integrate ttm cgroups with system memory cgroups somehow. Unlike > randomly picked hardcoded limits this should work a lot more reliably > and be a lot more useful in practical use in the field. > > - This also means that drivers start to fail in interesting ways. I think > locking headaches are covered with all the lockdep annotations I've > pushed, plus some of the things I still have in-flight (I have a > might_alloc() annotations somewhere). That leaves validation of error > paths for when allocations fail. Ime a very effective way we used in > i915 is (ab)using EINTR restarting, which per drmIoctl uapi spec is > requried. We could put a debug mode into ttm_tt which randomly fails > with -EINTR to make sure it's all working correctly (plus anything else > that allocates memory), and unlike real out-of-memory injection piglit > and any other cts will complete without failure. Which gives us an > excellent metric for "does it work". Actualy OOM, even injected one, > tends to make stuff blow up in a way that's very hard to track and make > sure you've got good coverage, since all your usual tests die pretty > quickly. > > - ttm_tt needs to play fair with every other system memory user. 
We need > to register a shrinker for each ttm_tt (so usually one per device I > guess), which walks the lru (in shrink_count) and uses dma_resv_trylock > for actual shrinking. We probably want to move it to SYSTEM first for > that shrinker to pick up, so that there's some global fairness across > all ttm_tt. I already have patches for this. What's still missing is teaching the OOM killer which task is using the memory since memory referenced through the file descriptors are currently not accounted towards any process resources. > - for GFP_DMA32 that means zone aware shrinkers. We've never used those, > because thus far i915 didn't have any big need for low memory, so we > haven't used this in practice. But it's supposed to be a thing. I think we can mostly forget about GFP_DMA32, this should only be used for AGP and other ancient hardware. Christian. > It's a bit more code than the oneliner above, but I also think it's a lot > more solid. Plus it would resolve the last big issue where i915 gem is > fairly fundamentally different compared to ttm. For that question I think > once Maarten's locking rework for i915 has landed and all the other ttm > rework from you and Dave is in, we've resolved them all. > > >> /* But for DMA32 we limit ourself to only use 2GiB maximum. */ >> num_dma32_pages = (u64)(si.totalram - si.totalhigh) * si.mem_unit; >> -- >> 2.25.1 >> _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 3/3] drm/ttm: make up to 90% of system memory available 2020-11-18 13:35 ` Christian König @ 2020-11-18 21:15 ` Daniel Vetter 2020-11-19 15:27 ` Christian König 0 siblings, 1 reply; 18+ messages in thread From: Daniel Vetter @ 2020-11-18 21:15 UTC (permalink / raw) To: christian.koenig Cc: airlied, Felix.Kuehling, sroland, dri-devel, linux-graphics-maintainer On Wed, Nov 18, 2020 at 02:35:24PM +0100, Christian König wrote: > Am 17.11.20 um 18:19 schrieb Daniel Vetter: > > On Tue, Nov 17, 2020 at 03:06:15PM +0100, Christian König wrote: > > > Increase the ammount of system memory drivers can use to about 90% of > > > the total available. > > > > > > Signed-off-by: Christian König <christian.koenig@amd.com> > > > --- > > > drivers/gpu/drm/ttm/ttm_bo.c | 2 +- > > > 1 file changed, 1 insertion(+), 1 deletion(-) > > > > > > diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c > > > index a958135cb3fe..0a93df93dba4 100644 > > > --- a/drivers/gpu/drm/ttm/ttm_bo.c > > > +++ b/drivers/gpu/drm/ttm/ttm_bo.c > > > @@ -1267,7 +1267,7 @@ static int ttm_bo_global_init(void) > > > * the available system memory. > > > */ > > > num_pages = (u64)si.totalram * si.mem_unit; > > > - num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT; > > > + num_pages = (num_pages * 90 / 100) >> PAGE_SHIFT; > > I don't think this is the design we want. As long as it was set at "half > > of system memory" it was clear that a) it's a hack b) precision didn't > > matter. > > Yeah, I'm also wondering what to do here exactly. In generally I would > completely nuke that limit if possible. > > > But if you go to the limit and still want to keep the "we make sure > > there's no OOM", then precision starts to matter: > > - memory hotplug and hotunplug is a thing > > - userspace can mlock, and it's configureable > > - drivers can pin_user_pages for IO and random other stuff. 
Some of it is > > accounted as some subsystem specific mlock (like rdma does iirc), some > > is just yolo or short term enough (like) > > - none of what we do here takes into consideration any interactions with > > core mm tracking (like cgroups or numa or anything like that) > > OOM is perfectly fine with me, we should just not run into an OOM killer > situation because we call shmem_read_mapping_page_gfp() in the shrinker > callback. > > Any idea how we could guarantee that? Uh ... I just realized I missed something between how ttm works and how i915 works. With i915 directly using shmem, all we do in the shrinker is unpin the shmem pages. With ttm we have the page pool in the middle. And it's needed for dma coherent and other things. This is rather fundamental. btw I don't think you'll get OOM, you get a lockdep splat because you're recursing on the fs_reclaim fake lock. We should probably put a might_alloc() into shmem_read_mapping_page_gfp(). > > If we want to drop the "half of system ram" limit (and yes that makes > > sense) I think the right design is: > > > > - we give up on the "no OOM" guarantee. > > > > - This means if you want real isolation of tasks, we need cgroups, and we > > need to integrate ttm cgroups with system memory cgroups somehow. Unlike > > randomly picked hardcoded limits this should work a lot more reliably > > and be a lot more useful in practical use in the field. > > > > - This also means that drivers start to fail in interesting ways. I think > > locking headaches are covered with all the lockdep annotations I've > > pushed, plus some of the things I still have in-flight (I have a > > might_alloc() annotations somewhere). That leaves validation of error > > paths for when allocations fail. Ime a very effective way we used in > > i915 is (ab)using EINTR restarting, which per drmIoctl uapi spec is > > required.
We could put a debug mode into ttm_tt which randomly fails > > with -EINTR to make sure it's all working correctly (plus anything else > > that allocates memory), and unlike real out-of-memory injection piglit > > and any other cts will complete without failure. Which gives us an > > excellent metric for "does it work". Actualy OOM, even injected one, > > tends to make stuff blow up in a way that's very hard to track and make > > sure you've got good coverage, since all your usual tests die pretty > > quickly. > > > > - ttm_tt needs to play fair with every other system memory user. We need > > to register a shrinker for each ttm_tt (so usually one per device I > > guess), which walks the lru (in shrink_count) and uses dma_resv_trylock > > for actual shrinking. We probably want to move it to SYSTEM first for > > that shrinker to pick up, so that there's some global fairness across > > all ttm_tt. > > I already have patches for this. > > What's still missing is teaching the OOM killer which task is using the > memory since memory referenced through the file descriptors are currently > not accounted towards any process resources. Yeah I don't think we've fixed that problem for i915 either. It loves to kill randome other stuff. In igt we solve this by marking any igt testcase as "pls kill this first". Well "solve". > > - for GFP_DMA32 that means zone aware shrinkers. We've never used those, > > because thus far i915 didn't have any big need for low memory, so we > > haven't used this in practice. But it's supposed to be a thing. > > I think we can mostly forget about GFP_DMA32, this should only be used for > AGP and other ancient hardware. Ok, that's good. So having an arbitrary "only half of GFP_DMA32" for these should also be acceptable, since back then gpus didn't really have gigabytes of vram yet. Biggest r500 seems to have topped out at 512MB. How much do we require the dma page pool? 
Afaiui it's only when there are bounce buffers or something nasty like that present. Are there more use cases? For fixing the TT -> SYSTEM problem of calling shmem_read_mapping_page_gfp from the shrinker I think there are 3 solutions: 1) Call shmem_read_mapping_page_gfp with GFP_IO. This way we should not be calling back into our shrinkers, which are at the GFP_FS level. This isn't great, but since page reclaim is a gigantic loop which repeats as long as we're making meaningful forward progress, we should be able to trickle TT pages to SYSTEM pages even under severe memory pressure. 2) For the normal page pool (i.e. neither GFP_DMA32 nor dma_alloc_coherent or write-combine) we stop copying the pages for TT <-> SYSTEM, but just directly pin the pages we get from shmem. Like all the shmem based soc drivers do. No copying, no need for allocations, no problem in shrinkers. 3) For the remaining page pools we'd need ttmfs so that we can control the address_space_operations and directly swap out from our pages to swapout, bypassing shmem and the copying. Iirc Chris Wilson had a prototype once somewhere under gemfs for this. Only needed if 1) isn't fast enough for these because there are too many such cases where we care. At least on intel hw the direction is very much towards fully coherent, so 1&2 should be good enough. -Daniel > > Christian. > > It's a bit more code than the oneliner above, but I also think it's a lot > > more solid. Plus it would resolve the last big issue where i915 gem is > > fairly fundamentally different compared to ttm. For that question I think > > once Maarten's locking rework for i915 has landed and all the other ttm > > rework from you and Dave is in, we've resolved them all. > > > > > /* But for DMA32 we limit ourself to only use 2GiB maximum.
*/ > > > num_dma32_pages = (u64)(si.totalram - si.totalhigh) * si.mem_unit; > > > -- > > > 2.25.1 > > > > -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 3/3] drm/ttm: make up to 90% of system memory available 2020-11-18 21:15 ` Daniel Vetter @ 2020-11-19 15:27 ` Christian König 2020-11-19 15:42 ` Daniel Vetter 0 siblings, 1 reply; 18+ messages in thread From: Christian König @ 2020-11-19 15:27 UTC (permalink / raw) To: Daniel Vetter Cc: airlied, sroland, Felix.Kuehling, linux-graphics-maintainer, dri-devel Am 18.11.20 um 22:15 schrieb Daniel Vetter: > On Wed, Nov 18, 2020 at 02:35:24PM +0100, Christian König wrote: >> Am 17.11.20 um 18:19 schrieb Daniel Vetter: >>> On Tue, Nov 17, 2020 at 03:06:15PM +0100, Christian König wrote: >>>> Increase the ammount of system memory drivers can use to about 90% of >>>> the total available. >>>> >>>> Signed-off-by: Christian König <christian.koenig@amd.com> >>>> --- >>>> drivers/gpu/drm/ttm/ttm_bo.c | 2 +- >>>> 1 file changed, 1 insertion(+), 1 deletion(-) >>>> >>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c >>>> index a958135cb3fe..0a93df93dba4 100644 >>>> --- a/drivers/gpu/drm/ttm/ttm_bo.c >>>> +++ b/drivers/gpu/drm/ttm/ttm_bo.c >>>> @@ -1267,7 +1267,7 @@ static int ttm_bo_global_init(void) >>>> * the available system memory. >>>> */ >>>> num_pages = (u64)si.totalram * si.mem_unit; >>>> - num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT; >>>> + num_pages = (num_pages * 90 / 100) >> PAGE_SHIFT; >>> I don't think this is the design we want. As long as it was set at "half >>> of system memory" it was clear that a) it's a hack b) precision didn't >>> matter. >> Yeah, I'm also wondering what to do here exactly. In generally I would >> completely nuke that limit if possible. >> >>> But if you go to the limit and still want to keep the "we make sure >>> there's no OOM", then precision starts to matter: >>> - memory hotplug and hotunplug is a thing >>> - userspace can mlock, and it's configureable >>> - drivers can pin_user_pages for IO and random other stuff. 
Some of it is >>> accounted as some subsystem specific mlock (like rdma does iirc), some >>> is just yolo or short term enough (like) >>> - none of what we do here takes into consideration any interactions with >>> core mm tracking (like cgroups or numa or anything like that) >> OOM is perfectly fine with me, we should just not run into an OOM killer >> situation because we call shmem_read_mapping_page_gfp() in the shrinker >> callback. >> >> Any idea how we could guarantee that? > Uh ... > > I just realized I missed something between how ttm works and how i915 > works. With i915 directly using shmem, all we do in the shrinker is unpin > the shmem pages. With ttm we have the page pool in the middle. And it's > needed for dma coherent and other things. This is rather fundamental. Yeah, the WC/UC handling is otherwise way too slow. Only for normal WB allocations could we do this. > btw I don't think you'll get OOM, you get lockdep splat because you're > recursing on fs_reclaim fake lock. We should probably put a might_alloc() > into shmem_read_mapping_page_gfp(). Yeah, one reason why I haven't enabled that yet. > >>> If we want to drop the "half of system ram" limit (and yes that makes >>> sense) I think the right design is: >>> >>> - we give up on the "no OOM" guarantee. >>> >>> - This means if you want real isolation of tasks, we need cgroups, and we >>> need to integrate ttm cgroups with system memory cgroups somehow. Unlike >>> randomly picked hardcoded limits this should work a lot more reliably >>> and be a lot more useful in practical use in the field. >>> >>> - This also means that drivers start to fail in interesting ways. I think >>> locking headaches are covered with all the lockdep annotations I've >>> pushed, plus some of the things I still have in-flight (I have a >>> might_alloc() annotation somewhere). That leaves validation of error >>> paths for when allocations fail.
Ime a very effective way we used in >>> i915 is (ab)using EINTR restarting, which per drmIoctl uapi spec is >>> required. We could put a debug mode into ttm_tt which randomly fails >>> with -EINTR to make sure it's all working correctly (plus anything else >>> that allocates memory), and unlike real out-of-memory injection piglit >>> and any other cts will complete without failure. Which gives us an >>> excellent metric for "does it work". Actually OOM, even injected one, >>> tends to make stuff blow up in a way that's very hard to track and make >>> sure you've got good coverage, since all your usual tests die pretty >>> quickly. >>> >>> - ttm_tt needs to play fair with every other system memory user. We need >>> to register a shrinker for each ttm_tt (so usually one per device I >>> guess), which walks the lru (in shrink_count) and uses dma_resv_trylock >>> for actual shrinking. We probably want to move it to SYSTEM first for >>> that shrinker to pick up, so that there's some global fairness across >>> all ttm_tt. >> I already have patches for this. >> >> What's still missing is teaching the OOM killer which task is using the >> memory since memory referenced through the file descriptors is currently >> not accounted towards any process resources. > Yeah I don't think we've fixed that problem for i915 either. It loves to > kill random other stuff. In igt we solve this by marking any igt testcase > as "pls kill this first". Well "solve". Well that is a "creative" solution :) I will rather be looking into whether we can somehow track to which files_struct a file belongs and if we can somehow use this in the OOM killer. My last try of giving each file something wasn't received well. The real bummer is that you can really nicely create a denial of service using memfd_create() because of this :)
We've never used those, >>> because thus far i915 didn't have any big need for low memory, so we >>> haven't used this in practice. But it's supposed to be a thing. >> I think we can mostly forget about GFP_DMA32, this should only be used for >> AGP and other ancient hardware. > Ok, that's good. So having an arbitrary "only half of GFP_DMA32" for these > should also be acceptable, since back then gpus didn't really have > gigabytes of vram yet. Biggest r500 seems to have topped out at 512MB. > > How much do we require the dma page pool? Afaiui it's only when there's > bounce buffers or something nasty like that present. Are there more use > cases? Not that I know of. > > For fixing the TT -> SYSTEM problem of calling shmem_read_mapping_page_gfp > from shrinker I think there's 3 solutions: > > 1) Call shmem_read_mapping_page_gfp with GFP_IO. This way we should not be > calling back into our shrinkers, which are at GFP_FS level. This isn't > great, but since page reclaim is a gigantic loop which repeats as long as > we're making meaningful forward progress, we should be able to trickle > TT pages to SYSTEM pages even under severe memory pressure. This is most likely the way to go for us. > 2) For the normal page pool (i.e. neither GFP_DMA32 nor dma_alloc_coherent > or write-combine) we stop copying the pages for TT <-> SYSTEM, but just > directly pin the pages we get from shmem. Like all the shmem based soc > drivers do. No copying, no need for allocations, no problem in > shrinkers. Not really doable, WC is just too widely used and shmem doesn't support larger orders. Regards, Christian. > > 3) For the remaining page pools we'd need ttmfs so that we can control the > address_space_operations and directly swap out from our pages to > swapout, bypassing shmem and the copying. Iirc Chris Wilson had a > prototype once somewhere under gemfs for this. Only needed if 1) isn't > fast enough for these because there's too many such cases where we care.
> > At least on intel hw the direction is very much towards fully coherent, > so 1&2 should be good enough. > -Daniel > >> Christian. >> >>> It's a bit more code than the oneliner above, but I also think it's a lot >>> more solid. Plus it would resolve the last big issue where i915 gem is >>> fairly fundamentally different compared to ttm. For that question I think >>> once Maarten's locking rework for i915 has landed and all the other ttm >>> rework from you and Dave is in, we've resolved them all. >>> >>> >>>> /* But for DMA32 we limit ourself to only use 2GiB maximum. */ >>>> num_dma32_pages = (u64)(si.totalram - si.totalhigh) * si.mem_unit; >>>> -- >>>> 2.25.1 >>>> _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 3/3] drm/ttm: make up to 90% of system memory available 2020-11-19 15:27 ` Christian König @ 2020-11-19 15:42 ` Daniel Vetter 0 siblings, 0 replies; 18+ messages in thread From: Daniel Vetter @ 2020-11-19 15:42 UTC (permalink / raw) To: Christian König Cc: airlied, Felix.Kuehling, sroland, dri-devel, linux-graphics-maintainer On Thu, Nov 19, 2020 at 04:27:00PM +0100, Christian König wrote: > On 18.11.20 at 22:15, Daniel Vetter wrote: > > On Wed, Nov 18, 2020 at 02:35:24PM +0100, Christian König wrote: > > > On 17.11.20 at 18:19, Daniel Vetter wrote: > > > > On Tue, Nov 17, 2020 at 03:06:15PM +0100, Christian König wrote: > > > > > Increase the amount of system memory drivers can use to about 90% of > > > > > the total available. > > > > > > > > > > Signed-off-by: Christian König <christian.koenig@amd.com> > > > > > --- > > > > > drivers/gpu/drm/ttm/ttm_bo.c | 2 +- > > > > > 1 file changed, 1 insertion(+), 1 deletion(-) > > > > > > > > > > diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c > > > > > index a958135cb3fe..0a93df93dba4 100644 > > > > > --- a/drivers/gpu/drm/ttm/ttm_bo.c > > > > > +++ b/drivers/gpu/drm/ttm/ttm_bo.c > > > > > @@ -1267,7 +1267,7 @@ static int ttm_bo_global_init(void) > > > > > * the available system memory. > > > > > */ > > > > > num_pages = (u64)si.totalram * si.mem_unit; > > > > > - num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT; > > > > > + num_pages = (num_pages * 90 / 100) >> PAGE_SHIFT; > > > > I don't think this is the design we want. As long as it was set at "half > > > > of system memory" it was clear that a) it's a hack b) precision didn't > > > > matter. > > > Yeah, I'm also wondering what to do here exactly. In general I would > > > completely nuke that limit if possible.
> > > > > > > But if you go to the limit and still want to keep the "we make sure > > > > there's no OOM", then precision starts to matter: > > > > - memory hotplug and hotunplug is a thing > > > > - userspace can mlock, and it's configurable > > > > - drivers can pin_user_pages for IO and random other stuff. Some of it is > > > > accounted as some subsystem specific mlock (like rdma does iirc), some > > > > is just yolo or short term enough (like) > > > > - none of what we do here takes into consideration any interactions with > > > > core mm tracking (like cgroups or numa or anything like that) > > > OOM is perfectly fine with me, we should just not run into an OOM killer > > > situation because we call shmem_read_mapping_page_gfp() in the shrinker > > > callback. > > > > > > Any idea how we could guarantee that? > > Uh ... > > > > I just realized I missed something between how ttm works and how i915 > > works. With i915 directly using shmem, all we do in the shrinker is unpin > > the shmem pages. With ttm we have the page pool in the middle. And it's > > needed for dma coherent and other things. This is rather fundamental. > > Yeah, the WC/UC handling is otherwise way too slow. Only for normal WB > allocations could we do this. > > > btw I don't think you'll get OOM, you get lockdep splat because you're > > recursing on fs_reclaim fake lock. We should probably put a might_alloc() > > into shmem_read_mapping_page_gfp(). > > Yeah, one reason why I haven't enabled that yet. > > > > > > > If we want to drop the "half of system ram" limit (and yes that makes > > > > sense) I think the right design is: > > > > > > > > - we give up on the "no OOM" guarantee. > > > > > > > > - This means if you want real isolation of tasks, we need cgroups, and we > > > > need to integrate ttm cgroups with system memory cgroups somehow. Unlike > > > > randomly picked hardcoded limits this should work a lot more reliably > > > > and be a lot more useful in practical use in the field.
> > > > > > > > - This also means that drivers start to fail in interesting ways. I think > > > > locking headaches are covered with all the lockdep annotations I've > > > > pushed, plus some of the things I still have in-flight (I have a > > > > might_alloc() annotation somewhere). That leaves validation of error > > > > paths for when allocations fail. Ime a very effective way we used in > > > > i915 is (ab)using EINTR restarting, which per drmIoctl uapi spec is > > > > required. We could put a debug mode into ttm_tt which randomly fails > > > > with -EINTR to make sure it's all working correctly (plus anything else > > > > that allocates memory), and unlike real out-of-memory injection piglit > > > > and any other cts will complete without failure. Which gives us an > > > > excellent metric for "does it work". Actually OOM, even injected one, > > > > tends to make stuff blow up in a way that's very hard to track and make > > > > sure you've got good coverage, since all your usual tests die pretty > > > > quickly. > > > > > > > > - ttm_tt needs to play fair with every other system memory user. We need > > > > to register a shrinker for each ttm_tt (so usually one per device I > > > > guess), which walks the lru (in shrink_count) and uses dma_resv_trylock > > > > for actual shrinking. We probably want to move it to SYSTEM first for > > > > that shrinker to pick up, so that there's some global fairness across > > > > all ttm_tt. > > > I already have patches for this. > > > > > > What's still missing is teaching the OOM killer which task is using the > > > memory since memory referenced through the file descriptors is currently > > > not accounted towards any process resources. > > Yeah I don't think we've fixed that problem for i915 either. It loves to > > kill random other stuff. In igt we solve this by marking any igt testcase > > as "pls kill this first". Well "solve".
> > Well that is a "creative" solution :) > > I will rather be looking into whether we can somehow track to which files_struct > a file belongs and if we can somehow use this in the OOM killer. > > My last try of giving each file something wasn't received well. > > The real bummer is that you can really nicely create a denial of service using > memfd_create() because of this :) Yeah proper accounting is the right fix here I agree. > > > > - for GFP_DMA32 that means zone aware shrinkers. We've never used those, > > > > because thus far i915 didn't have any big need for low memory, so we > > > > haven't used this in practice. But it's supposed to be a thing. > > > I think we can mostly forget about GFP_DMA32, this should only be used for > > > AGP and other ancient hardware. > > Ok, that's good. So having an arbitrary "only half of GFP_DMA32" for these > > should also be acceptable, since back then gpus didn't really have > > gigabytes of vram yet. Biggest r500 seems to have topped out at 512MB. > > > > How much do we require the dma page pool? Afaiui it's only when there's > > bounce buffers or something nasty like that present. Are there more use > > cases? > > Not that I know of. > > > > > > For fixing the TT -> SYSTEM problem of calling shmem_read_mapping_page_gfp > > from shrinker I think there's 3 solutions: > > > > 1) Call shmem_read_mapping_page_gfp with GFP_IO. This way we should not be > > calling back into our shrinkers, which are at GFP_FS level. This isn't > > great, but since page reclaim is a gigantic loop which repeats as long as > > we're making meaningful forward progress, we should be able to trickle > > TT pages to SYSTEM pages even under severe memory pressure. > > This is most likely the way to go for us. GFP_NOFS is the flag, I mixed them up. > > 2) For the normal page pool (i.e. neither GFP_DMA32 nor dma_alloc_coherent > > or write-combine) we stop copying the pages for TT <-> SYSTEM, but just > > directly pin the pages we get from shmem.
Like all the shmem based soc > > drivers do. No copying, no need for allocations, no problem in > > shrinkers. > > Not really doable, WC is just too widely used and shmem doesn't support > larger orders. Hm I thought you get large pages automatically, if you reassemble them. It's just the interface that sucks. btw did you look at 3 below? It might break dma-api assumptions a bit too badly since we're doing nice page flushing behind everyone's back. -Daniel > > Regards, > Christian. > > > > > > 3) For the remaining page pools we'd need ttmfs so that we can control the > > address_space_operations and directly swap out from our pages to > > swapout, bypassing shmem and the copying. Iirc Chris Wilson had a > > prototype once somewhere under gemfs for this. Only needed if 1) isn't > > fast enough for these because there's too many such cases where we care. > > > > At least on intel hw the direction is very much towards fully coherent, > > so 1&2 should be good enough. > > -Daniel > > > > > Christian. > > > > > > > It's a bit more code than the oneliner above, but I also think it's a lot > > > > more solid. Plus it would resolve the last big issue where i915 gem is > > > > fairly fundamentally different compared to ttm. For that question I think > > > > once Maarten's locking rework for i915 has landed and all the other ttm > > > > rework from you and Dave is in, we've resolved them all. > > > > > > > > > > > > > /* But for DMA32 we limit ourself to only use 2GiB maximum. */ > > > > > num_dma32_pages = (u64)(si.totalram - si.totalhigh) * si.mem_unit; > > > > > -- > > > > > 2.25.1 > > > > > > -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 1/3] drm/ttm: rework ttm_tt page limit 2020-11-17 14:06 [PATCH 1/3] drm/ttm: rework ttm_tt page limit Christian König 2020-11-17 14:06 ` [PATCH 2/3] drm/ttm: move memory accounting into vmwgfx Christian König 2020-11-17 14:06 ` [PATCH 3/3] drm/ttm: make up to 90% of system memory available Christian König @ 2020-11-18 7:55 ` kernel test robot 2020-11-19 7:14 ` kernel test robot 3 siblings, 0 replies; 18+ messages in thread From: kernel test robot @ 2020-11-18 7:55 UTC (permalink / raw) To: Christian König, airlied, Felix.Kuehling, daniel, dri-devel, linux-graphics-maintainer, sroland Cc: kbuild-all [-- Attachment #1: Type: text/plain, Size: 1673 bytes --] Hi "Christian, I love your patch! Yet something to improve: [auto build test ERROR on next-20201117] [cannot apply to linus/master v5.10-rc4 v5.10-rc3 v5.10-rc2 v5.10-rc4] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch] url: https://github.com/0day-ci/linux/commits/Christian-K-nig/drm-ttm-rework-ttm_tt-page-limit/20201117-220814 base: 7c8ca8129ee9724cb1527895fe6dec942ef07f19 config: arm-allyesconfig (attached as .config) compiler: arm-linux-gnueabi-gcc (GCC) 9.3.0 reproduce (this is a W=1 build): wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # https://github.com/0day-ci/linux/commit/ed6ebc71bdbee87b189c3127e4e6a840776af4e2 git remote add linux-review https://github.com/0day-ci/linux git fetch --no-tags linux-review Christian-K-nig/drm-ttm-rework-ttm_tt-page-limit/20201117-220814 git checkout ed6ebc71bdbee87b189c3127e4e6a840776af4e2 # save the attached .config to linux build tree COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=arm If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot <lkp@intel.com> All errors (new ones prefixed by 
>>): arm-linux-gnueabi-ld: drivers/gpu/drm/ttm/ttm_memory.o: in function `ttm_mem_global_init': >> (.text+0x14f4): undefined reference to `__aeabi_uldivmod' --- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org [-- Attachment #2: .config.gz --] [-- Type: application/gzip, Size: 77225 bytes --] [-- Attachment #3: Type: text/plain, Size: 160 bytes --] _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 1/3] drm/ttm: rework ttm_tt page limit 2020-11-17 14:06 [PATCH 1/3] drm/ttm: rework ttm_tt page limit Christian König ` (2 preceding siblings ...) 2020-11-18 7:55 ` [PATCH 1/3] drm/ttm: rework ttm_tt page limit kernel test robot @ 2020-11-19 7:14 ` kernel test robot 3 siblings, 0 replies; 18+ messages in thread From: kernel test robot @ 2020-11-19 7:14 UTC (permalink / raw) To: Christian König, airlied, Felix.Kuehling, daniel, dri-devel, linux-graphics-maintainer, sroland Cc: kbuild-all [-- Attachment #1: Type: text/plain, Size: 1609 bytes --] Hi "Christian, I love your patch! Yet something to improve: [auto build test ERROR on next-20201117] [cannot apply to linus/master v5.10-rc4 v5.10-rc3 v5.10-rc2 v5.10-rc4] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch] url: https://github.com/0day-ci/linux/commits/Christian-K-nig/drm-ttm-rework-ttm_tt-page-limit/20201117-220814 base: 7c8ca8129ee9724cb1527895fe6dec942ef07f19 config: mips-allmodconfig (attached as .config) compiler: mips-linux-gcc (GCC) 9.3.0 reproduce (this is a W=1 build): wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # https://github.com/0day-ci/linux/commit/ed6ebc71bdbee87b189c3127e4e6a840776af4e2 git remote add linux-review https://github.com/0day-ci/linux git fetch --no-tags linux-review Christian-K-nig/drm-ttm-rework-ttm_tt-page-limit/20201117-220814 git checkout ed6ebc71bdbee87b189c3127e4e6a840776af4e2 # save the attached .config to linux build tree COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=mips If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot <lkp@intel.com> All errors (new ones prefixed by >>, old ones prefixed by <<): >> ERROR: modpost: "__udivdi3" [drivers/gpu/drm/ttm/ttm.ko] undefined! 
--- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org [-- Attachment #2: .config.gz --] [-- Type: application/gzip, Size: 69191 bytes --] [-- Attachment #3: Type: text/plain, Size: 160 bytes --] _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH 1/3] drm/ttm: rework ttm_tt page limit v3 @ 2021-01-25 13:25 Christian König 2021-01-25 13:25 ` [PATCH 2/3] drm/ttm: move memory accounting into vmwgfx Christian König 0 siblings, 1 reply; 18+ messages in thread From: Christian König @ 2021-01-25 13:25 UTC (permalink / raw) To: daniel, dri-devel TTM implements a rather extensive accounting of allocated memory. There are two reasons for this: 1. It tries to block userspace allocating a huge number of very small BOs without accounting for the kmalloced memory. 2. Make sure we don't over allocate and run into an OOM situation during swapout while trying to handle the memory shortage. This is only partially a good idea. First of all it is perfectly valid for an application to use all of system memory, limiting it to 50% is not really acceptable. What we need to take care of is that the application is held accountable for the memory it allocated. This is what control mechanisms like memcg and the normal Linux page accounting already do. Making sure that we don't run into an OOM situation while trying to cope with a memory shortage is still a good idea, but this is also not very well implemented since it means another opportunity of recursion from the driver back into TTM. So start to rework all of this by implementing a shrinker callback which allows for TT object to be swapped out if necessary. v2: Switch from limit to shrinker callback. 
v3: fix gfp mask handling, use atomic for swapable_pages, add debugfs Signed-off-by: Christian König <christian.koenig@amd.com> --- drivers/gpu/drm/ttm/ttm_bo.c | 4 +- drivers/gpu/drm/ttm/ttm_memory.c | 7 +- drivers/gpu/drm/ttm/ttm_tt.c | 111 ++++++++++++++++++++++++++-- drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 2 +- include/drm/ttm/ttm_bo_api.h | 2 +- include/drm/ttm/ttm_tt.h | 6 +- 6 files changed, 117 insertions(+), 15 deletions(-) diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 20256797f3a6..643befc1a6f2 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -1219,7 +1219,7 @@ EXPORT_SYMBOL(ttm_bo_wait); * A buffer object shrink method that tries to swap out the first * buffer object on the bo_global::swap_lru list. */ -int ttm_bo_swapout(struct ttm_operation_ctx *ctx) +int ttm_bo_swapout(struct ttm_operation_ctx *ctx, gfp_t gfp_flags) { struct ttm_global *glob = &ttm_glob; struct ttm_buffer_object *bo; @@ -1302,7 +1302,7 @@ int ttm_bo_swapout(struct ttm_operation_ctx *ctx) if (bo->bdev->funcs->swap_notify) bo->bdev->funcs->swap_notify(bo); - ret = ttm_tt_swapout(bo->bdev, bo->ttm); + ret = ttm_tt_swapout(bo->bdev, bo->ttm, gfp_flags); out: /** diff --git a/drivers/gpu/drm/ttm/ttm_memory.c b/drivers/gpu/drm/ttm/ttm_memory.c index a3bfbd9cea68..634a85c2dc4c 100644 --- a/drivers/gpu/drm/ttm/ttm_memory.c +++ b/drivers/gpu/drm/ttm/ttm_memory.c @@ -37,6 +37,7 @@ #include <linux/slab.h> #include <linux/swap.h> #include <drm/ttm/ttm_pool.h> +#include <drm/ttm/ttm_tt.h> #include "ttm_module.h" @@ -276,9 +277,9 @@ static void ttm_shrink(struct ttm_mem_global *glob, bool from_wq, while (ttm_zones_above_swap_target(glob, from_wq, extra)) { spin_unlock(&glob->lock); - ret = ttm_bo_swapout(ctx); + ret = ttm_bo_swapout(ctx, GFP_KERNEL); spin_lock(&glob->lock); - if (unlikely(ret != 0)) + if (unlikely(ret < 0)) break; } @@ -453,6 +454,7 @@ int ttm_mem_global_init(struct ttm_mem_global *glob) zone->name, (unsigned long 
long)zone->max_mem >> 10); } ttm_pool_mgr_init(glob->zone_kernel->max_mem/(2*PAGE_SIZE)); + ttm_tt_mgr_init(); return 0; out_no_zone: ttm_mem_global_release(glob); @@ -466,6 +468,7 @@ void ttm_mem_global_release(struct ttm_mem_global *glob) /* let the page allocator first stop the shrink work. */ ttm_pool_mgr_fini(); + ttm_tt_mgr_fini(); flush_workqueue(glob->swap_queue); destroy_workqueue(glob->swap_queue); diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c index 7782d5393c7c..b67795de228d 100644 --- a/drivers/gpu/drm/ttm/ttm_tt.c +++ b/drivers/gpu/drm/ttm/ttm_tt.c @@ -38,6 +38,11 @@ #include <drm/drm_cache.h> #include <drm/ttm/ttm_bo_driver.h> +#include "ttm_module.h" + +static struct shrinker mm_shrinker; +static atomic_long_t swapable_pages; + /* * Allocates a ttm structure for the given BO. */ @@ -223,32 +228,41 @@ int ttm_tt_swapin(struct ttm_tt *ttm) return ret; } -int ttm_tt_swapout(struct ttm_device *bdev, struct ttm_tt *ttm) +/** + * ttm_tt_swapout - swap out tt object + * + * @bdev: TTM device structure. + * @ttm: The struct ttm_tt. + * @gfp_flags: Flags to use for memory allocation. + * + * Swapout a TT object to a shmem_file, return number of pages swapped out or + * negative error code. 
+ */ +int ttm_tt_swapout(struct ttm_device *bdev, struct ttm_tt *ttm, + gfp_t gfp_flags) { + loff_t size = (loff_t)ttm->num_pages << PAGE_SHIFT; struct address_space *swap_space; struct file *swap_storage; struct page *from_page; struct page *to_page; - gfp_t gfp_mask; int i, ret; - swap_storage = shmem_file_setup("ttm swap", - ttm->num_pages << PAGE_SHIFT, - 0); + swap_storage = shmem_file_setup("ttm swap", size, 0); if (IS_ERR(swap_storage)) { pr_err("Failed allocating swap storage\n"); return PTR_ERR(swap_storage); } swap_space = swap_storage->f_mapping; - gfp_mask = mapping_gfp_mask(swap_space); + gfp_flags &= mapping_gfp_mask(swap_space); for (i = 0; i < ttm->num_pages; ++i) { from_page = ttm->pages[i]; if (unlikely(from_page == NULL)) continue; - to_page = shmem_read_mapping_page_gfp(swap_space, i, gfp_mask); + to_page = shmem_read_mapping_page_gfp(swap_space, i, gfp_flags); if (IS_ERR(to_page)) { ret = PTR_ERR(to_page); goto out_err; @@ -263,7 +277,7 @@ int ttm_tt_swapout(struct ttm_device *bdev, struct ttm_tt *ttm) ttm->swap_storage = swap_storage; ttm->page_flags |= TTM_PAGE_FLAG_SWAPPED; - return 0; + return ttm->num_pages; out_err: fput(swap_storage); @@ -280,6 +294,8 @@ static void ttm_tt_add_mapping(struct ttm_device *bdev, struct ttm_tt *ttm) for (i = 0; i < ttm->num_pages; ++i) ttm->pages[i]->mapping = bdev->dev_mapping; + + atomic_long_add(ttm->num_pages, &swapable_pages); } int ttm_tt_populate(struct ttm_device *bdev, @@ -326,6 +342,8 @@ static void ttm_tt_clear_mapping(struct ttm_tt *ttm) (*page)->mapping = NULL; (*page++)->index = 0; } + + atomic_long_sub(ttm->num_pages, &swapable_pages); } void ttm_tt_unpopulate(struct ttm_device *bdev, @@ -341,3 +359,80 @@ void ttm_tt_unpopulate(struct ttm_device *bdev, ttm_pool_free(&bdev->pool, ttm); ttm->page_flags &= ~TTM_PAGE_FLAG_PRIV_POPULATED; } + +/* As long as pages are available make sure to release at least one */ +static unsigned long ttm_tt_shrinker_scan(struct shrinker *shrink, + struct 
shrink_control *sc) +{ + struct ttm_operation_ctx ctx = { + .no_wait_gpu = false + }; + int ret; + + if (!(sc->gfp_mask & __GFP_FS)) + return SHRINK_EMPTY; + + ret = ttm_bo_swapout(&ctx, GFP_NOFS); + return ret < 0 ? SHRINK_EMPTY : ret; +} + +/* Return the number of pages available or SHRINK_EMPTY if we have none */ +static unsigned long ttm_tt_shrinker_count(struct shrinker *shrink, + struct shrink_control *sc) +{ + unsigned long num_pages; + + if (!(sc->gfp_mask & __GFP_FS)) + return SHRINK_EMPTY; + + num_pages = atomic_long_read(&swapable_pages); + return num_pages ? num_pages : SHRINK_EMPTY; +} + +#ifdef CONFIG_DEBUG_FS + +/* Test the shrinker functions and dump the result */ +static int ttm_tt_debugfs_shrink_show(struct seq_file *m, void *data) +{ + struct shrink_control sc = { .gfp_mask = GFP_KERNEL }; + + fs_reclaim_acquire(GFP_KERNEL); + seq_printf(m, "%lu/%lu\n", ttm_tt_shrinker_count(&mm_shrinker, &sc), + ttm_tt_shrinker_scan(&mm_shrinker, &sc)); + fs_reclaim_release(GFP_KERNEL); + + return 0; +} +DEFINE_SHOW_ATTRIBUTE(ttm_tt_debugfs_shrink); + +#endif + + + +/** + * ttm_tt_mgr_init - register with the MM shrinker + * + * Register with the MM shrinker for swapping out BOs. + */ +int ttm_tt_mgr_init(void) +{ +#ifdef CONFIG_DEBUG_FS + debugfs_create_file("tt_shrink", 0400, ttm_debugfs_root, NULL, + &ttm_tt_debugfs_shrink_fops); +#endif + + mm_shrinker.count_objects = ttm_tt_shrinker_count; + mm_shrinker.scan_objects = ttm_tt_shrinker_scan; + mm_shrinker.seeks = 1; + return register_shrinker(&mm_shrinker); +} + +/** + * ttm_tt_mgr_fini - unregister our MM shrinker + * + * Unregisters the MM shrinker. 
+ */ +void ttm_tt_mgr_fini(void) +{ + unregister_shrinker(&mm_shrinker); +} diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c index b454d80c273e..710ba5169a74 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c @@ -1383,7 +1383,7 @@ static int vmw_pm_freeze(struct device *kdev) vmw_execbuf_release_pinned_bo(dev_priv); vmw_resource_evict_all(dev_priv); vmw_release_device_early(dev_priv); - while (ttm_bo_swapout(&ctx) == 0); + while (ttm_bo_swapout(&ctx, GFP_KERNEL) > 0); if (dev_priv->enable_fb) vmw_fifo_resource_dec(dev_priv); if (atomic_read(&dev_priv->num_fifo_resources) != 0) { diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h index 62734db0b421..1297a8fb7ccb 100644 --- a/include/drm/ttm/ttm_bo_api.h +++ b/include/drm/ttm/ttm_bo_api.h @@ -569,7 +569,7 @@ ssize_t ttm_bo_io(struct ttm_device *bdev, struct file *filp, const char __user *wbuf, char __user *rbuf, size_t count, loff_t *f_pos, bool write); -int ttm_bo_swapout(struct ttm_operation_ctx *ctx); +int ttm_bo_swapout(struct ttm_operation_ctx *ctx, gfp_t gfp_flags); /** * ttm_bo_uses_embedded_gem_object - check if the given bo uses the diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h index 0020a0588985..cce57fb49e2c 100644 --- a/include/drm/ttm/ttm_tt.h +++ b/include/drm/ttm/ttm_tt.h @@ -135,7 +135,8 @@ void ttm_tt_destroy_common(struct ttm_device *bdev, struct ttm_tt *ttm); * Swap in a previously swap out ttm_tt. 
*/ int ttm_tt_swapin(struct ttm_tt *ttm); -int ttm_tt_swapout(struct ttm_device *bdev, struct ttm_tt *ttm); +int ttm_tt_swapout(struct ttm_device *bdev, struct ttm_tt *ttm, + gfp_t gfp_flags); /** * ttm_tt_populate - allocate pages for a ttm @@ -155,6 +156,9 @@ int ttm_tt_populate(struct ttm_device *bdev, struct ttm_tt *ttm, struct ttm_oper */ void ttm_tt_unpopulate(struct ttm_device *bdev, struct ttm_tt *ttm); +int ttm_tt_mgr_init(void); +void ttm_tt_mgr_fini(void); + #if IS_ENABLED(CONFIG_AGP) #include <linux/agp_backend.h> -- 2.25.1 _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH 2/3] drm/ttm: move memory accounting into vmwgfx 2021-01-25 13:25 [PATCH 1/3] drm/ttm: rework ttm_tt page limit v3 Christian König @ 2021-01-25 13:25 ` Christian König 2021-01-26 12:23 ` kernel test robot ` (2 more replies) 0 siblings, 3 replies; 18+ messages in thread From: Christian König @ 2021-01-25 13:25 UTC (permalink / raw) To: daniel, dri-devel This is just another feature which is only used by VMWGFX, so move it into the driver instead. I've tried to add the accounting sysfs file to the kobject of the drm minor, but I'm not 100% sure if this works as expected. Signed-off-by: Christian König <christian.koenig@amd.com> --- .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 16 ++++++--- drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 8 ++--- drivers/gpu/drm/drm_gem_vram_helper.c | 6 ++-- drivers/gpu/drm/nouveau/nouveau_bo.c | 7 ++-- drivers/gpu/drm/nouveau/nouveau_drv.h | 1 - drivers/gpu/drm/qxl/qxl_object.c | 4 +-- drivers/gpu/drm/radeon/radeon_object.c | 8 ++--- drivers/gpu/drm/ttm/Makefile | 7 ++-- drivers/gpu/drm/ttm/ttm_bo.c | 33 +------------------ drivers/gpu/drm/ttm/ttm_bo_util.c | 1 - drivers/gpu/drm/ttm/ttm_device.c | 24 +++++++++++--- drivers/gpu/drm/ttm/ttm_pool.c | 13 +------- drivers/gpu/drm/vmwgfx/Makefile | 2 +- drivers/gpu/drm/{ttm => vmwgfx}/ttm_memory.c | 19 ++++------- .../gpu/drm/vmwgfx}/ttm_memory.h | 5 +-- drivers/gpu/drm/vmwgfx/ttm_object.h | 3 +- drivers/gpu/drm/vmwgfx/vmwgfx_bo.c | 22 ++++++++++--- drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 5 +++ drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c | 28 +++++++++++++++- include/drm/ttm/ttm_bo_api.h | 13 ++------ include/drm/ttm/ttm_bo_driver.h | 1 - include/drm/ttm/ttm_tt.h | 1 + 22 files changed, 112 insertions(+), 115 deletions(-) rename drivers/gpu/drm/{ttm => vmwgfx}/ttm_memory.c (97%) rename {include/drm/ttm => drivers/gpu/drm/vmwgfx}/ttm_memory.h (97%) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c index 
0849b68e784f..3e62923a7e63 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c @@ -118,6 +118,16 @@ void amdgpu_amdkfd_gpuvm_init_mem_limits(void) */ #define ESTIMATE_PT_SIZE(mem_size) ((mem_size) >> 14) +static size_t amdgpu_amdkfd_acc_size(uint64_t size) +{ + size >>= PAGE_SHIFT; + size *= sizeof(dma_addr_t) + sizeof(void *); + + return __roundup_pow_of_two(sizeof(struct amdgpu_bo)) + + __roundup_pow_of_two(sizeof(struct ttm_tt)) + + PAGE_ALIGN(size); +} + static int amdgpu_amdkfd_reserve_mem_limit(struct amdgpu_device *adev, uint64_t size, u32 domain, bool sg) { @@ -126,8 +136,7 @@ static int amdgpu_amdkfd_reserve_mem_limit(struct amdgpu_device *adev, size_t acc_size, system_mem_needed, ttm_mem_needed, vram_needed; int ret = 0; - acc_size = ttm_bo_dma_acc_size(&adev->mman.bdev, size, - sizeof(struct amdgpu_bo)); + acc_size = amdgpu_amdkfd_acc_size(size); vram_needed = 0; if (domain == AMDGPU_GEM_DOMAIN_GTT) { @@ -174,8 +183,7 @@ static void unreserve_mem_limit(struct amdgpu_device *adev, { size_t acc_size; - acc_size = ttm_bo_dma_acc_size(&adev->mman.bdev, size, - sizeof(struct amdgpu_bo)); + acc_size = amdgpu_amdkfd_acc_size(size); spin_lock(&kfd_mem_limit.mem_limit_lock); if (domain == AMDGPU_GEM_DOMAIN_GTT) { diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c index 6cc9919b12cc..599c9a132eb6 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c @@ -523,7 +523,6 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev, }; struct amdgpu_bo *bo; unsigned long page_align, size = bp->size; - size_t acc_size; int r; /* Note that GDS/GWS/OA allocates 1 page per byte/resource.
*/ @@ -546,9 +545,6 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev, *bo_ptr = NULL; - acc_size = ttm_bo_dma_acc_size(&adev->mman.bdev, size, - sizeof(struct amdgpu_bo)); - bo = kzalloc(sizeof(struct amdgpu_bo), GFP_KERNEL); if (bo == NULL) return -ENOMEM; @@ -577,8 +573,8 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev, bo->tbo.priority = 1; r = ttm_bo_init_reserved(&adev->mman.bdev, &bo->tbo, size, bp->type, - &bo->placement, page_align, &ctx, acc_size, - NULL, bp->resv, &amdgpu_bo_destroy); + &bo->placement, page_align, &ctx, NULL, + bp->resv, &amdgpu_bo_destroy); if (unlikely(r != 0)) return r; diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c index 0b13c8507688..a0992f0b8afd 100644 --- a/drivers/gpu/drm/drm_gem_vram_helper.c +++ b/drivers/gpu/drm/drm_gem_vram_helper.c @@ -189,7 +189,6 @@ struct drm_gem_vram_object *drm_gem_vram_create(struct drm_device *dev, struct drm_vram_mm *vmm = dev->vram_mm; struct ttm_device *bdev; int ret; - size_t acc_size; if (WARN_ONCE(!vmm, "VRAM MM not initialized")) return ERR_PTR(-EINVAL); @@ -216,7 +215,6 @@ struct drm_gem_vram_object *drm_gem_vram_create(struct drm_device *dev, } bdev = &vmm->bdev; - acc_size = ttm_bo_dma_acc_size(bdev, size, sizeof(*gbo)); gbo->bo.bdev = bdev; drm_gem_vram_placement(gbo, DRM_GEM_VRAM_PL_FLAG_SYSTEM); @@ -226,8 +224,8 @@ struct drm_gem_vram_object *drm_gem_vram_create(struct drm_device *dev, * to release gbo->bo.base and kfree gbo. 
*/ ret = ttm_bo_init(bdev, &gbo->bo, size, ttm_bo_type_device, - &gbo->placement, pg_align, false, acc_size, - NULL, NULL, ttm_buffer_object_destroy); + &gbo->placement, pg_align, false, NULL, NULL, + ttm_buffer_object_destroy); if (ret) return ERR_PTR(ret); diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c index c177940d6e2c..ca2a8ae1938e 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.c +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c @@ -300,18 +300,15 @@ nouveau_bo_init(struct nouveau_bo *nvbo, u64 size, int align, u32 domain, struct sg_table *sg, struct dma_resv *robj) { int type = sg ? ttm_bo_type_sg : ttm_bo_type_device; - size_t acc_size; int ret; - acc_size = ttm_bo_dma_acc_size(nvbo->bo.bdev, size, sizeof(*nvbo)); - nvbo->bo.mem.num_pages = size >> PAGE_SHIFT; nouveau_bo_placement_set(nvbo, domain, 0); INIT_LIST_HEAD(&nvbo->io_reserve_lru); ret = ttm_bo_init(nvbo->bo.bdev, &nvbo->bo, size, type, - &nvbo->placement, align >> PAGE_SHIFT, false, - acc_size, sg, robj, nouveau_bo_del_ttm); + &nvbo->placement, align >> PAGE_SHIFT, false, sg, + robj, nouveau_bo_del_ttm); if (ret) { /* ttm will call nouveau_bo_del_ttm if it fails.. 
*/ return ret; diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.h b/drivers/gpu/drm/nouveau/nouveau_drv.h index edf9d1ee9d58..a491c2c1c56e 100644 --- a/drivers/gpu/drm/nouveau/nouveau_drv.h +++ b/drivers/gpu/drm/nouveau/nouveau_drv.h @@ -54,7 +54,6 @@ #include <drm/ttm/ttm_bo_api.h> #include <drm/ttm/ttm_bo_driver.h> #include <drm/ttm/ttm_placement.h> -#include <drm/ttm/ttm_memory.h> #include <drm/drm_audio_component.h> diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c index ceebc5881f68..705b51535492 100644 --- a/drivers/gpu/drm/qxl/qxl_object.c +++ b/drivers/gpu/drm/qxl/qxl_object.c @@ -138,8 +138,8 @@ int qxl_bo_create(struct qxl_device *qdev, qxl_ttm_placement_from_domain(bo, domain); r = ttm_bo_init_reserved(&qdev->mman.bdev, &bo->tbo, size, type, - &bo->placement, 0, &ctx, size, - NULL, NULL, &qxl_ttm_bo_destroy); + &bo->placement, 0, &ctx, NULL, NULL, + &qxl_ttm_bo_destroy); if (unlikely(r != 0)) { if (r != -ERESTARTSYS) dev_err(qdev->ddev.dev, diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c index 6a336284466f..804f7a427be7 100644 --- a/drivers/gpu/drm/radeon/radeon_object.c +++ b/drivers/gpu/drm/radeon/radeon_object.c @@ -159,7 +159,6 @@ int radeon_bo_create(struct radeon_device *rdev, struct radeon_bo *bo; enum ttm_bo_type type; unsigned long page_align = roundup(byte_align, PAGE_SIZE) >> PAGE_SHIFT; - size_t acc_size; int r; size = ALIGN(size, PAGE_SIZE); @@ -173,9 +172,6 @@ int radeon_bo_create(struct radeon_device *rdev, } *bo_ptr = NULL; - acc_size = ttm_bo_dma_acc_size(&rdev->mman.bdev, size, - sizeof(struct radeon_bo)); - bo = kzalloc(sizeof(struct radeon_bo), GFP_KERNEL); if (bo == NULL) return -ENOMEM; @@ -230,8 +226,8 @@ int radeon_bo_create(struct radeon_device *rdev, /* Kernel allocation are uninterruptible */ down_read(&rdev->pm.mclk_lock); r = ttm_bo_init(&rdev->mman.bdev, &bo->tbo, size, type, - &bo->placement, page_align, !kernel, acc_size, - sg, resv, 
&radeon_ttm_bo_destroy); + &bo->placement, page_align, !kernel, sg, resv, + &radeon_ttm_bo_destroy); up_read(&rdev->pm.mclk_lock); if (unlikely(r != 0)) { return r; diff --git a/drivers/gpu/drm/ttm/Makefile b/drivers/gpu/drm/ttm/Makefile index 8e6437eadabe..40e5e9da7953 100644 --- a/drivers/gpu/drm/ttm/Makefile +++ b/drivers/gpu/drm/ttm/Makefile @@ -2,10 +2,9 @@ # # Makefile for the drm device driver. This driver provides support for the -ttm-y := ttm_memory.o ttm_tt.o ttm_bo.o \ - ttm_bo_util.o ttm_bo_vm.o ttm_module.o \ - ttm_execbuf_util.o ttm_range_manager.o \ - ttm_resource.o ttm_pool.o ttm_device.o +ttm-y := ttm_tt.o ttm_bo.o ttm_bo_util.o ttm_bo_vm.o ttm_module.o \ + ttm_execbuf_util.o ttm_range_manager.o ttm_resource.o ttm_pool.o \ + ttm_device.o ttm-$(CONFIG_AGP) += ttm_agp_backend.o obj-$(CONFIG_DRM_TTM) += ttm.o diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 643befc1a6f2..e38102282fd5 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -425,7 +425,6 @@ static void ttm_bo_release(struct kref *kref) struct ttm_buffer_object *bo = container_of(kref, struct ttm_buffer_object, kref); struct ttm_device *bdev = bo->bdev; - size_t acc_size = bo->acc_size; int ret; if (!bo->deleted) { @@ -485,7 +484,6 @@ static void ttm_bo_release(struct kref *kref) if (!ttm_bo_uses_embedded_gem_object(bo)) dma_resv_fini(&bo->base._resv); bo->destroy(bo); - ttm_mem_global_free(&ttm_mem_glob, acc_size); } void ttm_bo_put(struct ttm_buffer_object *bo) @@ -1046,25 +1044,13 @@ int ttm_bo_init_reserved(struct ttm_device *bdev, struct ttm_placement *placement, uint32_t page_alignment, struct ttm_operation_ctx *ctx, - size_t acc_size, struct sg_table *sg, struct dma_resv *resv, void (*destroy) (struct ttm_buffer_object *)) { - struct ttm_mem_global *mem_glob = &ttm_mem_glob; bool locked; int ret = 0; - ret = ttm_mem_global_alloc(mem_glob, acc_size, ctx); - if (ret) { - pr_err("Out of kernel memory\n"); - if (destroy) - 
(*destroy)(bo); - else - kfree(bo); - return -ENOMEM; - } - bo->destroy = destroy ? destroy : ttm_bo_default_destroy; kref_init(&bo->kref); @@ -1081,7 +1067,6 @@ int ttm_bo_init_reserved(struct ttm_device *bdev, bo->mem.bus.addr = NULL; bo->moving = NULL; bo->mem.placement = 0; - bo->acc_size = acc_size; bo->pin_count = 0; bo->sg = sg; if (resv) { @@ -1142,7 +1127,6 @@ int ttm_bo_init(struct ttm_device *bdev, struct ttm_placement *placement, uint32_t page_alignment, bool interruptible, - size_t acc_size, struct sg_table *sg, struct dma_resv *resv, void (*destroy) (struct ttm_buffer_object *)) @@ -1151,8 +1135,7 @@ int ttm_bo_init(struct ttm_device *bdev, int ret; ret = ttm_bo_init_reserved(bdev, bo, size, type, placement, - page_alignment, &ctx, acc_size, - sg, resv, destroy); + page_alignment, &ctx, sg, resv, destroy); if (ret) return ret; @@ -1163,20 +1146,6 @@ int ttm_bo_init(struct ttm_device *bdev, } EXPORT_SYMBOL(ttm_bo_init); -size_t ttm_bo_dma_acc_size(struct ttm_device *bdev, - unsigned long bo_size, - unsigned struct_size) -{ - unsigned npages = (PAGE_ALIGN(bo_size)) >> PAGE_SHIFT; - size_t size = 0; - - size += ttm_round_pot(struct_size); - size += ttm_round_pot(npages * (2*sizeof(void *) + sizeof(dma_addr_t))); - size += ttm_round_pot(sizeof(struct ttm_tt)); - return size; -} -EXPORT_SYMBOL(ttm_bo_dma_acc_size); - /* * buffer object vm functions. 
*/ diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c index db0f2661d504..031e5819fec4 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c @@ -309,7 +309,6 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo, kref_init(&fbo->base.kref); fbo->base.destroy = &ttm_transfered_destroy; - fbo->base.acc_size = 0; fbo->base.pin_count = 0; if (bo->type != ttm_bo_type_sg) fbo->base.base.resv = &fbo->base.base._resv; diff --git a/drivers/gpu/drm/ttm/ttm_device.c b/drivers/gpu/drm/ttm/ttm_device.c index ac0903c9e60a..b9eae352aef8 100644 --- a/drivers/gpu/drm/ttm/ttm_device.c +++ b/drivers/gpu/drm/ttm/ttm_device.c @@ -27,9 +27,12 @@ #define pr_fmt(fmt) "[TTM DEVICE] " fmt +#include <linux/mm.h> + #include <drm/ttm/ttm_device.h> -#include <drm/ttm/ttm_memory.h> +#include <drm/ttm/ttm_tt.h> #include <drm/ttm/ttm_placement.h> +#include <drm/ttm/ttm_bo_api.h> #include "ttm_module.h" @@ -49,9 +52,11 @@ static void ttm_global_release(void) if (--ttm_glob_use_count > 0) goto out; + ttm_pool_mgr_fini(); + ttm_tt_mgr_fini(); + kobject_del(&glob->kobj); kobject_put(&glob->kobj); - ttm_mem_global_release(&ttm_mem_glob); __free_page(glob->dummy_read_page); memset(glob, 0, sizeof(*glob)); out: @@ -61,6 +66,8 @@ static void ttm_global_release(void) static int ttm_global_init(void) { struct ttm_global *glob = &ttm_glob; + uint64_t num_pages; + struct sysinfo si; int ret = 0; unsigned i; @@ -68,9 +75,16 @@ static int ttm_global_init(void) if (++ttm_glob_use_count > 1) goto out; - ret = ttm_mem_global_init(&ttm_mem_glob); - if (ret) - goto out; + si_meminfo(&si); + + /* Limit the number of pages in the pool to about 50% of the total + * system memory. 
+ */ + num_pages = (u64)si.totalram * si.mem_unit; + num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT; + + ttm_pool_mgr_init(num_pages); + ttm_tt_mgr_init(); spin_lock_init(&glob->lru_lock); glob->dummy_read_page = alloc_page(__GFP_ZERO | GFP_DMA32); diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c index e0617717113f..6b0f957d63d5 100644 --- a/drivers/gpu/drm/ttm/ttm_pool.c +++ b/drivers/gpu/drm/ttm/ttm_pool.c @@ -404,16 +404,10 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt, caching = pages + (1 << order); } - r = ttm_mem_global_alloc_page(&ttm_mem_glob, p, - (1 << order) * PAGE_SIZE, - ctx); - if (r) - goto error_free_page; - if (dma_addr) { r = ttm_pool_map(pool, order, p, &dma_addr); if (r) - goto error_global_free; + goto error_free_page; } num_pages -= 1 << order; @@ -427,9 +421,6 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt, return 0; -error_global_free: - ttm_mem_global_free_page(&ttm_mem_glob, p, (1 << order) * PAGE_SIZE); - error_free_page: ttm_pool_free_page(pool, tt->caching, order, p); @@ -464,8 +455,6 @@ void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt) order = ttm_pool_page_order(pool, p); num_pages = 1ULL << order; - ttm_mem_global_free_page(&ttm_mem_glob, p, - num_pages * PAGE_SIZE); if (tt->dma_address) ttm_pool_unmap(pool, tt->dma_address[i], num_pages); diff --git a/drivers/gpu/drm/vmwgfx/Makefile b/drivers/gpu/drm/vmwgfx/Makefile index cc4cdca7176e..8c02fa5852e7 100644 --- a/drivers/gpu/drm/vmwgfx/Makefile +++ b/drivers/gpu/drm/vmwgfx/Makefile @@ -9,7 +9,7 @@ vmwgfx-y := vmwgfx_execbuf.o vmwgfx_gmr.o vmwgfx_kms.o vmwgfx_drv.o \ vmwgfx_cotable.o vmwgfx_so.o vmwgfx_binding.o vmwgfx_msg.o \ vmwgfx_simple_resource.o vmwgfx_va.o vmwgfx_blit.o \ vmwgfx_validation.o vmwgfx_page_dirty.o vmwgfx_streamoutput.o \ - ttm_object.o ttm_lock.o + ttm_object.o ttm_lock.o ttm_memory.o vmwgfx-$(CONFIG_TRANSPARENT_HUGEPAGE) += vmwgfx_thp.o obj-$(CONFIG_DRM_VMWGFX) := vmwgfx.o diff --git 
a/drivers/gpu/drm/ttm/ttm_memory.c b/drivers/gpu/drm/vmwgfx/ttm_memory.c similarity index 97% rename from drivers/gpu/drm/ttm/ttm_memory.c rename to drivers/gpu/drm/vmwgfx/ttm_memory.c index 634a85c2dc4c..1306d9e0f095 100644 --- a/drivers/gpu/drm/ttm/ttm_memory.c +++ b/drivers/gpu/drm/vmwgfx/ttm_memory.c @@ -28,7 +28,6 @@ #define pr_fmt(fmt) "[TTM] " fmt -#include <drm/ttm/ttm_memory.h> #include <linux/spinlock.h> #include <linux/sched.h> #include <linux/wait.h> @@ -36,10 +35,11 @@ #include <linux/module.h> #include <linux/slab.h> #include <linux/swap.h> -#include <drm/ttm/ttm_pool.h> -#include <drm/ttm/ttm_tt.h> -#include "ttm_module.h" +#include <drm/drm_device.h> +#include <drm/drm_file.h> + +#include "ttm_memory.h" #define TTM_MEMORY_ALLOC_RETRIES 4 @@ -414,7 +414,7 @@ static int ttm_mem_init_dma32_zone(struct ttm_mem_global *glob, } #endif -int ttm_mem_global_init(struct ttm_mem_global *glob) +int ttm_mem_global_init(struct ttm_mem_global *glob, struct drm_device *dev) { struct sysinfo si; int ret; @@ -425,7 +425,8 @@ int ttm_mem_global_init(struct ttm_mem_global *glob) glob->swap_queue = create_singlethread_workqueue("ttm_swap"); INIT_WORK(&glob->work, ttm_shrink_work); ret = kobject_init_and_add( - &glob->kobj, &ttm_mem_glob_kobj_type, ttm_get_kobj(), "memory_accounting"); + &glob->kobj, &ttm_mem_glob_kobj_type, &dev->primary->kdev->kobj, + "memory_accounting"); if (unlikely(ret != 0)) { kobject_put(&glob->kobj); return ret; @@ -453,8 +454,6 @@ int ttm_mem_global_init(struct ttm_mem_global *glob) pr_info("Zone %7s: Available graphics memory: %llu KiB\n", zone->name, (unsigned long long)zone->max_mem >> 10); } - ttm_pool_mgr_init(glob->zone_kernel->max_mem/(2*PAGE_SIZE)); - ttm_tt_mgr_init(); return 0; out_no_zone: ttm_mem_global_release(glob); @@ -466,10 +465,6 @@ void ttm_mem_global_release(struct ttm_mem_global *glob) struct ttm_mem_zone *zone; unsigned int i; - /* let the page allocator first stop the shrink work. 
*/ - ttm_pool_mgr_fini(); - ttm_tt_mgr_fini(); - flush_workqueue(glob->swap_queue); destroy_workqueue(glob->swap_queue); glob->swap_queue = NULL; diff --git a/include/drm/ttm/ttm_memory.h b/drivers/gpu/drm/vmwgfx/ttm_memory.h similarity index 97% rename from include/drm/ttm/ttm_memory.h rename to drivers/gpu/drm/vmwgfx/ttm_memory.h index c1f167881e33..850ee6c867da 100644 --- a/include/drm/ttm/ttm_memory.h +++ b/drivers/gpu/drm/vmwgfx/ttm_memory.h @@ -35,7 +35,8 @@ #include <linux/errno.h> #include <linux/kobject.h> #include <linux/mm.h> -#include "ttm_bo_api.h" + +#include <drm/ttm/ttm_bo_api.h> /** * struct ttm_mem_global - Global memory accounting structure. @@ -79,7 +80,7 @@ extern struct ttm_mem_global { #endif } ttm_mem_glob; -int ttm_mem_global_init(struct ttm_mem_global *glob); +int ttm_mem_global_init(struct ttm_mem_global *glob, struct drm_device *dev); void ttm_mem_global_release(struct ttm_mem_global *glob); int ttm_mem_global_alloc(struct ttm_mem_global *glob, uint64_t memory, struct ttm_operation_ctx *ctx); diff --git a/drivers/gpu/drm/vmwgfx/ttm_object.h b/drivers/gpu/drm/vmwgfx/ttm_object.h index ede26df87c93..49b064f0cb19 100644 --- a/drivers/gpu/drm/vmwgfx/ttm_object.h +++ b/drivers/gpu/drm/vmwgfx/ttm_object.h @@ -43,7 +43,8 @@ #include <linux/rcupdate.h> #include <drm/drm_hashtab.h> -#include <drm/ttm/ttm_memory.h> + +#include "ttm_memory.h" /** * enum ttm_ref_type diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c index 6b3bfd8c678a..50e529a01677 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c @@ -507,11 +507,16 @@ int vmw_bo_create_kernel(struct vmw_private *dev_priv, unsigned long size, acc_size = ttm_round_pot(sizeof(*bo)); acc_size += ttm_round_pot(npages * sizeof(void *)); acc_size += ttm_round_pot(sizeof(struct ttm_tt)); + + ret = ttm_mem_global_alloc(&ttm_mem_glob, acc_size, &ctx); + if (unlikely(ret)) + goto error_free; + ret = ttm_bo_init_reserved(&dev_priv->bdev, 
bo, size, ttm_bo_type_device, placement, 0, - &ctx, acc_size, NULL, NULL, NULL); + &ctx, NULL, NULL, NULL); if (unlikely(ret)) - goto error_free; + goto error_account; ttm_bo_pin(bo); ttm_bo_unreserve(bo); @@ -519,6 +524,9 @@ int vmw_bo_create_kernel(struct vmw_private *dev_priv, unsigned long size, return 0; +error_account: + ttm_mem_global_free(&ttm_mem_glob, acc_size); + error_free: kfree(bo); return ret; @@ -558,11 +566,17 @@ int vmw_bo_init(struct vmw_private *dev_priv, vmw_bo->base.priority = 3; vmw_bo->res_tree = RB_ROOT; + ret = ttm_mem_global_alloc(&ttm_mem_glob, acc_size, &ctx); + if (unlikely(ret)) + return ret; + ret = ttm_bo_init_reserved(bdev, &vmw_bo->base, size, ttm_bo_type_device, placement, - 0, &ctx, acc_size, NULL, NULL, bo_free); - if (unlikely(ret)) + 0, &ctx, NULL, NULL, bo_free); + if (unlikely(ret)) { + ttm_mem_global_free(&ttm_mem_glob, acc_size); return ret; + } if (pin) ttm_bo_pin(&vmw_bo->base); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c index 710ba5169a74..ca59a879f0f0 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c @@ -1268,6 +1268,7 @@ static void vmw_remove(struct pci_dev *pdev) { struct drm_device *dev = pci_get_drvdata(pdev); + ttm_mem_global_release(&ttm_mem_glob); drm_dev_unregister(dev); vmw_driver_unload(dev); } @@ -1511,6 +1512,10 @@ static int vmw_probe(struct pci_dev *pdev, const struct pci_device_id *ent) if (ret) return ret; + ret = ttm_mem_global_init(&ttm_mem_glob, &vmw->drm); + if (ret) + return ret; + vmw = devm_drm_dev_alloc(&pdev->dev, &driver, struct vmw_private, drm); if (IS_ERR(vmw)) diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c index d1bfa59579f1..63f10c865061 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c @@ -576,11 +576,31 @@ static void vmw_ttm_destroy(struct ttm_device *bdev, struct ttm_tt *ttm) static int 
vmw_ttm_populate(struct ttm_device *bdev, struct ttm_tt *ttm, struct ttm_operation_ctx *ctx) { + unsigned int i; + int ret; + /* TODO: maybe completely drop this ? */ if (ttm_tt_is_populated(ttm)) return 0; - return ttm_pool_alloc(&bdev->pool, ttm, ctx); + ret = ttm_pool_alloc(&bdev->pool, ttm, ctx); + if (ret) + return ret; + + for (i = 0; i < ttm->num_pages; ++i) { + ret = ttm_mem_global_alloc_page(&ttm_mem_glob, ttm->pages[i], + PAGE_SIZE, ctx); + if (ret) + goto error; + } + return 0; + +error: + while (i--) + ttm_mem_global_free_page(&ttm_mem_glob, ttm->pages[i], + PAGE_SIZE); + ttm_pool_free(&bdev->pool, ttm); + return ret; } static void vmw_ttm_unpopulate(struct ttm_device *bdev, @@ -588,6 +608,7 @@ static void vmw_ttm_unpopulate(struct ttm_device *bdev, { struct vmw_ttm_tt *vmw_tt = container_of(ttm, struct vmw_ttm_tt, dma_ttm); + unsigned int i; if (vmw_tt->mob) { vmw_mob_destroy(vmw_tt->mob); @@ -595,6 +616,11 @@ static void vmw_ttm_unpopulate(struct ttm_device *bdev, } vmw_ttm_unmap_dma(vmw_tt); + + for (i = 0; i < ttm->num_pages; ++i) + ttm_mem_global_free_page(&ttm_mem_glob, ttm->pages[i], + PAGE_SIZE); + ttm_pool_free(&bdev->pool, ttm); } diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h index 1297a8fb7ccb..4fb523dfab32 100644 --- a/include/drm/ttm/ttm_bo_api.h +++ b/include/drm/ttm/ttm_bo_api.h @@ -88,7 +88,6 @@ struct ttm_tt; * @type: The bo type. * @destroy: Destruction function. If NULL, kfree is used. * @num_pages: Actual number of pages. - * @acc_size: Accounted size for this object. * @kref: Reference count of this buffer object. When this refcount reaches * zero, the object is destroyed or put on the delayed delete list. * @mem: structure describing current placement. @@ -125,7 +124,6 @@ struct ttm_buffer_object { struct ttm_device *bdev; enum ttm_bo_type type; void (*destroy) (struct ttm_buffer_object *); - size_t acc_size; /** * Members not needing protection. 
@@ -357,10 +355,6 @@ void ttm_bo_unlock_delayed_workqueue(struct ttm_device *bdev, int resched); bool ttm_bo_eviction_valuable(struct ttm_buffer_object *bo, const struct ttm_place *place); -size_t ttm_bo_dma_acc_size(struct ttm_device *bdev, - unsigned long bo_size, - unsigned struct_size); - /** * ttm_bo_init_reserved * @@ -371,7 +365,6 @@ size_t ttm_bo_dma_acc_size(struct ttm_device *bdev, * @flags: Initial placement flags. * @page_alignment: Data alignment in pages. * @ctx: TTM operation context for memory allocation. - * @acc_size: Accounted size for this object. * @resv: Pointer to a dma_resv, or NULL to let ttm allocate one. * @destroy: Destroy function. Use NULL for kfree(). * @@ -402,8 +395,7 @@ int ttm_bo_init_reserved(struct ttm_device *bdev, struct ttm_placement *placement, uint32_t page_alignment, struct ttm_operation_ctx *ctx, - size_t acc_size, struct sg_table *sg, - struct dma_resv *resv, + struct sg_table *sg, struct dma_resv *resv, void (*destroy) (struct ttm_buffer_object *)); /** @@ -421,7 +413,6 @@ int ttm_bo_init_reserved(struct ttm_device *bdev, * holds a pointer to a persistent shmem object. Typically, this would * point to the shmem object backing a GEM object if TTM is used to back a * GEM user interface. - * @acc_size: Accounted size for this object. * @resv: Pointer to a dma_resv, or NULL to let ttm allocate one. * @destroy: Destroy function. Use NULL for kfree(). 
* @@ -446,7 +437,7 @@ int ttm_bo_init_reserved(struct ttm_device *bdev, int ttm_bo_init(struct ttm_device *bdev, struct ttm_buffer_object *bo, size_t size, enum ttm_bo_type type, struct ttm_placement *placement, - uint32_t page_alignment, bool interrubtible, size_t acc_size, + uint32_t page_alignment, bool interrubtible, struct sg_table *sg, struct dma_resv *resv, void (*destroy) (struct ttm_buffer_object *)); diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h index 1c9bf993e252..8959c0075cfd 100644 --- a/include/drm/ttm/ttm_bo_driver.h +++ b/include/drm/ttm/ttm_bo_driver.h @@ -40,7 +40,6 @@ #include <drm/ttm/ttm_device.h> #include "ttm_bo_api.h" -#include "ttm_memory.h" #include "ttm_placement.h" #include "ttm_tt.h" #include "ttm_pool.h" diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h index cce57fb49e2c..069f8130241a 100644 --- a/include/drm/ttm/ttm_tt.h +++ b/include/drm/ttm/ttm_tt.h @@ -30,6 +30,7 @@ #include <linux/types.h> #include <drm/ttm/ttm_caching.h> +struct ttm_bo_device; struct ttm_tt; struct ttm_resource; struct ttm_buffer_object; -- 2.25.1 _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH 2/3] drm/ttm: move memory accounting into vmwgfx 2021-01-25 13:25 ` [PATCH 2/3] drm/ttm: move memory accounting into vmwgfx Christian König @ 2021-01-26 12:23 ` kernel test robot 2021-01-26 14:02 ` kernel test robot 2021-01-26 16:56 ` kernel test robot 2 siblings, 0 replies; 18+ messages in thread From: kernel test robot @ 2021-01-26 12:23 UTC (permalink / raw) To: Christian König, daniel, dri-devel; +Cc: kbuild-all [-- Attachment #1: Type: text/plain, Size: 1782 bytes --] Hi "Christian, I love your patch! Yet something to improve: [auto build test ERROR on drm-tip/drm-tip] [also build test ERROR on next-20210125] [cannot apply to drm-intel/for-linux-next drm-exynos/exynos-drm-next tegra-drm/drm/tegra/for-next linus/master drm/drm-next v5.11-rc5] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch] url: https://github.com/0day-ci/linux/commits/Christian-K-nig/drm-ttm-rework-ttm_tt-page-limit-v3/20210126-140030 base: git://anongit.freedesktop.org/drm/drm-tip drm-tip config: riscv-rv32_defconfig (attached as .config) compiler: riscv32-linux-gcc (GCC) 9.3.0 reproduce (this is a W=1 build): wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # https://github.com/0day-ci/linux/commit/8bdbdb659aa407a4f3e28dab6d6465304d326888 git remote add linux-review https://github.com/0day-ci/linux git fetch --no-tags linux-review Christian-K-nig/drm-ttm-rework-ttm_tt-page-limit-v3/20210126-140030 git checkout 8bdbdb659aa407a4f3e28dab6d6465304d326888 # save the attached .config to linux build tree COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=riscv If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot <lkp@intel.com> All errors (new ones prefixed by >>): riscv32-linux-ld: drivers/gpu/drm/ttm/ttm_device.o: in 
function `.L20': >> ttm_device.c:(.text+0x330): undefined reference to `__udivdi3' --- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org [-- Attachment #2: .config.gz --] [-- Type: application/gzip, Size: 20447 bytes --] [-- Attachment #3: Type: text/plain, Size: 160 bytes --] _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 18+ messages in thread
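The undefined `__udivdi3` reference comes from the `num_pages * 50 / 100` computation added to ttm_global_init(): on 32-bit targets a plain 64-bit division is lowered to a libgcc call that the kernel does not provide. One possible fix, sketched here in plain C rather than kernel code, is to take 50% with a shift so no 64-bit division is emitted at all:

```c
#include <stdint.h>

/* On 32-bit kernels a u64 division becomes a call to libgcc's
 * __udivdi3 (or __aeabi_uldivmod on ARM), which is unavailable.
 * Halving with a shift sidesteps the division; for an integer x,
 * x * 50 / 100 and x >> 1 are the same value. */
static uint64_t half_pages(uint64_t total_bytes, unsigned int page_shift)
{
	uint64_t num_pages = total_bytes >> page_shift;

	return num_pages >> 1;	/* 50%, no 64-bit division emitted */
}
```

In the kernel itself, `div_u64(num_pages * 50, 100)` would be the alternative that keeps the percentage form while avoiding the libgcc call.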
* Re: [PATCH 2/3] drm/ttm: move memory accounting into vmwgfx 2021-01-25 13:25 ` [PATCH 2/3] drm/ttm: move memory accounting into vmwgfx Christian König 2021-01-26 12:23 ` kernel test robot @ 2021-01-26 14:02 ` kernel test robot 2021-01-26 16:56 ` kernel test robot 2 siblings, 0 replies; 18+ messages in thread From: kernel test robot @ 2021-01-26 14:02 UTC (permalink / raw) To: Christian König, daniel, dri-devel; +Cc: clang-built-linux, kbuild-all [-- Attachment #1: Type: text/plain, Size: 3347 bytes --] Hi "Christian, I love your patch! Perhaps something to improve: [auto build test WARNING on drm-tip/drm-tip] [also build test WARNING on next-20210125] [cannot apply to drm-intel/for-linux-next drm-exynos/exynos-drm-next tegra-drm/drm/tegra/for-next linus/master drm/drm-next v5.11-rc5] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch] url: https://github.com/0day-ci/linux/commits/Christian-K-nig/drm-ttm-rework-ttm_tt-page-limit-v3/20210126-140030 base: git://anongit.freedesktop.org/drm/drm-tip drm-tip config: x86_64-randconfig-a003-20210126 (attached as .config) compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project 925ae8c790c7e354f12ec14a6cac6aa49fc75b29) reproduce (this is a W=1 build): wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # install x86_64 cross compiling tool for clang build # apt-get install binutils-x86-64-linux-gnu # https://github.com/0day-ci/linux/commit/8bdbdb659aa407a4f3e28dab6d6465304d326888 git remote add linux-review https://github.com/0day-ci/linux git fetch --no-tags linux-review Christian-K-nig/drm-ttm-rework-ttm_tt-page-limit-v3/20210126-140030 git checkout 8bdbdb659aa407a4f3e28dab6d6465304d326888 # save the attached .config to linux build tree COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=x86_64 
If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot <lkp@intel.com> All warnings (new ones prefixed by >>): >> drivers/gpu/drm/vmwgfx/vmwgfx_drv.c:1515:44: warning: variable 'vmw' is uninitialized when used here [-Wuninitialized] ret = ttm_mem_global_init(&ttm_mem_glob, &vmw->drm); ^~~ drivers/gpu/drm/vmwgfx/vmwgfx_drv.c:1504:25: note: initialize the variable 'vmw' to silence this warning struct vmw_private *vmw; ^ = NULL 1 warning generated. vim +/vmw +1515 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c 1501 1502 static int vmw_probe(struct pci_dev *pdev, const struct pci_device_id *ent) 1503 { 1504 struct vmw_private *vmw; 1505 int ret; 1506 1507 ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "svgadrmfb"); 1508 if (ret) 1509 return ret; 1510 1511 ret = pcim_enable_device(pdev); 1512 if (ret) 1513 return ret; 1514 > 1515 ret = ttm_mem_global_init(&ttm_mem_glob, &vmw->drm); 1516 if (ret) 1517 return ret; 1518 1519 vmw = devm_drm_dev_alloc(&pdev->dev, &driver, 1520 struct vmw_private, drm); 1521 if (IS_ERR(vmw)) 1522 return PTR_ERR(vmw); 1523 1524 pci_set_drvdata(pdev, &vmw->drm); 1525 1526 ret = vmw_driver_load(vmw, ent->device); 1527 if (ret) 1528 return ret; 1529 1530 ret = drm_dev_register(&vmw->drm, 0); 1531 if (ret) { 1532 vmw_driver_unload(&vmw->drm); 1533 return ret; 1534 } 1535 1536 return 0; 1537 } 1538 --- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org [-- Attachment #2: .config.gz --] [-- Type: application/gzip, Size: 41828 bytes --] [-- Attachment #3: Type: text/plain, Size: 160 bytes --] _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 18+ messages in thread
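The warning is straightforward: the new ttm_mem_global_init() call was placed before devm_drm_dev_alloc(), so `&vmw->drm` is taken from an uninitialized pointer. A plausible fix, sketched below with stand-in structs and a mocked init rather than the real DRM API, is simply to allocate the device first:

```c
#include <stdlib.h>

/* Stand-ins for the real DRM/vmwgfx structures. */
struct drm_device { int primary_registered; };
struct vmw_private { struct drm_device drm; };

/* Mock of ttm_mem_global_init(): the real one dereferences
 * dev->primary->kdev, so dev must point into a live vmw_private. */
static int ttm_mem_global_init_mock(struct drm_device *dev)
{
	dev->primary_registered = 1;
	return 0;
}

static struct vmw_private *vmw_probe_fixed(void)
{
	/* Allocate first (devm_drm_dev_alloc() stand-in)... */
	struct vmw_private *vmw = calloc(1, sizeof(*vmw));

	if (!vmw)
		return NULL;

	/* ...and only then hand &vmw->drm to the accounting init. */
	if (ttm_mem_global_init_mock(&vmw->drm)) {
		free(vmw);
		return NULL;
	}
	return vmw;
}
```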
* Re: [PATCH 2/3] drm/ttm: move memory accounting into vmwgfx 2021-01-25 13:25 ` [PATCH 2/3] drm/ttm: move memory accounting into vmwgfx Christian König 2021-01-26 12:23 ` kernel test robot 2021-01-26 14:02 ` kernel test robot @ 2021-01-26 16:56 ` kernel test robot 2 siblings, 0 replies; 18+ messages in thread From: kernel test robot @ 2021-01-26 16:56 UTC (permalink / raw) To: Christian König, daniel, dri-devel; +Cc: kbuild-all [-- Attachment #1: Type: text/plain, Size: 1739 bytes --] Hi "Christian, I love your patch! Yet something to improve: [auto build test ERROR on drm-tip/drm-tip] [also build test ERROR on next-20210125] [cannot apply to drm-intel/for-linux-next drm-exynos/exynos-drm-next tegra-drm/drm/tegra/for-next linus/master drm/drm-next v5.11-rc5] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch] url: https://github.com/0day-ci/linux/commits/Christian-K-nig/drm-ttm-rework-ttm_tt-page-limit-v3/20210126-140030 base: git://anongit.freedesktop.org/drm/drm-tip drm-tip config: arm-defconfig (attached as .config) compiler: arm-linux-gnueabi-gcc (GCC) 9.3.0 reproduce (this is a W=1 build): wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # https://github.com/0day-ci/linux/commit/8bdbdb659aa407a4f3e28dab6d6465304d326888 git remote add linux-review https://github.com/0day-ci/linux git fetch --no-tags linux-review Christian-K-nig/drm-ttm-rework-ttm_tt-page-limit-v3/20210126-140030 git checkout 8bdbdb659aa407a4f3e28dab6d6465304d326888 # save the attached .config to linux build tree COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=arm If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot <lkp@intel.com> All errors (new ones prefixed by >>, old ones prefixed by <<): >> ERROR: modpost: "__aeabi_uldivmod" 
[drivers/gpu/drm/ttm/ttm.ko] undefined! --- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org [-- Attachment #2: .config.gz --] [-- Type: application/gzip, Size: 54200 bytes --] [-- Attachment #3: Type: text/plain, Size: 160 bytes --] _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 18+ messages in thread
end of thread, other threads:[~2021-01-26 16:56 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-11-17 14:06 [PATCH 1/3] drm/ttm: rework ttm_tt page limit Christian König
2020-11-17 14:06 ` [PATCH 2/3] drm/ttm: move memory accounting into vmwgfx Christian König
2020-11-18 20:58 ` kernel test robot
2020-11-19  7:09 ` kernel test robot
2020-11-17 14:06 ` [PATCH 3/3] drm/ttm: make up to 90% of system memory available Christian König
2020-11-17 16:38 ` Michel Dänzer
2020-11-18 12:09 ` Christian König
2020-11-17 17:19 ` Daniel Vetter
2020-11-18 13:35 ` Christian König
2020-11-18 21:15 ` Daniel Vetter
2020-11-19 15:27 ` Christian König
2020-11-19 15:42 ` Daniel Vetter
2020-11-18  7:55 ` [PATCH 1/3] drm/ttm: rework ttm_tt page limit kernel test robot
2020-11-19  7:14 ` kernel test robot
2021-01-25 13:25 [PATCH 1/3] drm/ttm: rework ttm_tt page limit v3 Christian König
2021-01-25 13:25 ` [PATCH 2/3] drm/ttm: move memory accounting into vmwgfx Christian König
2021-01-26 12:23 ` kernel test robot
2021-01-26 14:02 ` kernel test robot
2021-01-26 16:56 ` kernel test robot