* [PATCH v8] drm: Add library for shmem backed GEM objects @ 2019-03-13 0:43 Rob Herring 2019-03-13 20:56 ` Eric Anholt 2019-03-14 16:34 ` [PATCH] drm/v3d: Use the new shmem helpers to reduce driver boilerplate Eric Anholt 0 siblings, 2 replies; 8+ messages in thread From: Rob Herring @ 2019-03-13 0:43 UTC (permalink / raw) To: dri-devel, Eric Anholt From: Noralf Trønnes <noralf@tronnes.org> This adds a library for shmem backed GEM objects. v8: - export drm_gem_shmem_create_with_handle - call mapping_set_gfp_mask to set default zone to GFP_HIGHUSER - Add helper drm_gem_shmem_get_pages_sgt() v7: - Use write-combine for mmap instead. This is the more common case. (robher) v6: - Fix uninitialized variable issue in an error path (anholt). - Add a drm_gem_shmem_vm_open() to the fops to get proper refcounting of the pages (anholt). v5: - Drop drm_gem_shmem_prime_mmap() (Daniel Vetter) - drm_gem_shmem_mmap(): Subtract drm_vma_node_start() to get the real vma->vm_pgoff - drm_gem_shmem_fault(): Use vmf->pgoff now that vma->vm_pgoff is correct v4: - Drop cache modes (Thomas Hellstrom) - Add a GEM attached vtable v3: - Grammar (Sam Ravnborg) - s/drm_gem_shmem_put_pages_unlocked/drm_gem_shmem_put_pages_locked/ (Sam Ravnborg) - Add debug output in error path (Sam Ravnborg) Signed-off-by: Noralf Trønnes <noralf@tronnes.org> Signed-off-by: Eric Anholt <eric@anholt.net> Signed-off-by: Rob Herring <robh@kernel.org> --- Documentation/gpu/drm-kms-helpers.rst | 12 + drivers/gpu/drm/Kconfig | 6 + drivers/gpu/drm/Makefile | 1 + drivers/gpu/drm/drm_gem_shmem_helper.c | 625 +++++++++++++++++++++++++ include/drm/drm_gem_shmem_helper.h | 159 +++++++ 5 files changed, 803 insertions(+) create mode 100644 drivers/gpu/drm/drm_gem_shmem_helper.c create mode 100644 include/drm/drm_gem_shmem_helper.h diff --git a/Documentation/gpu/drm-kms-helpers.rst b/Documentation/gpu/drm-kms-helpers.rst index 17ca7f8bf3d3..58b375e47615 100644 --- a/Documentation/gpu/drm-kms-helpers.rst +++ b/Documentation/gpu/drm-kms-helpers.rst @@ -369,3 +369,15 @@ Legacy CRTC/Modeset Helper Functions Reference .. kernel-doc:: drivers/gpu/drm/drm_crtc_helper.c :export: + +SHMEM GEM Helper Reference +========================== + +.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c + :doc: overview + +.. kernel-doc:: include/drm/drm_gem_shmem_helper.h + :internal: + +.. 
kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c + :export: diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig index bd943a71756c..febdc102b75c 100644 --- a/drivers/gpu/drm/Kconfig +++ b/drivers/gpu/drm/Kconfig @@ -173,6 +173,12 @@ config DRM_KMS_CMA_HELPER help Choose this if you need the KMS CMA helper functions +config DRM_GEM_SHMEM_HELPER + bool + depends on DRM + help + Choose this if you need the GEM shmem helper functions + config DRM_VM bool depends on DRM && MMU diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile index 1ac55c65eac0..7476ed945e30 100644 --- a/drivers/gpu/drm/Makefile +++ b/drivers/gpu/drm/Makefile @@ -25,6 +25,7 @@ drm-$(CONFIG_DRM_LIB_RANDOM) += lib/drm_random.o drm-$(CONFIG_DRM_VM) += drm_vm.o drm-$(CONFIG_COMPAT) += drm_ioc32.o drm-$(CONFIG_DRM_GEM_CMA_HELPER) += drm_gem_cma_helper.o +drm-$(CONFIG_DRM_GEM_SHMEM_HELPER) += drm_gem_shmem_helper.o drm-$(CONFIG_PCI) += ati_pcigart.o drm-$(CONFIG_DRM_PANEL) += drm_panel.o drm-$(CONFIG_OF) += drm_of.o diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c new file mode 100644 index 000000000000..3750a982aaf6 --- /dev/null +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -0,0 +1,625 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright 2018 Noralf Trønnes + */ + +#include <linux/dma-buf.h> +#include <linux/export.h> +#include <linux/mutex.h> +#include <linux/shmem_fs.h> +#include <linux/slab.h> +#include <linux/vmalloc.h> + +#include <drm/drm_device.h> +#include <drm/drm_drv.h> +#include <drm/drm_gem_shmem_helper.h> +#include <drm/drm_prime.h> +#include <drm/drm_print.h> + +/** + * DOC: overview + * + * This library provides helpers for GEM objects backed by shmem buffers + * allocated using anonymous pageable memory. + */ + +static const struct drm_gem_object_funcs drm_gem_shmem_funcs = { + .free = drm_gem_shmem_free_object, + .print_info = drm_gem_shmem_print_info, + .pin = drm_gem_shmem_pin, + .unpin = drm_gem_shmem_unpin, + .get_sg_table = drm_gem_shmem_get_sg_table, + .vmap = drm_gem_shmem_vmap, + .vunmap = drm_gem_shmem_vunmap, + .vm_ops = &drm_gem_shmem_vm_ops, +}; + +/** + * drm_gem_shmem_create - Allocate an object with the given size + * @dev: DRM device + * @size: Size of the object to allocate + * + * This function creates a shmem GEM object. + * + * Returns: + * A struct drm_gem_shmem_object * on success or an ERR_PTR()-encoded negative + * error code on failure. + */ +struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size) +{ + struct drm_gem_shmem_object *shmem; + struct drm_gem_object *obj; + int ret; + + size = PAGE_ALIGN(size); + + if (dev->driver->gem_create_object) + obj = dev->driver->gem_create_object(dev, size); + else + obj = kzalloc(sizeof(*shmem), GFP_KERNEL); + if (!obj) + return ERR_PTR(-ENOMEM); + + if (!obj->funcs) + obj->funcs = &drm_gem_shmem_funcs; + + ret = drm_gem_object_init(dev, obj, size); + if (ret) + goto err_free; + + ret = drm_gem_create_mmap_offset(obj); + if (ret) + goto err_release; + + shmem = to_drm_gem_shmem_obj(obj); + mutex_init(&shmem->pages_lock); + mutex_init(&shmem->vmap_lock); + + /* + * Our buffers are kept pinned, so allocating them + * from the MOVABLE zone is a really bad idea, and + * conflicts with CMA. See comments above new_inode() + * why this is required _and_ expected if you're + * going to pin these pages. 
+ */ + mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER | + __GFP_RETRY_MAYFAIL | __GFP_NOWARN); + + return shmem; + +err_release: + drm_gem_object_release(obj); +err_free: + kfree(obj); + + return ERR_PTR(ret); +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_create); + +/** + * drm_gem_shmem_free_object - Free resources associated with a shmem GEM object + * @obj: GEM object to free + * + * This function cleans up the GEM object state and frees the memory used to + * store the object itself. + */ +void drm_gem_shmem_free_object(struct drm_gem_object *obj) +{ + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); + + WARN_ON(shmem->vmap_use_count); + + if (obj->import_attach) { + shmem->pages_use_count--; + drm_prime_gem_destroy(obj, shmem->sgt); + kvfree(shmem->pages); + } else { + if (shmem->sgt) { + dma_unmap_sg(obj->dev->dev, shmem->sgt->sgl, + shmem->sgt->nents, DMA_BIDIRECTIONAL); + + drm_gem_shmem_put_pages(shmem); + sg_free_table(shmem->sgt); + kfree(shmem->sgt); + } + } + + WARN_ON(shmem->pages_use_count); + + drm_gem_object_release(obj); + mutex_destroy(&shmem->pages_lock); + mutex_destroy(&shmem->vmap_lock); + kfree(shmem); +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_free_object); + +static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + struct page **pages; + + if (shmem->pages_use_count++ > 0) + return 0; + + pages = drm_gem_get_pages(obj); + if (IS_ERR(pages)) { + DRM_DEBUG_KMS("Failed to get pages (%ld)\n", PTR_ERR(pages)); + shmem->pages_use_count = 0; + return PTR_ERR(pages); + } + + shmem->pages = pages; + + return 0; +} + +/* + * drm_gem_shmem_get_pages - Allocate backing pages for a shmem GEM object + * @shmem: shmem GEM object + * + * This function makes sure that backing pages exists for the shmem GEM object + * and increases the use count. + * + * Returns: + * 0 on success or a negative error code on failure. + */ +int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem) +{ + int ret; + + ret = mutex_lock_interruptible(&shmem->pages_lock); + if (ret) + return ret; + ret = drm_gem_shmem_get_pages_locked(shmem); + mutex_unlock(&shmem->pages_lock); + + return ret; +} +EXPORT_SYMBOL(drm_gem_shmem_get_pages); + +static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + + if (WARN_ON_ONCE(!shmem->pages_use_count)) + return; + + if (--shmem->pages_use_count > 0) + return; + + drm_gem_put_pages(obj, shmem->pages, + shmem->pages_mark_dirty_on_put, + shmem->pages_mark_accessed_on_put); + shmem->pages = NULL; +} + +/* + * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object + * @shmem: shmem GEM object + * + * This function decreases the use count and puts the backing pages when use drops to zero. + */ +void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem) +{ + mutex_lock(&shmem->pages_lock); + drm_gem_shmem_put_pages_locked(shmem); + mutex_unlock(&shmem->pages_lock); +} +EXPORT_SYMBOL(drm_gem_shmem_put_pages); + +/** + * drm_gem_shmem_pin - Pin backing pages for a shmem GEM object + * @obj: GEM object + * + * This function makes sure the backing pages are pinned in memory while the + * buffer is exported. + * + * Returns: + * 0 on success or a negative error code on failure. 
+ */ +int drm_gem_shmem_pin(struct drm_gem_object *obj) +{ + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); + + return drm_gem_shmem_get_pages(shmem); +} +EXPORT_SYMBOL(drm_gem_shmem_pin); + +/** + * drm_gem_shmem_unpin - Unpin backing pages for a shmem GEM object + * @obj: GEM object + * + * This function removes the requirement that the backing pages are pinned in + * memory. + */ +void drm_gem_shmem_unpin(struct drm_gem_object *obj) +{ + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); + + drm_gem_shmem_put_pages(shmem); +} +EXPORT_SYMBOL(drm_gem_shmem_unpin); + +static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + int ret; + + if (shmem->vmap_use_count++ > 0) + return shmem->vaddr; + + ret = drm_gem_shmem_get_pages(shmem); + if (ret) + goto err_zero_use; + + if (obj->import_attach) + shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf); + else + shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT, VM_MAP, PAGE_KERNEL); + + if (!shmem->vaddr) { + DRM_DEBUG_KMS("Failed to vmap pages\n"); + ret = -ENOMEM; + goto err_put_pages; + } + + return shmem->vaddr; + +err_put_pages: + drm_gem_shmem_put_pages(shmem); +err_zero_use: + shmem->vmap_use_count = 0; + + return ERR_PTR(ret); +} + +/* + * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object + * @shmem: shmem GEM object + * + * This function makes sure that a virtual address exists for the buffer backing + * the shmem GEM object. + * + * Returns: + * 0 on success or a negative error code on failure. + */ +void *drm_gem_shmem_vmap(struct drm_gem_object *obj) +{ + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); + void *vaddr; + int ret; + + ret = mutex_lock_interruptible(&shmem->vmap_lock); + if (ret) + return ERR_PTR(ret); + vaddr = drm_gem_shmem_vmap_locked(shmem); + mutex_unlock(&shmem->vmap_lock); + + return vaddr; +} +EXPORT_SYMBOL(drm_gem_shmem_vmap); + +static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + + if (WARN_ON_ONCE(!shmem->vmap_use_count)) + return; + + if (--shmem->vmap_use_count > 0) + return; + + if (obj->import_attach) + dma_buf_vunmap(obj->import_attach->dmabuf, shmem->vaddr); + else + vunmap(shmem->vaddr); + + shmem->vaddr = NULL; + drm_gem_shmem_put_pages(shmem); +} + +/* + * drm_gem_shmem_vunmap - Unmap a virtual mapping fo a shmem GEM object + * @shmem: shmem GEM object + * + * This function removes the virtual address when use count drops to zero. + */ +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr) +{ + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); + + mutex_lock(&shmem->vmap_lock); + drm_gem_shmem_vunmap_locked(shmem); + mutex_unlock(&shmem->vmap_lock); +} +EXPORT_SYMBOL(drm_gem_shmem_vunmap); + +struct drm_gem_shmem_object * +drm_gem_shmem_create_with_handle(struct drm_file *file_priv, + struct drm_device *dev, size_t size, + uint32_t *handle) +{ + struct drm_gem_shmem_object *shmem; + int ret; + + shmem = drm_gem_shmem_create(dev, size); + if (IS_ERR(shmem)) + return shmem; + + /* + * Allocate an id of idr table where the obj is registered + * and handle has the id what user can see. + */ + ret = drm_gem_handle_create(file_priv, &shmem->base, handle); + /* drop reference from allocate - handle holds it now. 
*/ + drm_gem_object_put_unlocked(&shmem->base); + if (ret) + return ERR_PTR(ret); + + return shmem; +} +EXPORT_SYMBOL(drm_gem_shmem_create_with_handle); + +/** + * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object + * @file: DRM file structure to create the dumb buffer for + * @dev: DRM device + * @args: IOCTL data + * + * This function computes the pitch of the dumb buffer and rounds it up to an + * integer number of bytes per pixel. Drivers for hardware that doesn't have + * any additional restrictions on the pitch can directly use this function as + * their &drm_driver.dumb_create callback. + * + * For hardware with additional restrictions, drivers can adjust the fields + * set up by userspace before calling into this function. + * + * Returns: + * 0 on success or a negative error code on failure. + */ +int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev, + struct drm_mode_create_dumb *args) +{ + u32 min_pitch = DIV_ROUND_UP(args->width * args->bpp, 8); + struct drm_gem_shmem_object *shmem; + + if (!args->pitch || !args->size) { + args->pitch = min_pitch; + args->size = args->pitch * args->height; + } else { + /* ensure sane minimum values */ + if (args->pitch < min_pitch) + args->pitch = min_pitch; + if (args->size < args->pitch * args->height) + args->size = args->pitch * args->height; + } + + shmem = drm_gem_shmem_create_with_handle(file, dev, args->size, &args->handle); + + return PTR_ERR_OR_ZERO(shmem); +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_dumb_create); + +static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf) +{ + struct vm_area_struct *vma = vmf->vma; + struct drm_gem_object *obj = vma->vm_private_data; + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); + loff_t num_pages = obj->size >> PAGE_SHIFT; + struct page *page; + + if (vmf->pgoff > num_pages || WARN_ON_ONCE(!shmem->pages)) + return VM_FAULT_SIGBUS; + + page = shmem->pages[vmf->pgoff]; + + return vmf_insert_page(vma, vmf->address, page); +} + +static void drm_gem_shmem_vm_open(struct vm_area_struct *vma) +{ + struct drm_gem_object *obj = vma->vm_private_data; + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); + int ret; + + ret = drm_gem_shmem_get_pages(shmem); + WARN_ON_ONCE(ret != 0); + + drm_gem_vm_open(vma); +} + +static void drm_gem_shmem_vm_close(struct vm_area_struct *vma) +{ + struct drm_gem_object *obj = vma->vm_private_data; + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); + + drm_gem_shmem_put_pages(shmem); + drm_gem_vm_close(vma); +} + +const struct vm_operations_struct drm_gem_shmem_vm_ops = { + .fault = drm_gem_shmem_fault, + .open = drm_gem_shmem_vm_open, + .close = drm_gem_shmem_vm_close, +}; +EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops); + +/** + * drm_gem_shmem_mmap - Memory-map a shmem GEM object + * @filp: File object + * @vma: VMA for the area to be mapped + * + * This function implements an augmented version of the GEM DRM file mmap + * operation for shmem objects. Drivers which employ the shmem helpers should + * use this function as their &file_operations.mmap handler in the DRM device file's + * file_operations structure. + * + * Instead of directly referencing this function, drivers should use the + * DEFINE_DRM_GEM_SHMEM_FOPS() macro. + * + * Returns: + * 0 on success or a negative error code on failure. 
+ */ +int drm_gem_shmem_mmap(struct file *filp, struct vm_area_struct *vma) +{ + struct drm_gem_shmem_object *shmem; + int ret; + + ret = drm_gem_mmap(filp, vma); + if (ret) + return ret; + + shmem = to_drm_gem_shmem_obj(vma->vm_private_data); + + ret = drm_gem_shmem_get_pages(shmem); + if (ret) { + drm_gem_vm_close(vma); + return ret; + } + + /* VM_PFNMAP was set by drm_gem_mmap() */ + vma->vm_flags &= ~VM_PFNMAP; + vma->vm_flags |= VM_MIXEDMAP; + + /* Remove the fake offset */ + vma->vm_pgoff -= drm_vma_node_start(&shmem->base.vma_node); + + return 0; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_mmap); + +/** + * drm_gem_shmem_print_info() - Print &drm_gem_shmem_object info for debugfs + * @p: DRM printer + * @indent: Tab indentation level + * @obj: GEM object + */ +void drm_gem_shmem_print_info(struct drm_printer *p, unsigned int indent, + const struct drm_gem_object *obj) +{ + const struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); + + drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count); + drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count); + drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr); +} +EXPORT_SYMBOL(drm_gem_shmem_print_info); + +/** + * drm_gem_shmem_get_sg_table - Provide a scatter/gather table of pinned + * pages for a shmem GEM object + * @obj: GEM object + * + * This function exports a scatter/gather table suitable for PRIME usage by + * calling the standard DMA mapping API. + * + * Returns: + * A pointer to the scatter/gather table of pinned pages or NULL on failure. + */ +struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_object *obj) +{ + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); + + return drm_prime_pages_to_sg(shmem->pages, obj->size >> PAGE_SHIFT); +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table); + +/** + * drm_gem_shmem_get_pages_sgt - Pin pages, dma map them, and return a + * scatter/gather table for a shmem GEM object. + * @obj: GEM object + * + * This function returns a scatter/gather table suitable for driver usage. If + * the sg table doesn't exist, the pages are pinned, dma-mapped, and a sg + * table created. + * + * Returns: + * A pointer to the scatter/gather table of pinned pages or errno on failure. + */ +struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_object *obj) +{ + int ret; + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); + struct sg_table *sgt; + + if (shmem->sgt) + return shmem->sgt; + + WARN_ON(obj->import_attach); + + ret = drm_gem_shmem_get_pages(shmem); + if (ret) + return ERR_PTR(ret); + + sgt = drm_gem_shmem_get_sg_table(&shmem->base); + if (IS_ERR(sgt)) { + ret = PTR_ERR(sgt); + goto err_put_pages; + } + /* Map the pages for use by the h/w. */ + dma_map_sg(obj->dev->dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL); + + shmem->sgt = sgt; + + return sgt; + +err_put_pages: + drm_gem_shmem_put_pages(shmem); + return ERR_PTR(ret); +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages_sgt); + +/** + * drm_gem_shmem_prime_import_sg_table - Produce a shmem GEM object from + * another driver's scatter/gather table of pinned pages + * @dev: Device to import into + * @attach: DMA-BUF attachment + * @sgt: Scatter/gather table of pinned pages + * + * This function imports a scatter/gather table exported via DMA-BUF by + * another driver. Drivers that use the shmem helpers should set this as their + * &drm_driver.gem_prime_import_sg_table callback. 
+ * + * Returns: + * A pointer to a newly created GEM object or an ERR_PTR-encoded negative + * error code on failure. + */ +struct drm_gem_object * +drm_gem_shmem_prime_import_sg_table(struct drm_device *dev, + struct dma_buf_attachment *attach, + struct sg_table *sgt) +{ + size_t size = PAGE_ALIGN(attach->dmabuf->size); + size_t npages = size >> PAGE_SHIFT; + struct drm_gem_shmem_object *shmem; + int ret; + + shmem = drm_gem_shmem_create(dev, size); + if (IS_ERR(shmem)) + return ERR_CAST(shmem); + + shmem->pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL); + if (!shmem->pages) { + ret = -ENOMEM; + goto err_free_gem; + } + + ret = drm_prime_sg_to_page_addr_arrays(sgt, shmem->pages, NULL, npages); + if (ret < 0) + goto err_free_array; + + shmem->sgt = sgt; + shmem->pages_use_count = 1; /* Permanently pinned from our point of view */ + + DRM_DEBUG_PRIME("size = %zu\n", size); + + return &shmem->base; + +err_free_array: + kvfree(shmem->pages); +err_free_gem: + drm_gem_object_put_unlocked(&shmem->base); + + return ERR_PTR(ret); +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table); diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h new file mode 100644 index 000000000000..038b6d313447 --- /dev/null +++ b/include/drm/drm_gem_shmem_helper.h @@ -0,0 +1,159 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef __DRM_GEM_SHMEM_HELPER_H__ +#define __DRM_GEM_SHMEM_HELPER_H__ + +#include <linux/fs.h> +#include <linux/mm.h> +#include <linux/mutex.h> + +#include <drm/drm_file.h> +#include <drm/drm_gem.h> +#include <drm/drm_ioctl.h> +#include <drm/drm_prime.h> + +struct dma_buf_attachment; +struct drm_mode_create_dumb; +struct drm_printer; +struct sg_table; + +/** + * struct drm_gem_shmem_object - GEM object backed by shmem + */ +struct drm_gem_shmem_object { + /** + * @base: Base GEM object + */ + struct drm_gem_object base; + + /** + * @pages_lock: Protects the page table and use count + */ + struct mutex pages_lock; + + /** + * @pages: Page table + */ + struct page **pages; + + /** + * @pages_use_count: + * + * Reference count on the pages table. + * The pages are put when the count reaches zero. + */ + unsigned int pages_use_count; + + /** + * @pages_mark_dirty_on_put: + * + * Mark pages as dirty when they are put. + */ + unsigned int pages_mark_dirty_on_put : 1; + + /** + * @pages_mark_accessed_on_put: + * + * Mark pages as accessed when they are put. + */ + unsigned int pages_mark_accessed_on_put : 1; + + /** + * @sgt: Scatter/gather table for imported PRIME buffers + */ + struct sg_table *sgt; + + /** + * @vmap_lock: Protects the vmap address and use count + */ + struct mutex vmap_lock; + + /** + * @vaddr: Kernel virtual address of the backing memory + */ + void *vaddr; + + /** + * @vmap_use_count: + * + * Reference count on the virtual address. + * The address are un-mapped when the count reaches zero. + */ + unsigned int vmap_use_count; +}; + +#define to_drm_gem_shmem_obj(obj) \ + container_of(obj, struct drm_gem_shmem_object, base) + +/** + * DEFINE_DRM_GEM_SHMEM_FOPS() - Macro to generate file operations for shmem drivers + * @name: name for the generated structure + * + * This macro autogenerates a suitable &struct file_operations for shmem based + * drivers, which can be assigned to &drm_driver.fops. Note that this structure + * cannot be shared between drivers, because it contains a reference to the + * current module using THIS_MODULE. 
+ * + * Note that the declaration is already marked as static - if you need a + * non-static version of this you're probably doing it wrong and will break the + * THIS_MODULE reference by accident. + */ +#define DEFINE_DRM_GEM_SHMEM_FOPS(name) \ + static const struct file_operations name = {\ + .owner = THIS_MODULE,\ + .open = drm_open,\ + .release = drm_release,\ + .unlocked_ioctl = drm_ioctl,\ + .compat_ioctl = drm_compat_ioctl,\ + .poll = drm_poll,\ + .read = drm_read,\ + .llseek = noop_llseek,\ + .mmap = drm_gem_shmem_mmap, \ + } + +struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size); +void drm_gem_shmem_free_object(struct drm_gem_object *obj); + +int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem); +void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem); +int drm_gem_shmem_pin(struct drm_gem_object *obj); +void drm_gem_shmem_unpin(struct drm_gem_object *obj); +void *drm_gem_shmem_vmap(struct drm_gem_object *obj); +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr); + +struct drm_gem_shmem_object * +drm_gem_shmem_create_with_handle(struct drm_file *file_priv, + struct drm_device *dev, size_t size, + uint32_t *handle); +int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev, + struct drm_mode_create_dumb *args); + +int drm_gem_shmem_mmap(struct file *filp, struct vm_area_struct *vma); + +extern const struct vm_operations_struct drm_gem_shmem_vm_ops; + +void drm_gem_shmem_print_info(struct drm_printer *p, unsigned int indent, + const struct drm_gem_object *obj); + +struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_object *obj); +struct drm_gem_object * +drm_gem_shmem_prime_import_sg_table(struct drm_device *dev, + struct dma_buf_attachment *attach, + struct sg_table *sgt); + +struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_object *obj); + +/** + * DRM_GEM_SHMEM_DRIVER_OPS - Default shmem GEM operations + * + * This macro provides a shortcut for setting the shmem GEM operations in + * the &drm_driver structure. + */ +#define DRM_GEM_SHMEM_DRIVER_OPS \ + .prime_handle_to_fd = drm_gem_prime_handle_to_fd, \ + .prime_fd_to_handle = drm_gem_prime_fd_to_handle, \ + .gem_prime_import_sg_table = drm_gem_shmem_prime_import_sg_table, \ + .gem_prime_mmap = drm_gem_prime_mmap, \ + .dumb_create = drm_gem_shmem_dumb_create + +#endif /* __DRM_GEM_SHMEM_HELPER_H__ */ -- 2.19.1 _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply related [flat|nested] 8+ messages in thread
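For orientation before the review below, here is a minimal sketch of how a driver would consume this library, using only the symbols added by the patch above. The "foo" names are hypothetical placeholders rather than code from the series, and driver registration details (name, desc, fourcc, etc.) are omitted.

#include <linux/err.h>
#include <drm/drm_drv.h>
#include <drm/drm_gem_shmem_helper.h>

/* Generates a static "foo_fops" whose .mmap is drm_gem_shmem_mmap(). */
DEFINE_DRM_GEM_SHMEM_FOPS(foo_fops);

static struct drm_driver foo_driver = {
	.driver_features = DRIVER_GEM | DRIVER_PRIME,
	.fops		 = &foo_fops,
	/* PRIME import/export plus drm_gem_shmem_dumb_create(). */
	DRM_GEM_SHMEM_DRIVER_OPS,
	/* No .gem_create_object hook here, so objects allocated through
	 * drm_gem_shmem_create() pick up drm_gem_shmem_funcs automatically.
	 */
};

/* A driver-private allocation ioctl would look much like
 * drm_gem_shmem_dumb_create() in the patch:
 */
static int foo_create_bo(struct drm_device *dev, struct drm_file *file,
			 size_t size, u32 *handle)
{
	struct drm_gem_shmem_object *shmem;

	shmem = drm_gem_shmem_create_with_handle(file, dev, size, handle);
	return PTR_ERR_OR_ZERO(shmem);
}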
* Re: [PATCH v8] drm: Add library for shmem backed GEM objects 2019-03-13 0:43 [PATCH v8] drm: Add library for shmem backed GEM objects Rob Herring @ 2019-03-13 20:56 ` Eric Anholt 2019-03-14 16:50 ` Rob Herring 2019-03-14 16:34 ` [PATCH] drm/v3d: Use the new shmem helpers to reduce driver boilerplate Eric Anholt 1 sibling, 1 reply; 8+ messages in thread From: Eric Anholt @ 2019-03-13 20:56 UTC (permalink / raw) To: Rob Herring, dri-devel [-- Attachment #1.1: Type: text/plain, Size: 705 bytes --] Rob Herring <robh@kernel.org> writes: > From: Noralf Trønnes <noralf@tronnes.org> > > This adds a library for shmem backed GEM objects. I read through the whole thing, and the only bit I didn't like (drm_gem_shmem_create_with_handle returns an unrefcounted object pointer) was a copy-and-paste from CMA that we should fix in both places, and has no downside in the current usage since the pointer value is unused. Just one question left: > v7: > - Use write-combine for mmap instead. This is the more common > case. (robher) I don't actually see this in the code. Wasn't there supposed to be a: vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags)); [-- Attachment #1.2: signature.asc --] [-- Type: application/pgp-signature, Size: 832 bytes --] [-- Attachment #2: Type: text/plain, Size: 159 bytes --] _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH v8] drm: Add library for shmem backed GEM objects 2019-03-13 20:56 ` Eric Anholt @ 2019-03-14 16:50 ` Rob Herring 2019-03-14 19:06 ` anholt 0 siblings, 1 reply; 8+ messages in thread From: Rob Herring @ 2019-03-14 16:50 UTC (permalink / raw) To: Eric Anholt; +Cc: dri-devel On Wed, Mar 13, 2019 at 3:56 PM Eric Anholt <eric@anholt.net> wrote: > > Rob Herring <robh@kernel.org> writes: > > > From: Noralf Trønnes <noralf@tronnes.org> > > > > This adds a library for shmem backed GEM objects. > > I read through the whole thing, and the only bit I didn't like > (drm_gem_shmem_create_with_handle returns an unrefcounted object > pointer) was a copy-and-paste from CMA that we should fix in both > places, and has no downside in the current usage since the pointer value > is unused. Just one question left: > > > v7: > > - Use write-combine for mmap instead. This is the more common > > case. (robher) > > I don't actually see this in the code. Wasn't there supposed to be a: > > vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags)); That's the default if you trace drm_gem_mmap() -> drm_gem_mmap_obj(). My wording here should have been more clear. Rob _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 8+ messages in thread
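To make Rob's answer concrete, the sketch below paraphrases what drm_gem_mmap_obj() and drm_gem_shmem_mmap() do around this kernel version; it is illustrative rather than a verbatim excerpt of either function.

#include <linux/mm.h>
#include <drm/drm_gem.h>

static void shmem_mmap_prot_sketch(struct vm_area_struct *vma)
{
	/* drm_gem_mmap_obj(), reached via drm_gem_mmap(), already sets up a
	 * write-combined mapping by default ...
	 */
	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));

	/* ... and drm_gem_shmem_mmap() then only switches from PFN faults to
	 * struct-page faults, leaving vm_page_prot write-combined.
	 */
	vma->vm_flags &= ~VM_PFNMAP;
	vma->vm_flags |= VM_MIXEDMAP;
}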
* Re: [PATCH v8] drm: Add library for shmem backed GEM objects 2019-03-14 16:50 ` Rob Herring @ 2019-03-14 19:06 ` anholt 0 siblings, 0 replies; 8+ messages in thread From: anholt @ 2019-03-14 19:06 UTC (permalink / raw) To: Rob Herring; +Cc: dri-devel [-- Attachment #1.1: Type: text/plain, Size: 1083 bytes --] Rob Herring <robh@kernel.org> writes: > On Wed, Mar 13, 2019 at 3:56 PM Eric Anholt <eric@anholt.net> wrote: >> >> Rob Herring <robh@kernel.org> writes: >> >> > From: Noralf Trønnes <noralf@tronnes.org> >> > >> > This adds a library for shmem backed GEM objects. >> >> I read through the whole thing, and the only bit I didn't like >> (drm_gem_shmem_create_with_handle returns an unrefcounted object >> pointer) was a copy-and-paste from CMA that we should fix in both >> places, and has no downside in the current usage since the pointer value >> is unused. Just one question left: >> >> > v7: >> > - Use write-combine for mmap instead. This is the more common >> > case. (robher) >> >> I don't actually see this in the code. Wasn't there supposed to be a: >> >> vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags)); > > That's the default if you trace drm_gem_mmap() -> drm_gem_mmap_obj(). > My wording here should have been more clear. Ah, there it is! Added review to the helpers, and I've pushed both patches now. [-- Attachment #1.2: signature.asc --] [-- Type: application/pgp-signature, Size: 832 bytes --] [-- Attachment #2: Type: text/plain, Size: 159 bytes --] _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 8+ messages in thread
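For the other point raised in the review, the unrefcounted pointer returned by drm_gem_shmem_create_with_handle(), one possible shape of the fix is sketched below. This is hypothetical and not something merged in this thread: the idea is simply to keep the creation reference instead of dropping it, so the returned pointer stays valid even if userspace closes the new handle immediately, with callers then owning a reference they must put when done.

static struct drm_gem_shmem_object *
shmem_create_with_handle_refcounted(struct drm_file *file_priv,
				    struct drm_device *dev, size_t size,
				    uint32_t *handle)
{
	struct drm_gem_shmem_object *shmem;
	int ret;

	shmem = drm_gem_shmem_create(dev, size);
	if (IS_ERR(shmem))
		return shmem;

	ret = drm_gem_handle_create(file_priv, &shmem->base, handle);
	if (ret) {
		drm_gem_object_put_unlocked(&shmem->base);
		return ERR_PTR(ret);
	}

	/* Unlike the helper in the patch, the allocation reference is not
	 * dropped here; the handle holds its own reference taken by
	 * drm_gem_handle_create().
	 */
	return shmem;
}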
* [PATCH] drm/v3d: Use the new shmem helpers to reduce driver boilerplate. 2019-03-13 0:43 [PATCH v8] drm: Add library for shmem backed GEM objects Rob Herring 2019-03-13 20:56 ` Eric Anholt @ 2019-03-14 16:34 ` Eric Anholt 2019-03-14 16:47 ` Eric Anholt 2019-03-14 18:15 ` Rob Herring 1 sibling, 2 replies; 8+ messages in thread From: Eric Anholt @ 2019-03-14 16:34 UTC (permalink / raw) To: dri-devel; +Cc: david.emett, thomas.spurden, linux-kernel The new shmem helpers from Noralf and Rob abstract out a bunch of our BO creation and mapping code. v2: Use the new sgt getter, and flag pages as dirty before freeing. Signed-off-by: Eric Anholt <eric@anholt.net> --- drivers/gpu/drm/v3d/Kconfig | 1 + drivers/gpu/drm/v3d/v3d_bo.c | 317 ++++++++++------------------------ drivers/gpu/drm/v3d/v3d_drv.c | 27 +-- drivers/gpu/drm/v3d/v3d_drv.h | 14 +- drivers/gpu/drm/v3d/v3d_gem.c | 12 +- drivers/gpu/drm/v3d/v3d_irq.c | 8 +- drivers/gpu/drm/v3d/v3d_mmu.c | 11 +- 7 files changed, 120 insertions(+), 270 deletions(-) diff --git a/drivers/gpu/drm/v3d/Kconfig b/drivers/gpu/drm/v3d/Kconfig index 1552bf552c94..75a74c45f109 100644 --- a/drivers/gpu/drm/v3d/Kconfig +++ b/drivers/gpu/drm/v3d/Kconfig @@ -5,6 +5,7 @@ config DRM_V3D depends on COMMON_CLK depends on MMU select DRM_SCHED + select DRM_GEM_SHMEM_HELPER help Choose this option if you have a system that has a Broadcom V3D 3.x or newer GPU, such as BCM7268. diff --git a/drivers/gpu/drm/v3d/v3d_bo.c b/drivers/gpu/drm/v3d/v3d_bo.c index dff38bf36c50..82f08d861389 100644 --- a/drivers/gpu/drm/v3d/v3d_bo.c +++ b/drivers/gpu/drm/v3d/v3d_bo.c @@ -25,158 +25,6 @@ #include "v3d_drv.h" #include "uapi/drm/v3d_drm.h" -/* Pins the shmem pages, fills in the .pages and .sgt fields of the BO, and maps - * it for DMA. - */ -static int -v3d_bo_get_pages(struct v3d_bo *bo) -{ - struct drm_gem_object *obj = &bo->base; - struct drm_device *dev = obj->dev; - int npages = obj->size >> PAGE_SHIFT; - int ret = 0; - - mutex_lock(&bo->lock); - if (bo->pages_refcount++ != 0) - goto unlock; - - if (!obj->import_attach) { - bo->pages = drm_gem_get_pages(obj); - if (IS_ERR(bo->pages)) { - ret = PTR_ERR(bo->pages); - goto unlock; - } - - bo->sgt = drm_prime_pages_to_sg(bo->pages, npages); - if (IS_ERR(bo->sgt)) { - ret = PTR_ERR(bo->sgt); - goto put_pages; - } - - /* Map the pages for use by the GPU. */ - dma_map_sg(dev->dev, bo->sgt->sgl, - bo->sgt->nents, DMA_BIDIRECTIONAL); - } else { - bo->pages = kcalloc(npages, sizeof(*bo->pages), GFP_KERNEL); - if (!bo->pages) - goto put_pages; - - drm_prime_sg_to_page_addr_arrays(bo->sgt, bo->pages, - NULL, npages); - - /* Note that dma-bufs come in mapped. 
*/ - } - - mutex_unlock(&bo->lock); - - return 0; - -put_pages: - drm_gem_put_pages(obj, bo->pages, true, true); - bo->pages = NULL; -unlock: - bo->pages_refcount--; - mutex_unlock(&bo->lock); - return ret; -} - -static void -v3d_bo_put_pages(struct v3d_bo *bo) -{ - struct drm_gem_object *obj = &bo->base; - - mutex_lock(&bo->lock); - if (--bo->pages_refcount == 0) { - if (!obj->import_attach) { - dma_unmap_sg(obj->dev->dev, bo->sgt->sgl, - bo->sgt->nents, DMA_BIDIRECTIONAL); - sg_free_table(bo->sgt); - kfree(bo->sgt); - drm_gem_put_pages(obj, bo->pages, true, true); - } else { - kfree(bo->pages); - } - } - mutex_unlock(&bo->lock); -} - -static struct v3d_bo *v3d_bo_create_struct(struct drm_device *dev, - size_t unaligned_size) -{ - struct v3d_dev *v3d = to_v3d_dev(dev); - struct drm_gem_object *obj; - struct v3d_bo *bo; - size_t size = roundup(unaligned_size, PAGE_SIZE); - int ret; - - if (size == 0) - return ERR_PTR(-EINVAL); - - bo = kzalloc(sizeof(*bo), GFP_KERNEL); - if (!bo) - return ERR_PTR(-ENOMEM); - obj = &bo->base; - - INIT_LIST_HEAD(&bo->unref_head); - mutex_init(&bo->lock); - - ret = drm_gem_object_init(dev, obj, size); - if (ret) - goto free_bo; - - spin_lock(&v3d->mm_lock); - ret = drm_mm_insert_node_generic(&v3d->mm, &bo->node, - obj->size >> PAGE_SHIFT, - GMP_GRANULARITY >> PAGE_SHIFT, 0, 0); - spin_unlock(&v3d->mm_lock); - if (ret) - goto free_obj; - - return bo; - -free_obj: - drm_gem_object_release(obj); -free_bo: - kfree(bo); - return ERR_PTR(ret); -} - -struct v3d_bo *v3d_bo_create(struct drm_device *dev, struct drm_file *file_priv, - size_t unaligned_size) -{ - struct v3d_dev *v3d = to_v3d_dev(dev); - struct drm_gem_object *obj; - struct v3d_bo *bo; - int ret; - - bo = v3d_bo_create_struct(dev, unaligned_size); - if (IS_ERR(bo)) - return bo; - obj = &bo->base; - - ret = v3d_bo_get_pages(bo); - if (ret) - goto free_mm; - - v3d_mmu_insert_ptes(bo); - - mutex_lock(&v3d->bo_lock); - v3d->bo_stats.num_allocated++; - v3d->bo_stats.pages_allocated += obj->size >> PAGE_SHIFT; - mutex_unlock(&v3d->bo_lock); - - return bo; - -free_mm: - spin_lock(&v3d->mm_lock); - drm_mm_remove_node(&bo->node); - spin_unlock(&v3d->mm_lock); - - drm_gem_object_release(obj); - kfree(bo); - return ERR_PTR(ret); -} - /* Called DRM core on the last userspace/kernel unreference of the * BO. */ @@ -185,83 +33,120 @@ void v3d_free_object(struct drm_gem_object *obj) struct v3d_dev *v3d = to_v3d_dev(obj->dev); struct v3d_bo *bo = to_v3d_bo(obj); + v3d_mmu_remove_ptes(bo); + mutex_lock(&v3d->bo_lock); v3d->bo_stats.num_allocated--; v3d->bo_stats.pages_allocated -= obj->size >> PAGE_SHIFT; mutex_unlock(&v3d->bo_lock); - v3d_bo_put_pages(bo); - - if (obj->import_attach) - drm_prime_gem_destroy(obj, bo->sgt); - - v3d_mmu_remove_ptes(bo); spin_lock(&v3d->mm_lock); drm_mm_remove_node(&bo->node); spin_unlock(&v3d->mm_lock); - mutex_destroy(&bo->lock); + drm_gem_shmem_put_pages(&bo->base); - drm_gem_object_release(obj); - kfree(bo); -} + /* GPU execution may have dirtied any pages in the BO. 
*/ + bo->base.pages_mark_dirty_on_put = true; -static void -v3d_set_mmap_vma_flags(struct vm_area_struct *vma) -{ - vma->vm_flags &= ~VM_PFNMAP; - vma->vm_flags |= VM_MIXEDMAP; - vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags)); + drm_gem_shmem_free_object(obj); } -vm_fault_t v3d_gem_fault(struct vm_fault *vmf) +static const struct drm_gem_object_funcs v3d_gem_funcs = { + .free = v3d_free_object, + .print_info = drm_gem_shmem_print_info, + .pin = drm_gem_shmem_pin, + .unpin = drm_gem_shmem_unpin, + .get_sg_table = drm_gem_shmem_get_sg_table, + .vmap = drm_gem_shmem_vmap, + .vunmap = drm_gem_shmem_vunmap, + .vm_ops = &drm_gem_shmem_vm_ops, +}; + +/* gem_create_object function for allocating a BO struct and doing + * early setup. + */ +struct drm_gem_object *v3d_create_object(struct drm_device *dev, size_t size) { - struct vm_area_struct *vma = vmf->vma; - struct drm_gem_object *obj = vma->vm_private_data; - struct v3d_bo *bo = to_v3d_bo(obj); - pfn_t pfn; - pgoff_t pgoff; - - /* We don't use vmf->pgoff since that has the fake offset: */ - pgoff = (vmf->address - vma->vm_start) >> PAGE_SHIFT; - pfn = __pfn_to_pfn_t(page_to_pfn(bo->pages[pgoff]), PFN_DEV); + struct v3d_bo *bo; + struct drm_gem_object *obj; - return vmf_insert_mixed(vma, vmf->address, pfn); -} + if (size == 0) + return NULL; -int v3d_mmap(struct file *filp, struct vm_area_struct *vma) -{ - int ret; + bo = kzalloc(sizeof(*bo), GFP_KERNEL); + if (!bo) + return NULL; + obj = &bo->base.base; - ret = drm_gem_mmap(filp, vma); - if (ret) - return ret; + obj->funcs = &v3d_gem_funcs; - v3d_set_mmap_vma_flags(vma); + INIT_LIST_HEAD(&bo->unref_head); - return ret; + return &bo->base.base; } -int v3d_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) +static int +v3d_bo_create_finish(struct drm_gem_object *obj) { + struct v3d_dev *v3d = to_v3d_dev(obj->dev); + struct v3d_bo *bo = to_v3d_bo(obj); + struct sg_table *sgt; int ret; - ret = drm_gem_mmap_obj(obj, obj->size, vma); - if (ret < 0) + /* So far we pin the BO in the MMU for its lifetime, so use + * shmem's helper for getting a lifetime sgt. + */ + sgt = drm_gem_shmem_get_pages_sgt(&bo->base.base); + if (IS_ERR(sgt)) + return PTR_ERR(sgt); + + spin_lock(&v3d->mm_lock); + /* Allocate the object's space in the GPU's page tables. + * Inserting PTEs will happen later, but the offset is for the + * lifetime of the BO. + */ + ret = drm_mm_insert_node_generic(&v3d->mm, &bo->node, + obj->size >> PAGE_SHIFT, + GMP_GRANULARITY >> PAGE_SHIFT, 0, 0); + spin_unlock(&v3d->mm_lock); + if (ret) { + drm_gem_shmem_put_pages(&bo->base); return ret; + } - v3d_set_mmap_vma_flags(vma); + /* Track stats for /debug/dri/n/bo_stats. 
*/ + mutex_lock(&v3d->bo_lock); + v3d->bo_stats.num_allocated++; + v3d->bo_stats.pages_allocated += obj->size >> PAGE_SHIFT; + mutex_unlock(&v3d->bo_lock); + + v3d_mmu_insert_ptes(bo); return 0; } -struct sg_table * -v3d_prime_get_sg_table(struct drm_gem_object *obj) +struct v3d_bo *v3d_bo_create(struct drm_device *dev, struct drm_file *file_priv, + size_t unaligned_size) { - struct v3d_bo *bo = to_v3d_bo(obj); - int npages = obj->size >> PAGE_SHIFT; + struct drm_gem_shmem_object *shmem_obj; + struct v3d_bo *bo; + int ret; + + shmem_obj = drm_gem_shmem_create(dev, unaligned_size); + if (!shmem_obj) + return NULL; + bo = to_v3d_bo(&shmem_obj->base); + + ret = v3d_bo_create_finish(&shmem_obj->base); + if (ret) + goto free_obj; + + return bo; - return drm_prime_pages_to_sg(bo->pages, npages); +free_obj: + drm_gem_shmem_free_object(&shmem_obj->base); + return ERR_PTR(ret); } struct drm_gem_object * @@ -269,27 +154,18 @@ v3d_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sgt) { - struct v3d_dev *v3d = to_v3d_dev(dev); struct drm_gem_object *obj; - struct v3d_bo *bo; - - bo = v3d_bo_create_struct(dev, attach->dmabuf->size); - if (IS_ERR(bo)) - return ERR_CAST(bo); - obj = &bo->base; - - obj->resv = attach->dmabuf->resv; + int ret; - bo->sgt = sgt; - obj->import_attach = attach; - v3d_bo_get_pages(bo); + obj = drm_gem_shmem_prime_import_sg_table(dev, attach, sgt); + if (IS_ERR(obj)) + return obj; - mutex_lock(&v3d->bo_lock); - v3d->bo_stats.num_allocated++; - v3d->bo_stats.pages_allocated += obj->size >> PAGE_SHIFT; - mutex_unlock(&v3d->bo_lock); - - v3d_mmu_insert_ptes(bo); + ret = v3d_bo_create_finish(obj); + if (ret) { + drm_gem_shmem_free_object(obj); + return ERR_PTR(ret); + } return obj; } @@ -312,8 +188,8 @@ int v3d_create_bo_ioctl(struct drm_device *dev, void *data, args->offset = bo->node.start << PAGE_SHIFT; - ret = drm_gem_handle_create(file_priv, &bo->base, &args->handle); - drm_gem_object_put_unlocked(&bo->base); + ret = drm_gem_handle_create(file_priv, &bo->base.base, &args->handle); + drm_gem_object_put_unlocked(&bo->base.base); return ret; } @@ -323,7 +199,6 @@ int v3d_mmap_bo_ioctl(struct drm_device *dev, void *data, { struct drm_v3d_mmap_bo *args = data; struct drm_gem_object *gem_obj; - int ret; if (args->flags != 0) { DRM_INFO("unknown mmap_bo flags: %d\n", args->flags); @@ -336,12 +211,10 @@ int v3d_mmap_bo_ioctl(struct drm_device *dev, void *data, return -ENOENT; } - ret = drm_gem_create_mmap_offset(gem_obj); - if (ret == 0) - args->offset = drm_vma_node_offset_addr(&gem_obj->vma_node); + args->offset = drm_vma_node_offset_addr(&gem_obj->vma_node); drm_gem_object_put_unlocked(gem_obj); - return ret; + return 0; } int v3d_get_bo_offset_ioctl(struct drm_device *dev, void *data, diff --git a/drivers/gpu/drm/v3d/v3d_drv.c b/drivers/gpu/drm/v3d/v3d_drv.c index 89b7567d550f..d600628bb5c1 100644 --- a/drivers/gpu/drm/v3d/v3d_drv.c +++ b/drivers/gpu/drm/v3d/v3d_drv.c @@ -161,17 +161,7 @@ v3d_postclose(struct drm_device *dev, struct drm_file *file) kfree(v3d_priv); } -static const struct file_operations v3d_drm_fops = { - .owner = THIS_MODULE, - .open = drm_open, - .release = drm_release, - .unlocked_ioctl = drm_ioctl, - .mmap = v3d_mmap, - .poll = drm_poll, - .read = drm_read, - .compat_ioctl = drm_compat_ioctl, - .llseek = noop_llseek, -}; +DEFINE_DRM_GEM_SHMEM_FOPS(v3d_drm_fops); /* DRM_AUTH is required on SUBMIT_CL for now, while we don't have GMP * protection between clients. 
Note that render nodes would be be @@ -189,12 +179,6 @@ static const struct drm_ioctl_desc v3d_drm_ioctls[] = { DRM_IOCTL_DEF_DRV(V3D_SUBMIT_TFU, v3d_submit_tfu_ioctl, DRM_RENDER_ALLOW | DRM_AUTH), }; -static const struct vm_operations_struct v3d_vm_ops = { - .fault = v3d_gem_fault, - .open = drm_gem_vm_open, - .close = drm_gem_vm_close, -}; - static struct drm_driver v3d_drm_driver = { .driver_features = (DRIVER_GEM | DRIVER_RENDER | @@ -208,16 +192,11 @@ static struct drm_driver v3d_drm_driver = { .debugfs_init = v3d_debugfs_init, #endif - .gem_free_object_unlocked = v3d_free_object, - .gem_vm_ops = &v3d_vm_ops, - + .gem_create_object = v3d_create_object, .prime_handle_to_fd = drm_gem_prime_handle_to_fd, .prime_fd_to_handle = drm_gem_prime_fd_to_handle, - .gem_prime_import = drm_gem_prime_import, - .gem_prime_export = drm_gem_prime_export, - .gem_prime_get_sg_table = v3d_prime_get_sg_table, .gem_prime_import_sg_table = v3d_prime_import_sg_table, - .gem_prime_mmap = v3d_prime_mmap, + .gem_prime_mmap = drm_gem_prime_mmap, .ioctls = v3d_drm_ioctls, .num_ioctls = ARRAY_SIZE(v3d_drm_ioctls), diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h index 65906c05dc9c..7b0fe6240f7d 100644 --- a/drivers/gpu/drm/v3d/v3d_drv.h +++ b/drivers/gpu/drm/v3d/v3d_drv.h @@ -5,6 +5,7 @@ #include <drm/drmP.h> #include <drm/drm_encoder.h> #include <drm/drm_gem.h> +#include <drm/drm_gem_shmem_helper.h> #include <drm/gpu_scheduler.h> #include "uapi/drm/v3d_drm.h" @@ -111,16 +112,10 @@ struct v3d_file_priv { }; struct v3d_bo { - struct drm_gem_object base; - - struct mutex lock; + struct drm_gem_shmem_object base; struct drm_mm_node node; - u32 pages_refcount; - struct page **pages; - struct sg_table *sgt; - /* List entry for the BO's position in * v3d_exec_info->unref_list */ @@ -258,6 +253,7 @@ static inline unsigned long nsecs_to_jiffies_timeout(const u64 n) } /* v3d_bo.c */ +struct drm_gem_object *v3d_create_object(struct drm_device *dev, size_t size); void v3d_free_object(struct drm_gem_object *gem_obj); struct v3d_bo *v3d_bo_create(struct drm_device *dev, struct drm_file *file_priv, size_t size); @@ -267,10 +263,6 @@ int v3d_mmap_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); int v3d_get_bo_offset_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); -vm_fault_t v3d_gem_fault(struct vm_fault *vmf); -int v3d_mmap(struct file *filp, struct vm_area_struct *vma); -int v3d_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); -struct sg_table *v3d_prime_get_sg_table(struct drm_gem_object *obj); struct drm_gem_object *v3d_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sgt); diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c index 945eaaaad016..b84d89c7b3fb 100644 --- a/drivers/gpu/drm/v3d/v3d_gem.c +++ b/drivers/gpu/drm/v3d/v3d_gem.c @@ -201,7 +201,8 @@ v3d_attach_object_fences(struct v3d_bo **bos, int bo_count, for (i = 0; i < bo_count; i++) { /* XXX: Use shared fences for read-only objects. */ - reservation_object_add_excl_fence(bos[i]->base.resv, fence); + reservation_object_add_excl_fence(bos[i]->base.base.resv, + fence); } } @@ -237,7 +238,8 @@ v3d_lock_bo_reservations(struct v3d_bo **bos, * before we commit the CL to the hardware. 
*/ for (i = 0; i < bo_count; i++) { - ret = reservation_object_reserve_shared(bos[i]->base.resv, 1); + ret = reservation_object_reserve_shared(bos[i]->base.base.resv, + 1); if (ret) { v3d_unlock_bo_reservations(bos, bo_count, acquire_ctx); @@ -345,11 +347,11 @@ v3d_exec_cleanup(struct kref *ref) dma_fence_put(exec->render_done_fence); for (i = 0; i < exec->bo_count; i++) - drm_gem_object_put_unlocked(&exec->bo[i]->base); + drm_gem_object_put_unlocked(&exec->bo[i]->base.base); kvfree(exec->bo); list_for_each_entry_safe(bo, save, &exec->unref_list, unref_head) { - drm_gem_object_put_unlocked(&bo->base); + drm_gem_object_put_unlocked(&bo->base.base); } pm_runtime_mark_last_busy(v3d->dev); @@ -376,7 +378,7 @@ v3d_tfu_job_cleanup(struct kref *ref) for (i = 0; i < ARRAY_SIZE(job->bo); i++) { if (job->bo[i]) - drm_gem_object_put_unlocked(&job->bo[i]->base); + drm_gem_object_put_unlocked(&job->bo[i]->base.base); } pm_runtime_mark_last_busy(v3d->dev); diff --git a/drivers/gpu/drm/v3d/v3d_irq.c b/drivers/gpu/drm/v3d/v3d_irq.c index b8aea2dfbe82..b4d6ae81186d 100644 --- a/drivers/gpu/drm/v3d/v3d_irq.c +++ b/drivers/gpu/drm/v3d/v3d_irq.c @@ -37,12 +37,14 @@ v3d_overflow_mem_work(struct work_struct *work) container_of(work, struct v3d_dev, overflow_mem_work); struct drm_device *dev = &v3d->drm; struct v3d_bo *bo = v3d_bo_create(dev, NULL /* XXX: GMP */, 256 * 1024); + struct drm_gem_object *obj; unsigned long irqflags; if (IS_ERR(bo)) { DRM_ERROR("Couldn't allocate binner overflow mem\n"); return; } + obj = &bo->base.base; /* We lost a race, and our work task came in after the bin job * completed and exited. This can happen because the HW @@ -59,15 +61,15 @@ v3d_overflow_mem_work(struct work_struct *work) goto out; } - drm_gem_object_get(&bo->base); + drm_gem_object_get(obj); list_add_tail(&bo->unref_head, &v3d->bin_job->unref_list); spin_unlock_irqrestore(&v3d->job_lock, irqflags); V3D_CORE_WRITE(0, V3D_PTB_BPOA, bo->node.start << PAGE_SHIFT); - V3D_CORE_WRITE(0, V3D_PTB_BPOS, bo->base.size); + V3D_CORE_WRITE(0, V3D_PTB_BPOS, obj->size); out: - drm_gem_object_put_unlocked(&bo->base); + drm_gem_object_put_unlocked(obj); } static irqreturn_t diff --git a/drivers/gpu/drm/v3d/v3d_mmu.c b/drivers/gpu/drm/v3d/v3d_mmu.c index b00f97c31b70..7a21f1787ab1 100644 --- a/drivers/gpu/drm/v3d/v3d_mmu.c +++ b/drivers/gpu/drm/v3d/v3d_mmu.c @@ -83,13 +83,14 @@ int v3d_mmu_set_page_table(struct v3d_dev *v3d) void v3d_mmu_insert_ptes(struct v3d_bo *bo) { - struct v3d_dev *v3d = to_v3d_dev(bo->base.dev); + struct drm_gem_shmem_object *shmem_obj = &bo->base; + struct v3d_dev *v3d = to_v3d_dev(shmem_obj->base.dev); u32 page = bo->node.start; u32 page_prot = V3D_PTE_WRITEABLE | V3D_PTE_VALID; unsigned int count; struct scatterlist *sgl; - for_each_sg(bo->sgt->sgl, sgl, bo->sgt->nents, count) { + for_each_sg(shmem_obj->sgt->sgl, sgl, shmem_obj->sgt->nents, count) { u32 page_address = sg_dma_address(sgl) >> V3D_MMU_PAGE_SHIFT; u32 pte = page_prot | page_address; u32 i; @@ -102,7 +103,7 @@ void v3d_mmu_insert_ptes(struct v3d_bo *bo) } WARN_ON_ONCE(page - bo->node.start != - bo->base.size >> V3D_MMU_PAGE_SHIFT); + shmem_obj->base.size >> V3D_MMU_PAGE_SHIFT); if (v3d_mmu_flush_all(v3d)) dev_err(v3d->dev, "MMU flush timeout\n"); @@ -110,8 +111,8 @@ void v3d_mmu_insert_ptes(struct v3d_bo *bo) void v3d_mmu_remove_ptes(struct v3d_bo *bo) { - struct v3d_dev *v3d = to_v3d_dev(bo->base.dev); - u32 npages = bo->base.size >> V3D_MMU_PAGE_SHIFT; + struct v3d_dev *v3d = to_v3d_dev(bo->base.base.dev); + u32 npages = bo->base.base.size >> 
V3D_MMU_PAGE_SHIFT; u32 page; for (page = bo->node.start; page < bo->node.start + npages; page++) -- 2.20.1 _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply related [flat|nested] 8+ messages in thread
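For readers following the many ->base.base changes in this diff: after the conversion, struct v3d_bo embeds the shmem object, which in turn embeds the core GEM object, hence the extra level of indirection. The struct below is the one from the patch; the to_v3d_bo() cast helper is shown as an assumption of how the existing macro would end up, since the diff does not touch it.

struct v3d_bo {
	struct drm_gem_shmem_object base;	/* base.base is the drm_gem_object */

	struct drm_mm_node node;		/* slot in the GPU's page tables */

	/* List entry for the BO's position in v3d_exec_info->unref_list */
	struct list_head unref_head;
};

static inline struct v3d_bo *to_v3d_bo(struct drm_gem_object *obj)
{
	/* Assumed definition: chained container_of through the embedded
	 * shmem object back to the driver BO.
	 */
	return container_of(obj, struct v3d_bo, base.base);
}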
* Re: [PATCH] drm/v3d: Use the new shmem helpers to reduce driver boilerplate. 2019-03-14 16:34 ` [PATCH] drm/v3d: Use the new shmem helpers to reduce driver boilerplate Eric Anholt @ 2019-03-14 16:47 ` Eric Anholt 2019-03-14 18:15 ` Rob Herring 1 sibling, 0 replies; 8+ messages in thread From: Eric Anholt @ 2019-03-14 16:47 UTC (permalink / raw) To: dri-devel; +Cc: linux-kernel, david.emett, thomas.spurden, Rob Herring [-- Attachment #1: Type: text/plain, Size: 1168 bytes --] Eric Anholt <eric@anholt.net> writes: > The new shmem helpers from Noralf and Rob abstract out a bunch of our > BO creation and mapping code. > > v2: Use the new sgt getter, and flag pages as dirty before freeing. > > Signed-off-by: Eric Anholt <eric@anholt.net> > @@ -185,83 +33,120 @@ void v3d_free_object(struct drm_gem_object *obj) > struct v3d_dev *v3d = to_v3d_dev(obj->dev); > struct v3d_bo *bo = to_v3d_bo(obj); > > + v3d_mmu_remove_ptes(bo); > + > mutex_lock(&v3d->bo_lock); > v3d->bo_stats.num_allocated--; > v3d->bo_stats.pages_allocated -= obj->size >> PAGE_SHIFT; > mutex_unlock(&v3d->bo_lock); > > - v3d_bo_put_pages(bo); > - > - if (obj->import_attach) > - drm_prime_gem_destroy(obj, bo->sgt); > - > - v3d_mmu_remove_ptes(bo); > spin_lock(&v3d->mm_lock); > drm_mm_remove_node(&bo->node); > spin_unlock(&v3d->mm_lock); > > - mutex_destroy(&bo->lock); > + drm_gem_shmem_put_pages(&bo->base); This put_pages() should be dropped -- it generated a WARN because the shared helpers already do a put_pages after freeing the sgt. The CTS had passed, so I missed it until after I'd sent the patch. [-- Attachment #2: signature.asc --] [-- Type: application/pgp-signature, Size: 832 bytes --] ^ permalink raw reply [flat|nested] 8+ messages in thread
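With that correction applied, v3d_free_object() reads as below; this is reconstructed from the hunk in the patch with the redundant drm_gem_shmem_put_pages() dropped, assuming no other changes.

void v3d_free_object(struct drm_gem_object *obj)
{
	struct v3d_dev *v3d = to_v3d_dev(obj->dev);
	struct v3d_bo *bo = to_v3d_bo(obj);

	v3d_mmu_remove_ptes(bo);

	mutex_lock(&v3d->bo_lock);
	v3d->bo_stats.num_allocated--;
	v3d->bo_stats.pages_allocated -= obj->size >> PAGE_SHIFT;
	mutex_unlock(&v3d->bo_lock);

	spin_lock(&v3d->mm_lock);
	drm_mm_remove_node(&bo->node);
	spin_unlock(&v3d->mm_lock);

	/* GPU execution may have dirtied any pages in the BO. */
	bo->base.pages_mark_dirty_on_put = true;

	/* drm_gem_shmem_free_object() already drops the page reference taken
	 * by drm_gem_shmem_get_pages_sgt(), so no explicit put_pages() here.
	 */
	drm_gem_shmem_free_object(obj);
}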
* Re: [PATCH] drm/v3d: Use the new shmem helpers to reduce driver boilerplate. 2019-03-14 16:34 ` [PATCH] drm/v3d: Use the new shmem helpers to reduce driver boilerplate Eric Anholt 2019-03-14 16:47 ` Eric Anholt @ 2019-03-14 18:15 ` Rob Herring 2019-03-14 19:11 ` Eric Anholt 1 sibling, 1 reply; 8+ messages in thread From: Rob Herring @ 2019-03-14 18:15 UTC (permalink / raw) To: Eric Anholt; +Cc: david.emett, thomas.spurden, linux-kernel, dri-devel On Thu, Mar 14, 2019 at 11:34 AM Eric Anholt <eric@anholt.net> wrote: > > The new shmem helpers from Noralf and Rob abstract out a bunch of our > BO creation and mapping code. > > v2: Use the new sgt getter, and flag pages as dirty before freeing. > > Signed-off-by: Eric Anholt <eric@anholt.net> > --- > drivers/gpu/drm/v3d/Kconfig | 1 + > drivers/gpu/drm/v3d/v3d_bo.c | 317 ++++++++++------------------------ > drivers/gpu/drm/v3d/v3d_drv.c | 27 +-- > drivers/gpu/drm/v3d/v3d_drv.h | 14 +- > drivers/gpu/drm/v3d/v3d_gem.c | 12 +- > drivers/gpu/drm/v3d/v3d_irq.c | 8 +- > drivers/gpu/drm/v3d/v3d_mmu.c | 11 +- > 7 files changed, 120 insertions(+), 270 deletions(-) > diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c > index 945eaaaad016..b84d89c7b3fb 100644 > --- a/drivers/gpu/drm/v3d/v3d_gem.c > +++ b/drivers/gpu/drm/v3d/v3d_gem.c > @@ -201,7 +201,8 @@ v3d_attach_object_fences(struct v3d_bo **bos, int bo_count, > > for (i = 0; i < bo_count; i++) { > /* XXX: Use shared fences for read-only objects. */ > - reservation_object_add_excl_fence(bos[i]->base.resv, fence); > + reservation_object_add_excl_fence(bos[i]->base.base.resv, > + fence); All these 'bos[i]->base.base' occurrences are not the prettiest. Changing your bos array to a struct drm_gem_object ** instead would help. That's what I did for panfrost. Though I've not looked at were you actually need the v3d_bo. Either way, Reviewed-by: Rob Herring <robh@kernel.org> Rob _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 8+ messages in thread
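A sketch of the cleanup Rob suggests, applied to the hunk he quotes (hypothetical, not part of the posted patch): track the job's buffers as plain GEM objects and cast with to_v3d_bo() only where driver-specific fields are actually needed, which makes the reservation-object accesses read naturally again.

#include <linux/dma-fence.h>
#include <linux/reservation.h>
#include <drm/drm_gem.h>

static void
v3d_attach_object_fences(struct drm_gem_object **bos, int bo_count,
			 struct dma_fence *fence)
{
	int i;

	for (i = 0; i < bo_count; i++) {
		/* XXX: Use shared fences for read-only objects. */
		reservation_object_add_excl_fence(bos[i]->resv, fence);
	}
}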
* Re: [PATCH] drm/v3d: Use the new shmem helpers to reduce driver boilerplate. 2019-03-14 18:15 ` Rob Herring @ 2019-03-14 19:11 ` Eric Anholt 0 siblings, 0 replies; 8+ messages in thread From: Eric Anholt @ 2019-03-14 19:11 UTC (permalink / raw) To: Rob Herring; +Cc: david.emett, thomas.spurden, linux-kernel, dri-devel [-- Attachment #1.1: Type: text/plain, Size: 1785 bytes --] Rob Herring <robh@kernel.org> writes: > On Thu, Mar 14, 2019 at 11:34 AM Eric Anholt <eric@anholt.net> wrote: >> >> The new shmem helpers from Noralf and Rob abstract out a bunch of our >> BO creation and mapping code. >> >> v2: Use the new sgt getter, and flag pages as dirty before freeing. >> >> Signed-off-by: Eric Anholt <eric@anholt.net> >> --- >> drivers/gpu/drm/v3d/Kconfig | 1 + >> drivers/gpu/drm/v3d/v3d_bo.c | 317 ++++++++++------------------------ >> drivers/gpu/drm/v3d/v3d_drv.c | 27 +-- >> drivers/gpu/drm/v3d/v3d_drv.h | 14 +- >> drivers/gpu/drm/v3d/v3d_gem.c | 12 +- >> drivers/gpu/drm/v3d/v3d_irq.c | 8 +- >> drivers/gpu/drm/v3d/v3d_mmu.c | 11 +- >> 7 files changed, 120 insertions(+), 270 deletions(-) > >> diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c >> index 945eaaaad016..b84d89c7b3fb 100644 >> --- a/drivers/gpu/drm/v3d/v3d_gem.c >> +++ b/drivers/gpu/drm/v3d/v3d_gem.c >> @@ -201,7 +201,8 @@ v3d_attach_object_fences(struct v3d_bo **bos, int bo_count, >> >> for (i = 0; i < bo_count; i++) { >> /* XXX: Use shared fences for read-only objects. */ >> - reservation_object_add_excl_fence(bos[i]->base.resv, fence); >> + reservation_object_add_excl_fence(bos[i]->base.base.resv, >> + fence); > > All these 'bos[i]->base.base' occurrences are not the prettiest. > Changing your bos array to a struct drm_gem_object ** instead would > help. That's what I did for panfrost. Though I've not looked at were > you actually need the v3d_bo. Agreed that would probably be a goodcleanup, but I'm hoping I can land the compute shaders first. I've been doing a lot of rebasing of that series and I want to just get it done. [-- Attachment #1.2: signature.asc --] [-- Type: application/pgp-signature, Size: 832 bytes --] [-- Attachment #2: Type: text/plain, Size: 159 bytes --] _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 8+ messages in thread