From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Christoph Hellwig <hch@lst.de>,
Andrew Morton <akpm@linux-foundation.org>
Cc: Juergen Gross <jgross@suse.com>,
Stefano Stabellini <sstabellini@kernel.org>,
Matthew Wilcox <willy@infradead.org>,
dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
Peter Zijlstra <peterz@infradead.org>,
linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
x86@kernel.org, Chris Wilson <chris@chris-wilson.co.uk>,
Minchan Kim <minchan@kernel.org>,
Matthew Auld <matthew.auld@intel.com>,
xen-devel@lists.xenproject.org,
Boris Ostrovsky <boris.ostrovsky@oracle.com>,
Nitin Gupta <ngupta@vflare.org>
Subject: Re: [Intel-gfx] [PATCH 06/11] drm/i915: use vmap in shmem_pin_map
Date: Fri, 25 Sep 2020 14:05:32 +0100
Message-ID: <9459e195-a412-3357-c53d-4349e600896d@linux.intel.com>
In-Reply-To: <20200924135853.875294-7-hch@lst.de>
On 24/09/2020 14:58, Christoph Hellwig wrote:
> shmem_pin_map somewhat awkwardly reimplements vmap using
> alloc_vm_area and manual pte setup. The only practical difference
> is that alloc_vm_area prefaults the vmalloc area PTEs, which doesn't
> seem to be required here (and could be added to vmap using a flag if
> actually required). Switch to use vmap, and use vfree to free both the
> vmalloc mapping and the page array, as well as dropping the references
> to each page.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> drivers/gpu/drm/i915/gt/shmem_utils.c | 76 +++++++--------------------
> 1 file changed, 18 insertions(+), 58 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/shmem_utils.c b/drivers/gpu/drm/i915/gt/shmem_utils.c
> index 43c7acbdc79dea..f011ea42487e11 100644
> --- a/drivers/gpu/drm/i915/gt/shmem_utils.c
> +++ b/drivers/gpu/drm/i915/gt/shmem_utils.c
> @@ -49,80 +49,40 @@ struct file *shmem_create_from_object(struct drm_i915_gem_object *obj)
> return file;
> }
>
> -static size_t shmem_npte(struct file *file)
> -{
> - return file->f_mapping->host->i_size >> PAGE_SHIFT;
> -}
> -
> -static void __shmem_unpin_map(struct file *file, void *ptr, size_t n_pte)
> -{
> - unsigned long pfn;
> -
> - vunmap(ptr);
> -
> - for (pfn = 0; pfn < n_pte; pfn++) {
> - struct page *page;
> -
> - page = shmem_read_mapping_page_gfp(file->f_mapping, pfn,
> - GFP_KERNEL);
> - if (!WARN_ON(IS_ERR(page))) {
> - put_page(page);
> - put_page(page);
> - }
> - }
> -}
> -
> void *shmem_pin_map(struct file *file)
> {
> - const size_t n_pte = shmem_npte(file);
> - pte_t *stack[32], **ptes, **mem;
> - struct vm_struct *area;
> - unsigned long pfn;
> -
> - mem = stack;
> - if (n_pte > ARRAY_SIZE(stack)) {
> - mem = kvmalloc_array(n_pte, sizeof(*mem), GFP_KERNEL);
> - if (!mem)
> - return NULL;
> - }
> + struct page **pages;
> + size_t n_pages, i;
> + void *vaddr;
>
> - area = alloc_vm_area(n_pte << PAGE_SHIFT, mem);
> - if (!area) {
> - if (mem != stack)
> - kvfree(mem);
> + n_pages = file->f_mapping->host->i_size >> PAGE_SHIFT;
> + pages = kvmalloc_array(n_pages, sizeof(*pages), GFP_KERNEL);
> + if (!pages)
> return NULL;
> - }
>
> - ptes = mem;
> - for (pfn = 0; pfn < n_pte; pfn++) {
> - struct page *page;
> -
> - page = shmem_read_mapping_page_gfp(file->f_mapping, pfn,
> - GFP_KERNEL);
> - if (IS_ERR(page))
> + for (i = 0; i < n_pages; i++) {
> + pages[i] = shmem_read_mapping_page_gfp(file->f_mapping, i,
> + GFP_KERNEL);
> + if (IS_ERR(pages[i]))
> goto err_page;
> -
> - **ptes++ = mk_pte(page, PAGE_KERNEL);
> }
>
> - if (mem != stack)
> - kvfree(mem);
> -
> + vaddr = vmap(pages, n_pages, VM_MAP_PUT_PAGES, PAGE_KERNEL);
> + if (!vaddr)
> + goto err_page;
> mapping_set_unevictable(file->f_mapping);
> - return area->addr;
> -
> + return vaddr;
> err_page:
> - if (mem != stack)
> - kvfree(mem);
> -
> - __shmem_unpin_map(file, area->addr, pfn);
> +	while (i--)
> + put_page(pages[i]);
> + kvfree(pages);
> return NULL;
> }
>
> void shmem_unpin_map(struct file *file, void *ptr)
> {
> mapping_clear_unevictable(file->f_mapping);
> - __shmem_unpin_map(file, ptr, shmem_npte(file));
> + vfree(ptr);
> }
>
> static int __shmem_rw(struct file *file, loff_t off,
>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Regards,
Tvrtko