From: Matthew Wilcox <willy@infradead.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	linux-mm@kvack.org, Peter Zijlstra <peterz@infradead.org>,
	intel-gfx@lists.freedesktop.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, Minchan Kim <minchan@kernel.org>,
	dri-devel@lists.freedesktop.org, xen-devel@lists.xenproject.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Nitin Gupta <ngupta@vflare.org>
Subject: Re: [Intel-gfx] [PATCH 3/6] drm/i915: use vmap in shmem_pin_map
Date: Tue, 22 Sep 2020 12:21:44 +0100
Message-ID: <20200922112144.GB32101@casper.infradead.org>
In-Reply-To: <20200922062249.GA30831@lst.de>

On Tue, Sep 22, 2020 at 08:22:49AM +0200, Christoph Hellwig wrote:
> On Mon, Sep 21, 2020 at 08:11:57PM +0100, Matthew Wilcox wrote:
> > This is awkward.  I'd like it if we had a vfree() variant which called
> > put_page() instead of __free_pages().  I'd like it even more if we
> > used release_pages() instead of our own loop that called put_page().
> 
> Note that we don't need a new vfree variant, we can do this manually if
> we want, take a look at kernel/dma/remap.c.  But I thought this code
> intentionally doesn't want to do that to avoid locking in the memory
> for the pages array.  Maybe the i915 maintainers can clarify.
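
The teardown shape under discussion, sketched with hypothetical names (this is not the actual i915 code): unmap the kernel virtual range first, then drop the page references taken at pin time, either with an open-coded put_page() loop or batched via release_pages().  The header that declares release_pages() has moved between kernel versions, so treat the includes as approximate:

#include <linux/mm.h>
#include <linux/pagemap.h>      /* release_pages(); location varies by version */
#include <linux/vmalloc.h>

/* Hypothetical helper, shaped like a shmem_unpin_map() would be. */
static void example_unpin_map(void *vaddr, struct page **pages, int n_pages)
{
        int i;

        vunmap(vaddr);                  /* tear down the kernel mapping */

        for (i = 0; i < n_pages; i++)   /* one refcount operation per page ... */
                put_page(pages[i]);

        /*
         * ... or, batched as suggested above:
         *
         *      release_pages(pages, n_pages);
         *
         * which does the same reference drops while amortising the
         * per-page bookkeeping.
         */

        kvfree(pages);                  /* assuming kvmalloc_array() at pin time */
}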

Actually, vfree() will work today; I cc'd you on a documentation update
to make it clear that this is permitted.
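
One way to see why that is safe, as far as page lifetimes are concerned: vfree() frees each page of an allocation with __free_pages(page, 0), and __free_pages() is reference counted; the page only goes back to the allocator when the last reference is dropped.  A minimal demonstration of that semantic, illustrative only:

#include <linux/gfp.h>
#include <linux/mm.h>

static void refcount_demo(void)
{
        struct page *page = alloc_page(GFP_KERNEL);

        if (!page)
                return;

        get_page(page);         /* refcount 2: a second user takes a reference */
        __free_pages(page, 0);  /* refcount 1: the page stays allocated */
        __free_pages(page, 0);  /* refcount 0: the page is actually freed */
}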

From my current experience with the i915 shmem code, I think that the
i915 maintainers are experts at graphics, and are unfamiliar with the MM.
There are a number of places where they do things the hard way.
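
For readers without the patch in front of them, the pin side being converted looks roughly like the following.  This is a simplified sketch with hypothetical names, not the real shmem_pin_map(); the details of the actual i915 code are simplified:

#include <linux/err.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/shmem_fs.h>
#include <linux/vmalloc.h>

/*
 * Pin a shmem file's pages and map them contiguously into kernel
 * virtual address space.  Each shmem_read_mapping_page() returns its
 * page with a reference held; that is the reference the teardown
 * sketched earlier drops.
 */
static void *example_pin_map(struct file *file, size_t n_pages,
                             struct page ***pages_out)
{
        struct page **pages;
        void *vaddr;
        size_t i;

        pages = kvmalloc_array(n_pages, sizeof(*pages), GFP_KERNEL);
        if (!pages)
                return NULL;

        for (i = 0; i < n_pages; i++) {
                pages[i] = shmem_read_mapping_page(file->f_mapping, i);
                if (IS_ERR(pages[i]))
                        goto err;
        }

        vaddr = vmap(pages, n_pages, VM_MAP, PAGE_KERNEL);
        if (!vaddr)
                goto err;

        *pages_out = pages;     /* the unpin side needs these back */
        return vaddr;

err:
        while (i--)
                put_page(pages[i]);
        kvfree(pages);
        return NULL;
}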