From: Chris Wilson <chris@chris-wilson.co.uk>
To: Jason Gunthorpe <jgg@ziepe.ca>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
intel-gfx@lists.freedesktop.org,
Andrew Morton <akpm@linux-foundation.org>,
Jan Kara <jack@suse.cz>, Jérôme Glisse <jglisse@redhat.com>,
John Hubbard <jhubbard@nvidia.com>,
Claudio Imbrenda <imbrenda@linux.ibm.com>,
"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Subject: Re: [PATCH] mm: Skip opportunistic reclaim for dma pinned pages
Date: Wed, 24 Jun 2020 21:23:53 +0100 [thread overview]
Message-ID: <159303023309.4527.5420769464370063531@build.alporthouse.com> (raw)
In-Reply-To: <20200624192116.GO6578@ziepe.ca>
Quoting Jason Gunthorpe (2020-06-24 20:21:16)
> On Wed, Jun 24, 2020 at 08:14:17PM +0100, Chris Wilson wrote:
> > A general rule of thumb is that shrinkers should be fast and effective.
> > They are called from direct reclaim at the most inconvenient of times,
> > when the caller is waiting for a page. If we attempt to reclaim a page
> > being pinned for active dma [pin_user_pages()], we will incur far
> > greater latency than for a normal anonymous page mapped multiple times.
> > Worse, the page may be in use indefinitely by the HW and unable to be
> > reclaimed in a timely manner.
>
> A pinned page can't be migrated, discarded or swapped by definition -
> it would cause data corruption.
>
> So, how do things even get here and/or work today at all? I think the
> explanation is missing something important.
[<0>] userptr_mn_invalidate_range_start+0xa7/0x170 [i915]
[<0>] __mmu_notifier_invalidate_range_start+0x110/0x150
[<0>] try_to_unmap_one+0x790/0x870
[<0>] rmap_walk_file+0xe9/0x230
[<0>] rmap_walk+0x27/0x30
[<0>] try_to_unmap+0x89/0xc0
[<0>] shrink_page_list+0x88a/0xf50
[<0>] shrink_inactive_list+0x137/0x2f0
[<0>] shrink_lruvec+0x4ec/0x5f0
[<0>] shrink_node+0x15d/0x410
[<0>] try_to_free_pages+0x17f/0x430
[<0>] __alloc_pages_slowpath+0x2ab/0xcc0
[<0>] __alloc_pages_nodemask+0x1ad/0x1e0
[<0>] new_slab+0x2d8/0x310
[<0>] ___slab_alloc.constprop.0+0x288/0x520
[<0>] __slab_alloc.constprop.0+0xd/0x20
[<0>] kmem_cache_alloc_trace+0x1ad/0x1c0
and that hits an active pin_user_pages() object.
Is there any information in particular that would help?
-Chris