From: David Hildenbrand <david@redhat.com>
To: Michal Hocko <mhocko@suse.com>, Minchan Kim <minchan@kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>,
Andrew Morton <akpm@linux-foundation.org>,
linux-mm <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>,
cgoldswo@codeaurora.org, linux-fsdevel@vger.kernel.org,
vbabka@suse.cz, viro@zeniv.linux.org.uk, joaodias@google.com
Subject: Re: [RFC 1/2] mm: disable LRU pagevec during the migration temporarily
Date: Thu, 18 Feb 2021 09:24:27 +0100 [thread overview]
Message-ID: <0c9bc288-4713-f552-ce97-d050eb749e20@redhat.com> (raw)
In-Reply-To: <YC4ifqXYEeWrj4aF@dhcp22.suse.cz>
On 18.02.21 09:17, Michal Hocko wrote:
> On Wed 17-02-21 13:32:05, Minchan Kim wrote:
>> On Wed, Feb 17, 2021 at 09:16:12PM +0000, Matthew Wilcox wrote:
>>> On Wed, Feb 17, 2021 at 12:46:19PM -0800, Minchan Kim wrote:
>>>>> I suspect you do not want to add atomic_read inside hot paths, right? Is
>>>>> this really something that we have to microoptimize for? atomic_read is
>>>>> a simple READ_ONCE on many archs.
>>>>
>>>> It's also spin_lock_irqsave on some arches. If the new synchronization
>>>> were heavily complicated, an atomic would be better as a simple start,
>>>> but I thought this locking scheme was simple enough that there was no
>>>> need for an atomic operation on the read side.
>>>
>>> What arch uses a spinlock for atomic_read()? I just had a quick grep and
>>> didn't see any.
>>
>> Ah, my bad. I was confused with the update side.
>> Okay, let's use an atomic op to keep it simple.
>
> Thanks. This should make the code much simpler. Before you send
> another version for review, I have another thing to consider. You are
> wiring this into the migration code, but control over the LRU pcp
> caches could be useful in other paths as well. Memory offlining would be
> another user. We already disable the page allocator pcp caches there to
> prevent regular draining. We could do the same with the LRU pcp caches.
>
Agreed. And dealing with the PCP caches more reliably might also be of
interest in the context of a more reliable alloc_contig_range().
--
Thanks,
David / dhildenb