linux-mm.kvack.org archive mirror
From: David Hildenbrand <david@redhat.com>
To: Vlastimil Babka <vbabka@suse.cz>, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Michal Hocko <mhocko@kernel.org>,
	Pavel Tatashin <pasha.tatashin@soleen.com>,
	Oscar Salvador <osalvador@suse.de>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Michal Hocko <mhocko@suse.com>
Subject: Re: [PATCH 9/9] mm, page_alloc: optionally disable pcplists during page isolation
Date: Thu, 1 Oct 2020 10:47:14 +0200	[thread overview]
Message-ID: <99642169-03e0-f71f-af52-c901d5a88d0c@redhat.com> (raw)
In-Reply-To: <2ce92f9a-eaa2-45b2-207c-46a79d6a2bde@suse.cz>

On 25.09.20 13:10, Vlastimil Babka wrote:
> On 9/25/20 12:54 PM, David Hildenbrand wrote:
>>>> --- a/mm/page_isolation.c
>>>> +++ b/mm/page_isolation.c
>>>> @@ -15,6 +15,22 @@
>>>>  #define CREATE_TRACE_POINTS
>>>>  #include <trace/events/page_isolation.h>
>>>>  
>>>> +void zone_pcplist_disable(struct zone *zone)
>>>> +{
>>>> +	down_read(&pcp_batch_high_lock);
>>>> +	if (atomic_inc_return(&zone->pcplist_disabled) == 1) {
>>>> +		zone_update_pageset_high_and_batch(zone, 0, 1);
>>>> +		__drain_all_pages(zone, true);
>>>> +	}
>>> Hm, if one CPU is still inside the if-clause, the other one would
>>> continue; however, the pcplists would not be disabled and the zones
>>> not drained when it returns.
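
To spell the race out with the code as quoted (an illustrative
interleaving, not taken from the thread):

  CPU A                                  CPU B
  zone_pcplist_disable()
    down_read(&pcp_batch_high_lock)
    atomic_inc_return() == 1
    zone_update_pageset_high_and_batch()
                                         zone_pcplist_disable()
                                           down_read(&pcp_batch_high_lock)
                                           atomic_inc_return() == 2
                                           returns with pcplists not yet
                                           disabled or drained
    __drain_all_pages()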
> 
> Ah, well spotted, thanks!
> 
>>> (while we only allow a single offline_pages() call, it will be different
>>> when we use the function in other contexts - especially,
>>> alloc_contig_range() for some users)
>>>
>>> Can't we use down_write() here, so it's serialized and everybody has to
>>> properly wait? (And we would not have to rely on an atomic_t.)
>> Sorry, I meant taking down_write() only temporarily in this code path,
>> not keeping it write-locked when returning (I remember there is a way to
>> downgrade).
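
(For reference, rwsems can be downgraded with downgrade_write(); a rough,
untested sketch of that variant, reusing the names from the quoted patch:

  void zone_pcplist_disable(struct zone *zone)
  {
  	/* Serialize the first disabler's work under the write lock... */
  	down_write(&pcp_batch_high_lock);
  	if (atomic_inc_return(&zone->pcplist_disabled) == 1) {
  		zone_update_pageset_high_and_batch(zone, 0, 1);
  		__drain_all_pages(zone, true);
  	}
  	/* ...then keep only the read lock while pcplists stay disabled. */
  	downgrade_write(&pcp_batch_high_lock);
  }

As the reply below notes, the temporary write lock still makes every new
caller wait until a previous caller's update and drain have finished.)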
> 
> Hmm, that temporary write lock would still block new callers until the
> previous ones finish with the downgraded-to-read lock.
> 
> But I guess something like this would work:
> 
> retry:
>   if (atomic_read(...) == 0) {
>     // zone_update... + drain
>     atomic_inc(...);
>   } else if (atomic_inc_return(...) == 1) {
>     // atomic_cmpxchg from 0 to 1; if that fails, goto retry
>   }
> 
> Tricky, but races could only lead to unnecessary duplicated updates +
> flushing, and nothing worse?
> 
> Or add another spinlock to cover this part instead of the temp write lock...

My gut feeling is that that would be the cleanest approach.
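
Something like this, perhaps - untested, and with a mutex rather than a
spinlock, since __drain_all_pages() can sleep. The lock name is made up;
everything else follows the quoted patch. A sketch of the idea, not a
finished implementation:

  static DEFINE_MUTEX(pcplist_disable_mutex);

  void zone_pcplist_disable(struct zone *zone)
  {
  	down_read(&pcp_batch_high_lock);
  	/* Make check-and-disable atomic w.r.t. other disablers. */
  	mutex_lock(&pcplist_disable_mutex);
  	if (atomic_inc_return(&zone->pcplist_disabled) == 1) {
  		zone_update_pageset_high_and_batch(zone, 0, 1);
  		__drain_all_pages(zone, true);
  	}
  	mutex_unlock(&pcplist_disable_mutex);
  }

A second caller then blocks on the mutex until the first caller's update
and drain have completed, so nobody returns while the pcplists are still
live, and the read lock keeps being held for as long as they stay disabled.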


-- 
Thanks,

David / dhildenb




Thread overview: 41+ messages
2020-09-22 14:37 [PATCH 0/9] disable pcplists during memory offline Vlastimil Babka
2020-09-22 14:37 ` [PATCH 1/9] mm, page_alloc: clean up pageset high and batch update Vlastimil Babka
2020-09-25 10:18   ` David Hildenbrand
2020-10-05 12:03   ` Michal Hocko
2020-09-22 14:37 ` [PATCH 2/9] mm, page_alloc: calculate pageset high and batch once per zone Vlastimil Babka
2020-10-05 12:52   ` Michal Hocko
2020-10-06 22:04     ` Vlastimil Babka
2020-09-22 14:37 ` [PATCH 3/9] mm, page_alloc: remove setup_pageset() Vlastimil Babka
2020-09-25 10:19   ` David Hildenbrand
2020-10-05 12:59   ` Michal Hocko
2020-10-06 22:11     ` Vlastimil Babka
2020-09-22 14:37 ` [PATCH 4/9] mm, page_alloc: simplify pageset_update() Vlastimil Babka
2020-09-25 10:23   ` David Hildenbrand
2020-10-05 13:20   ` Michal Hocko
2020-09-22 14:37 ` [PATCH 5/9] mm, page_alloc: make per_cpu_pageset accessible only after init Vlastimil Babka
2020-09-25 10:25   ` David Hildenbrand
2020-10-05 13:24   ` Michal Hocko
2020-10-06 22:28     ` Vlastimil Babka
2020-09-22 14:37 ` [PATCH 6/9] mm, page_alloc: cache pageset high and batch in struct zone Vlastimil Babka
2020-09-25 10:34   ` David Hildenbrand
2020-10-06 22:31     ` Vlastimil Babka
2020-10-05 13:28   ` Michal Hocko
2020-10-06 22:34     ` Vlastimil Babka
2020-09-22 14:37 ` [PATCH 7/9] mm, page_alloc: move draining pcplists to page isolation users Vlastimil Babka
2020-09-25 10:39   ` David Hildenbrand
2020-10-05 13:57   ` Michal Hocko
2020-09-22 14:37 ` [PATCH 8/9] mm, page_alloc: drain all pcplists during memory offline Vlastimil Babka
2020-09-25 10:46   ` David Hildenbrand
2020-10-05 14:03     ` Michal Hocko
2020-09-22 14:37 ` [PATCH 9/9] mm, page_alloc: optionally disable pcplists during page isolation Vlastimil Babka
2020-09-25 10:53   ` David Hildenbrand
2020-09-25 10:54     ` David Hildenbrand
2020-09-25 11:10       ` Vlastimil Babka
2020-10-01  8:47         ` David Hildenbrand [this message]
2020-10-05 14:05         ` Michal Hocko
2020-10-05 14:22           ` Vlastimil Babka
2020-10-05 16:56             ` Michal Hocko
2020-10-06  8:34   ` Michal Hocko
2020-10-06  8:40     ` David Hildenbrand
2020-10-06 10:05       ` Michal Hocko
2020-09-22 17:15 ` [PATCH 0/9] disable pcplists during memory offline David Hildenbrand
