From: David Hildenbrand <email@example.com>
To: Mel Gorman <firstname.lastname@example.org>
Cc: Dave Hansen <email@example.com>, Andrew Morton <firstname.lastname@example.org>,
	Hillf Danton <email@example.com>, Dave Hansen <firstname.lastname@example.org>,
	Vlastimil Babka <email@example.com>, Michal Hocko <firstname.lastname@example.org>,
	LKML <email@example.com>, Linux-MM <firstname.lastname@example.org>,
	"Tang, Feng" <email@example.com>
Subject: Re: [PATCH 0/6 v2] Calculate pcp->high based on zone sizes and active CPUs
Date: Fri, 28 May 2021 12:21:20 +0200
Message-ID: <firstname.lastname@example.org> (raw)
In-Reply-To: <20210528100918.GM30378@techsingularity.net>

On 28.05.21 12:09, Mel Gorman wrote:
> On Fri, May 28, 2021 at 11:52:53AM +0200, David Hildenbrand wrote:
>>>> "Disable pcplists so that page isolation cannot race with freeing
>>>> in a way that pages from isolated pageblock are left on pcplists."
>>>>
>>>> Guess we'd then want to move the draining before start_isolate_page_range()
>>>> in alloc_contig_range().
>>>>
>>>
>>> Or instead of draining, validate the PFN range in alloc_contig_range
>>> is within the same zone and if so, call zone_pcp_disable() before
>>> start_isolate_page_range and enable after __alloc_contig_migrate_range.
>>>
>>
>> We require the caller to only pass a range within a single zone, so that
>> should be fine.
>>
>> The only ugly thing about zone_pcp_disable() is
>> mutex_lock(&pcp_batch_high_lock), which would serialize all
>> alloc_contig_range() calls and even against offline_pages().
>>
>
> True, so it would have to be assessed whether that is bad or not. If racing
> against offline_pages, memory is potentially being offlined in the
> target zone which may cause allocation failure. If racing with other
> alloc_contig_range calls, the two callers are potentially racing to
> isolate and allocate the same range. The argument could be made that
> alloc_contig_range should be serialised within one zone to improve the
> allocation success rate at the potential cost of allocation latency.

We have 3 main users of alloc_contig_range():

1. CMA

   CMA synchronizes allocations within a CMA area via the allocation
   bitmap, so parallel CMA is perfectly possible and avoids races by
   design.

2. alloc_contig_pages() / gigantic pages

   Gigantic page allocation could race with virtio-mem; CMA does not
   apply. Possible, but unlikely to matter in practice; virtio-mem will
   simply retry again later.

3. virtio-mem

   A virtio-mem device only operates on its assigned memory region, so
   alloc_contig_range() calls from different devices cannot race, even
   within a single zone. It could only race with gigantic pages, as CMA
   does not apply.

So serializing would mostly harm parallel CMA (possible and likely) and
parallel virtio-mem operation (e.g., unplugging memory of two virtio-mem
devices -- unlikely but possible).

Memory offlining racing with CMA is not an issue (impossible).
virtio-mem synchronizes with memory offlining via memory notifiers;
there is only a tiny race window that usually doesn't matter, as
virtio-mem is expected to usually trigger offlining itself, rather than
user space doing so at a random time. Memory offlining can race with
dynamic gigantic page allocation, which is highly unreliable already.

I wonder if we could optimize locking in zone_pcp_disable() instead.

--
Thanks,

David / dhildenb
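For illustration, here is a rough sketch of the ordering Mel suggests above:
disable the zone's pcplists before isolating the range and re-enable them
once migration is done, so that freed pages cannot end up on pcplists while
the pageblocks are isolated. This is not the actual mm/page_alloc.c code;
signatures are simplified, error handling is omitted, and the migration step
is a hypothetical placeholder standing in for __alloc_contig_migrate_range().

/*
 * Sketch only, written as if it lived in mm/page_alloc.c where
 * zone_pcp_disable()/zone_pcp_enable() and start_isolate_page_range()
 * are visible. do_migrate_range_sketch() is a hypothetical placeholder.
 */
static int contig_range_sketch(unsigned long start, unsigned long end,
			       unsigned int migratetype)
{
	/* Callers already guarantee [start, end) lies within a single zone. */
	struct zone *zone = page_zone(pfn_to_page(start));
	int ret;

	/*
	 * Takes pcp_batch_high_lock internally, so this also serializes
	 * with offline_pages() and other alloc_contig_range() users --
	 * the cost being discussed in this thread.
	 */
	zone_pcp_disable(zone);

	ret = start_isolate_page_range(start, end, migratetype, 0);
	if (!ret) {
		/* Migrate pages out of the now-isolated range. */
		ret = do_migrate_range_sketch(start, end);
	}

	/* Isolation can no longer race with freeing to pcplists. */
	zone_pcp_enable(zone);
	return ret;
}

Whether this is acceptable hinges on the serialization question raised above:
zone_pcp_disable() funnels every caller through pcp_batch_high_lock, which is
exactly what the reply argues would mostly hurt parallel CMA and parallel
virtio-mem operation rather than the cases that actually race.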