From: Vlastimil Babka <firstname.lastname@example.org>
To: Andrew Morton <email@example.com>
Cc: firstname.lastname@example.org, email@example.com, Michal Hocko <firstname.lastname@example.org>, Pavel Tatashin <email@example.com>, David Hildenbrand <firstname.lastname@example.org>, Oscar Salvador <email@example.com>, Joonsoo Kim <firstname.lastname@example.org>, Vlastimil Babka <email@example.com>
Subject: [PATCH v3 0/7] disable pcplists during memory offline
Date: Wed, 11 Nov 2020 10:28:05 +0100
Message-ID: <firstname.lastname@example.org>

Changes since v2:
- add acks/reviews (thanks David and Oscar)
- small wording and style changes
- rebase to next-20201111

Changes since v1:
- add acks/reviews (thanks David and Michal)
- drop "mm, page_alloc: make per_cpu_pageset accessible only after init"
  as that's orthogonal and needs more consideration
- squash "mm, page_alloc: drain all pcplists during memory offline" into
  the last patch, and move the new zone_pcp_* functions into
  mm/page_alloc.c. As such, the new 'force all cpus' parameter of
  __drain_all_pages() is never exported outside page_alloc.c, so I didn't
  add a new wrapper function to hide the bool
- keep pcp_batch_high_lock a mutex, as offline_pages is synchronized
  anyway, as suggested by Michal. Thus we don't need an atomic variable
  and synchronization around it, and the patch is much smaller. If
  alloc_contig_range() wants to use the new functionality and keep
  parallelism, we can add that on top.

As per the discussions, this is an attempt to implement David's
suggestion that page isolation should disable pcplists to avoid races
with page freeing in progress. This is done without extra checks in the
fast paths, as explained in patch 7. The repeated draining done
previously is then no longer needed.

The previous version (RFC) tried to hide the pcplist disabling/enabling
inside page isolation itself, but that wasn't completely possible, as
memory offline does not undo the isolation.
Michal suggested an explicit API instead, so that's the current
implementation, and it does seem nicer. Once we accept that page
isolation users need to take explicit actions around it depending on the
guarantees they need, we can IMHO also accept that the current pcplist
draining can be done by the callers, which is more effective. After all,
there are only two users of page isolation. So patch 6 does effectively
the same thing as Pavel proposed earlier, and patch 7 implements the
stronger guarantees only for memory offline. If CMA decides to opt in to
the stronger guarantee, it can be added later.

Patches 1-5 are preparatory cleanups for the pcplist disabling.

The patchset was briefly tested in QEMU so that memory online/offline
works, but I haven't done a stress test that would prove the race is
eliminated.

Note that patch 7 could be avoided if we instead adjusted page freeing
as shown earlier, but I believe the current implementation of disabling
pcplists is not too complex, so I would prefer this over adding new
checks and a longer irq-disabled section into the page freeing hotpaths.
Links:
https://email@example.com/
https://firstname.lastname@example.org/
https://email@example.com/
https://lore.kernel.org/linux-mm/20200909113647.GG7348@dhcp22.suse.cz/
https://firstname.lastname@example.org/
https://email@example.com/
https://firstname.lastname@example.org/
https://email@example.com/

Vlastimil Babka (7):
  mm, page_alloc: clean up pageset high and batch update
  mm, page_alloc: calculate pageset high and batch once per zone
  mm, page_alloc: remove setup_pageset()
  mm, page_alloc: simplify pageset_update()
  mm, page_alloc: cache pageset high and batch in struct zone
  mm, page_alloc: move draining pcplists to page isolation users
  mm, page_alloc: disable pcplists during memory offline

 include/linux/mmzone.h |   6 ++
 mm/internal.h          |   2 +
 mm/memory_hotplug.c    |  27 +++---
 mm/page_alloc.c        | 195 ++++++++++++++++++++++++-----------------
 mm/page_isolation.c    |  10 +--
 5 files changed, 141 insertions(+), 99 deletions(-)

-- 
2.29.1
Thread overview: 19+ messages

2020-11-11  9:28 Vlastimil Babka [this message]
2020-11-11  9:28 ` [PATCH v3 1/7] mm, page_alloc: clean up pageset high and batch update Vlastimil Babka
2020-11-11  9:55   ` Pankaj Gupta
2020-11-11  9:28 ` [PATCH v3 2/7] mm, page_alloc: calculate pageset high and batch once per zone Vlastimil Babka
2020-11-11 10:19   ` Pankaj Gupta
2020-11-11  9:28 ` [PATCH v3 3/7] mm, page_alloc: remove setup_pageset() Vlastimil Babka
2020-11-11 10:23   ` Pankaj Gupta
2020-11-11  9:28 ` [PATCH v3 4/7] mm, page_alloc: simplify pageset_update() Vlastimil Babka
2020-11-11  9:28 ` [PATCH v3 5/7] mm, page_alloc: cache pageset high and batch in struct zone Vlastimil Babka
2020-11-12 16:26   ` David Hildenbrand
2020-11-11  9:28 ` [PATCH v3 6/7] mm, page_alloc: move draining pcplists to page isolation users Vlastimil Babka
2020-11-11  9:28 ` [PATCH v3 7/7] mm, page_alloc: disable pcplists during memory offline Vlastimil Babka
2020-11-11 17:58   ` David Hildenbrand
2020-11-12 15:18     ` Vlastimil Babka
2020-11-12 16:09       ` David Hildenbrand
2020-11-11 17:59   ` David Hildenbrand