Subject: [PATCH v2 1/2] mm: cma: allocate cma areas bottom-up
From: Roman Gushchin <guro@fb.com>
Date: 2020-12-17 20:12 UTC
  To: Andrew Morton, Mike Rapoport, linux-mm
  Cc: Joonsoo Kim, Rik van Riel, Michal Hocko, linux-kernel,
	kernel-team, Roman Gushchin

Currently cma areas without a fixed base address are allocated close
to the end of the node. This placement is sub-optimal because of how
compaction works: it migrates pages toward the end of the zone, i.e.
into the cma area. In particular, it can bring in hot executable
pages, even if there is plenty of free memory on the machine. This
results in cma allocation failures.

Instead, let's place cma areas close to the beginning of a node. With
this placement compaction tends to migrate pages out of the cma area
rather than into it, resulting in better cma allocation success rates.

If there is enough memory, try the bottom-up allocation first,
starting at 4GB to exclude any possible interference with the DMA and
DMA32 zones. On smaller machines, or if that allocation fails, stick
with the old top-down behavior.

16GB VM, 2GB cma area:
With this patch:
[    0.000000] Command line: root=/dev/vda3 rootflags=subvol=/root systemd.unified_cgroup_hierarchy=1 enforcing=0 console=ttyS0,115200 hugetlb_cma=2G
[    0.002928] hugetlb_cma: reserve 2048 MiB, up to 2048 MiB per node
[    0.002930] cma: Reserved 2048 MiB at 0x0000000100000000
[    0.002931] hugetlb_cma: reserved 2048 MiB on node 0

Without this patch:
[    0.000000] Command line: root=/dev/vda3 rootflags=subvol=/root systemd.unified_cgroup_hierarchy=1 enforcing=0 console=ttyS0,115200 hugetlb_cma=2G
[    0.002930] hugetlb_cma: reserve 2048 MiB, up to 2048 MiB per node
[    0.002933] cma: Reserved 2048 MiB at 0x00000003c0000000
[    0.002934] hugetlb_cma: reserved 2048 MiB on node 0
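
Here 0x100000000 is exactly the 4GB boundary, i.e. with the patch the
cma area lands directly above the DMA32 range, while without it the
area is carved out at 0x3c0000000 (15GB), close to the end of memory.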

v2:
  - switched to memblock_set_bottom_up(true), by Mike
  - start with 4GB, by Mike

Signed-off-by: Roman Gushchin <guro@fb.com>
---
 mm/cma.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/cma.c b/mm/cma.c
index 7f415d7cda9f..21fd40c092f0 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -337,6 +337,22 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
 			limit = highmem_start;
 		}
 
+		/*
+		 * If there is enough memory, try a bottom-up allocation first.
+		 * It will place the new cma area close to the start of the node
+		 * and guarantee that compaction moves pages out of the cma area
+		 * and not into it.
+		 * Avoid using the first 4GB so as not to interfere with
+		 * constrained zones like DMA/DMA32.
+		 */
+		if (!memblock_bottom_up() &&
+		    memblock_end >= SZ_4G + size) {
+			memblock_set_bottom_up(true);
+			addr = memblock_alloc_range_nid(size, alignment, SZ_4G,
+							limit, nid, true);
+			memblock_set_bottom_up(false);
+		}
+
 		if (!addr) {
 			addr = memblock_alloc_range_nid(size, alignment, base,
 					limit, nid, true);
-- 
2.26.2
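
For context, a hedged sketch of how a caller such as
hugetlb_cma_reserve() ends up on this path: it passes base == 0 and
fixed == false, leaving the placement to cma_declare_contiguous_nid().
The caller below is hypothetical and simplified, not the actual
hugetlb code:

#include <linux/cma.h>
#include <linux/sizes.h>

static struct cma *demo_cma;	/* hypothetical example area */

static int __init demo_cma_setup(int nid)
{
	/*
	 * base == 0 and fixed == false: the kernel chooses the placement
	 * itself, which is exactly the case this patch changes (bottom-up
	 * above 4GB first, with the old top-down behavior as the fallback).
	 */
	return cma_declare_contiguous_nid(0, SZ_2G, 0, 0, 0, false,
					  "demo", &demo_cma, nid);
}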

