* incoming
@ 2020-06-03 22:55 Andrew Morton
  2020-06-03 22:56 ` [patch 001/131] mm/slub: fix a memory leak in sysfs_slab_add() Andrew Morton
                   ` (131 more replies)
  0 siblings, 132 replies; 349+ messages in thread

From: Andrew Morton @ 2020-06-03 22:55 UTC (permalink / raw)
To: Linus Torvalds; +Cc: mm-commits, linux-mm

More mm/ work, plenty more to come.

131 patches, based on d6f9469a03d832dcd17041ed67774ffb5f3e73b3.

Subsystems affected by this patch series:

  mm/slub
  mm/memcg
  mm/gup
  mm/kasan
  mm/pagealloc
  mm/hugetlb
  mm/vmscan
  mm/tools
  mm/mempolicy
  mm/memblock
  mm/hugetlbfs
  mm/thp
  mm/mmap
  mm/kconfig

Subsystem: mm/slub

    Wang Hai <wanghai38@huawei.com>:
      mm/slub: fix a memory leak in sysfs_slab_add()

Subsystem: mm/memcg

    Shakeel Butt <shakeelb@google.com>:
      mm/memcg: optimize memory.numa_stat like memory.stat

Subsystem: mm/gup

    John Hubbard <jhubbard@nvidia.com>:
    Patch series "mm/gup, drm/i915: refactor gup_fast, convert to pin_user_pages()", v2:
      mm/gup: move __get_user_pages_fast() down a few lines in gup.c
      mm/gup: refactor and de-duplicate gup_fast() code
      mm/gup: introduce pin_user_pages_fast_only()
      drm/i915: convert get_user_pages() --> pin_user_pages()
      mm/gup: might_lock_read(mmap_sem) in get_user_pages_fast()

Subsystem: mm/kasan

    Daniel Axtens <dja@axtens.net>:
    Patch series "Fix some incompatibilites between KASAN and FORTIFY_SOURCE", v4:
      kasan: stop tests being eliminated as dead code with FORTIFY_SOURCE
      string.h: fix incompatibility between FORTIFY_SOURCE and KASAN

Subsystem: mm/pagealloc

    Michal Hocko <mhocko@suse.com>:
      mm: clarify __GFP_MEMALLOC usage

    Mike Rapoport <rppt@linux.ibm.com>:
    Patch series "mm: rework free_area_init*() funcitons":
      mm: memblock: replace dereferences of memblock_region.nid with API calls
      mm: make early_pfn_to_nid() and related defintions close to each other
      mm: remove CONFIG_HAVE_MEMBLOCK_NODE_MAP option
      mm: free_area_init: use maximal zone PFNs rather than zone sizes
      mm: use free_area_init() instead of free_area_init_nodes()
      alpha: simplify detection of memory zone boundaries
      arm: simplify detection of memory zone boundaries
      arm64: simplify detection of memory zone boundaries for UMA configs
      csky: simplify detection of memory zone boundaries
      m68k: mm: simplify detection of memory zone boundaries
      parisc: simplify detection of memory zone boundaries
      sparc32: simplify detection of memory zone boundaries
      unicore32: simplify detection of memory zone boundaries
      xtensa: simplify detection of memory zone boundaries

    Baoquan He <bhe@redhat.com>:
      mm: memmap_init: iterate over memblock regions rather that check each PFN

    Mike Rapoport <rppt@linux.ibm.com>:
      mm: remove early_pfn_in_nid() and CONFIG_NODES_SPAN_OTHER_NODES
      mm: free_area_init: allow defining max_zone_pfn in descending order
      mm: rename free_area_init_node() to free_area_init_memoryless_node()
      mm: clean up free_area_init_node() and its helpers
      mm: simplify find_min_pfn_with_active_regions()
      docs/vm: update memory-models documentation

    Wei Yang <richard.weiyang@gmail.com>:
    Patch series "mm/page_alloc.c: cleanup on check page", v3:
      mm/page_alloc.c: bad_[reason|flags] is not necessary when PageHWPoison
      mm/page_alloc.c: bad_flags is not necessary for bad_page()
      mm/page_alloc.c: rename free_pages_check_bad() to check_free_page_bad()
      mm/page_alloc.c: rename free_pages_check() to check_free_page()
      mm/page_alloc.c: extract check_[new|free]_page_bad() common part to page_bad_reason()

    Roman Gushchin <guro@fb.com>:
      mm,page_alloc,cma: conditionally prefer cma pageblocks for movable allocations

    Baoquan He <bhe@redhat.com>:
      mm/page_alloc.c: remove unused free_bootmem_with_active_regions
    Patch series "improvements about lowmem_reserve and /proc/zoneinfo", v2:
      mm/page_alloc.c: only tune sysctl_lowmem_reserve_ratio value once when changing it
      mm/page_alloc.c: clear out zone->lowmem_reserve[] if the zone is empty
      mm/vmstat.c: do not show lowmem reserve protection information of empty zone

    Joonsoo Kim <iamjoonsoo.kim@lge.com>:
    Patch series "integrate classzone_idx and high_zoneidx", v5:
      mm/page_alloc: use ac->high_zoneidx for classzone_idx
      mm/page_alloc: integrate classzone_idx and high_zoneidx

    Wei Yang <richard.weiyang@gmail.com>:
      mm/page_alloc.c: use NODE_MASK_NONE in build_zonelists()
      mm: rename gfpflags_to_migratetype to gfp_migratetype for same convention

    Sandipan Das <sandipan@linux.ibm.com>:
      mm/page_alloc.c: reset numa stats for boot pagesets

    Charan Teja Reddy <charante@codeaurora.org>:
      mm, page_alloc: reset the zone->watermark_boost early

    Anshuman Khandual <anshuman.khandual@arm.com>:
      mm/page_alloc: restrict and formalize compound_page_dtors[]

    Daniel Jordan <daniel.m.jordan@oracle.com>:
    Patch series "initialize deferred pages with interrupts enabled", v4:
      mm/pagealloc.c: call touch_nmi_watchdog() on max order boundaries in deferred init

    Pavel Tatashin <pasha.tatashin@soleen.com>:
      mm: initialize deferred pages with interrupts enabled
      mm: call cond_resched() from deferred_init_memmap()

    Daniel Jordan <daniel.m.jordan@oracle.com>:
    Patch series "padata: parallelize deferred page init", v3:
      padata: remove exit routine
      padata: initialize earlier
      padata: allocate work structures for parallel jobs from a pool
      padata: add basic support for multithreaded jobs
      mm: don't track number of pages during deferred initialization
      mm: parallelize deferred_init_memmap()
      mm: make deferred init's max threads arch-specific
      padata: document multithreaded jobs

    Chen Tao <chentao107@huawei.com>:
      mm/page_alloc.c: add missing newline

Subsystem: mm/hugetlb

    "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>:
    Patch series "thp/khugepaged improvements and CoW semantics", v4:
      khugepaged: add self test
      khugepaged: do not stop collapse if less than half PTEs are referenced
      khugepaged: drain all LRU caches before scanning pages
      khugepaged: drain LRU add pagevec after swapin
      khugepaged: allow to collapse a page shared across fork
      khugepaged: allow to collapse PTE-mapped compound pages
      thp: change CoW semantics for anon-THP
      khugepaged: introduce 'max_ptes_shared' tunable

    Mike Kravetz <mike.kravetz@oracle.com>:
    Patch series "Clean up hugetlb boot command line processing", v4:
      hugetlbfs: add arch_hugetlb_valid_size
      hugetlbfs: move hugepagesz= parsing to arch independent code
      hugetlbfs: remove hugetlb_add_hstate() warning for existing hstate
      hugetlbfs: clean up command line processing
      hugetlbfs: fix changes to command line processing

    Li Xinhai <lixinhai.lxh@gmail.com>:
      mm/hugetlb: avoid unnecessary check on pud and pmd entry in huge_pte_offset

    Anshuman Khandual <anshuman.khandual@arm.com>:
    Patch series "mm/hugetlb: Add some new generic fallbacks", v3:
      arm64/mm: drop __HAVE_ARCH_HUGE_PTEP_GET
      mm/hugetlb: define a generic fallback for is_hugepage_only_range()
      mm/hugetlb: define a generic fallback for arch_clear_hugepage_flags()

    "Matthew Wilcox (Oracle)" <willy@infradead.org>:
      mm: simplify calling a compound page destructor

Subsystem: mm/vmscan

    Wei Yang <richard.weiyang@gmail.com>:
      mm/vmscan.c: use update_lru_size() in update_lru_sizes()

    Jaewon Kim <jaewon31.kim@samsung.com>:
      mm/vmscan: count layzfree pages and fix nr_isolated_* mismatch

    Maninder Singh <maninder1.s@samsung.com>:
      mm/vmscan.c: change prototype for shrink_page_list

    Qiwu Chen <qiwuchen55@gmail.com>:
      mm/vmscan: update the comment of should_continue_reclaim()

    Johannes Weiner <hannes@cmpxchg.org>:
    Patch series "mm: memcontrol: charge swapin pages on instantiation", v2:
      mm: fix NUMA node file count error in replace_page_cache()
      mm: memcontrol: fix stat-corrupting race in charge moving
      mm: memcontrol: drop @compound parameter from memcg charging API
      mm: shmem: remove rare optimization when swapin races with hole punching
      mm: memcontrol: move out cgroup swaprate throttling
      mm: memcontrol: convert page cache to a new mem_cgroup_charge() API
      mm: memcontrol: prepare uncharging for removal of private page type counters
      mm: memcontrol: prepare move_account for removal of private page type counters
      mm: memcontrol: prepare cgroup vmstat infrastructure for native anon counters
      mm: memcontrol: switch to native NR_FILE_PAGES and NR_SHMEM counters
      mm: memcontrol: switch to native NR_ANON_MAPPED counter
      mm: memcontrol: switch to native NR_ANON_THPS counter
      mm: memcontrol: convert anon and file-thp to new mem_cgroup_charge() API
      mm: memcontrol: drop unused try/commit/cancel charge API
      mm: memcontrol: prepare swap controller setup for integration
      mm: memcontrol: make swap tracking an integral part of memory control
      mm: memcontrol: charge swapin pages on instantiation

    Alex Shi <alex.shi@linux.alibaba.com>:
      mm: memcontrol: document the new swap control behavior

    Johannes Weiner <hannes@cmpxchg.org>:
      mm: memcontrol: delete unused lrucare handling
      mm: memcontrol: update page->mem_cgroup stability rules
      mm: fix LRU balancing effect of new transparent huge pages
      mm: keep separate anon and file statistics on page reclaim activity
      mm: allow swappiness that prefers reclaiming anon over the file workingset
      mm: fold and remove lru_cache_add_anon() and lru_cache_add_file()
      mm: workingset: let cache workingset challenge anon
      mm: remove use-once cache bias from LRU balancing
      mm: vmscan: drop unnecessary div0 avoidance rounding in get_scan_count()
      mm: base LRU balancing on an explicit cost model
      mm: deactivations shouldn't bias the LRU balance
      mm: only count actual rotations as LRU reclaim cost
      mm: balance LRU lists based on relative thrashing
      mm: vmscan: determine anon/file pressure balance at the reclaim root
      mm: vmscan: reclaim writepage is IO cost
      mm: vmscan: limit the range of LRU type balancing

    Shakeel Butt <shakeelb@google.com>:
      mm: swap: fix vmstats for huge pages
      mm: swap: memcg: fix memcg stats for huge pages

Subsystem: mm/tools

    Changhee Han <ch0.han@lge.com>:
      tools/vm/page_owner_sort.c: filter out unneeded line

Subsystem: mm/mempolicy

    Michal Hocko <mhocko@suse.com>:
      mm, mempolicy: fix up gup usage in lookup_node

Subsystem: mm/memblock

    chenqiwu <chenqiwu@xiaomi.com>:
      include/linux/memblock.h: fix minor typo and unclear comment

    Mike Rapoport <rppt@linux.ibm.com>:
      sparc32: register memory occupied by kernel as memblock.memory

Subsystem: mm/hugetlbfs

    Shijie Hu <hushijie3@huawei.com>:
      hugetlbfs: get unmapped area below TASK_UNMAPPED_BASE for hugetlbfs

Subsystem: mm/thp

    Yang Shi <yang.shi@linux.alibaba.com>:
      mm: thp: don't need to drain lru cache when splitting and mlocking THP

    Anshuman Khandual <anshuman.khandual@arm.com>:
    Patch series "mm/thp: Rename pmd_mknotpresent() as pmd_mknotvalid()", v2:
      powerpc/mm: drop platform defined pmd_mknotpresent()
      mm/thp: rename pmd_mknotpresent() as pmd_mkinvalid()

Subsystem: mm/mmap

    Scott Cheloha <cheloha@linux.vnet.ibm.com>:
      drivers/base/memory.c: cache memory blocks in xarray to accelerate lookup

Subsystem: mm/kconfig

    Zong Li <zong.li@sifive.com>:
    Patch series "Extract DEBUG_WX to shared use":
      mm: add DEBUG_WX support
      riscv: support DEBUG_WX
      x86: mm: use ARCH_HAS_DEBUG_WX instead of arch defined
      arm64: mm: use ARCH_HAS_DEBUG_WX instead of arch defined

 Documentation/admin-guide/cgroup-v1/memory.rst | 19
 Documentation/admin-guide/kernel-parameters.txt | 40
 Documentation/admin-guide/mm/hugetlbpage.rst | 35
 Documentation/admin-guide/mm/transhuge.rst | 7
 Documentation/admin-guide/sysctl/vm.rst | 23
 Documentation/core-api/padata.rst | 41
 Documentation/features/vm/numa-memblock/arch-support.txt | 34
 Documentation/vm/memory-model.rst | 9
 Documentation/vm/page_owner.rst | 3
 arch/alpha/mm/init.c | 16
 arch/alpha/mm/numa.c | 22
 arch/arc/include/asm/hugepage.h | 2
 arch/arc/mm/init.c | 41
 arch/arm/include/asm/hugetlb.h | 7
 arch/arm/include/asm/pgtable-3level.h | 2
 arch/arm/mm/init.c | 66
 arch/arm64/Kconfig | 2
 arch/arm64/Kconfig.debug | 29
 arch/arm64/include/asm/hugetlb.h | 13
 arch/arm64/include/asm/pgtable.h | 2
 arch/arm64/mm/hugetlbpage.c | 48
 arch/arm64/mm/init.c | 56
 arch/arm64/mm/numa.c | 9
 arch/c6x/mm/init.c | 8
 arch/csky/kernel/setup.c | 26
 arch/h8300/mm/init.c | 6
 arch/hexagon/mm/init.c | 6
 arch/ia64/Kconfig | 1
 arch/ia64/include/asm/hugetlb.h | 5
 arch/ia64/mm/contig.c | 2
 arch/ia64/mm/discontig.c | 2
 arch/m68k/mm/init.c | 6
 arch/m68k/mm/mcfmmu.c | 9
 arch/m68k/mm/motorola.c | 15
 arch/m68k/mm/sun3mmu.c | 10
 arch/microblaze/Kconfig | 1
 arch/microblaze/mm/init.c | 2
 arch/mips/Kconfig | 1
 arch/mips/include/asm/hugetlb.h | 11
 arch/mips/include/asm/pgtable.h | 2
 arch/mips/loongson64/numa.c | 2
 arch/mips/mm/init.c | 2
 arch/mips/sgi-ip27/ip27-memory.c | 2
 arch/nds32/mm/init.c | 11
 arch/nios2/mm/init.c | 8
 arch/openrisc/mm/init.c | 9
 arch/parisc/include/asm/hugetlb.h | 10
 arch/parisc/mm/init.c | 22
 arch/powerpc/Kconfig | 10
 arch/powerpc/include/asm/book3s/64/pgtable.h | 4
 arch/powerpc/include/asm/hugetlb.h | 5
 arch/powerpc/mm/hugetlbpage.c | 38
 arch/powerpc/mm/mem.c | 2
 arch/riscv/Kconfig | 2
 arch/riscv/include/asm/hugetlb.h | 10
 arch/riscv/include/asm/ptdump.h | 11
 arch/riscv/mm/hugetlbpage.c | 44
 arch/riscv/mm/init.c | 5
 arch/s390/Kconfig | 1
 arch/s390/include/asm/hugetlb.h | 8
 arch/s390/mm/hugetlbpage.c | 34
 arch/s390/mm/init.c | 2
 arch/sh/Kconfig | 1
 arch/sh/include/asm/hugetlb.h | 7
 arch/sh/mm/init.c | 2
 arch/sparc/Kconfig | 10
 arch/sparc/include/asm/hugetlb.h | 10
 arch/sparc/mm/init_32.c | 1
 arch/sparc/mm/init_64.c | 67
 arch/sparc/mm/srmmu.c | 21
 arch/um/kernel/mem.c | 12
 arch/unicore32/include/asm/memory.h | 2
 arch/unicore32/include/mach/memory.h | 6
 arch/unicore32/kernel/pci.c | 14
 arch/unicore32/mm/init.c | 43
 arch/x86/Kconfig | 11
 arch/x86/Kconfig.debug | 27
 arch/x86/include/asm/hugetlb.h | 10
 arch/x86/include/asm/pgtable.h | 2
 arch/x86/mm/hugetlbpage.c | 35
 arch/x86/mm/init.c | 2
 arch/x86/mm/init_64.c | 12
 arch/x86/mm/kmmio.c | 2
 arch/x86/mm/numa.c | 11
 arch/xtensa/mm/init.c | 8
 drivers/base/memory.c | 44
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 22
 fs/cifs/file.c | 10
 fs/fuse/dev.c | 2
 fs/hugetlbfs/inode.c | 67
 include/asm-generic/hugetlb.h | 2
 include/linux/compaction.h | 9
 include/linux/gfp.h | 7
 include/linux/hugetlb.h | 16
 include/linux/memblock.h | 15
 include/linux/memcontrol.h | 102 -
 include/linux/mm.h | 52
 include/linux/mmzone.h | 46
 include/linux/padata.h | 43
 include/linux/string.h | 60
 include/linux/swap.h | 17
 include/linux/vm_event_item.h | 4
 include/linux/vmstat.h | 2
 include/trace/events/compaction.h | 22
 include/trace/events/huge_memory.h | 3
 include/trace/events/vmscan.h | 14
 init/Kconfig | 17
 init/main.c | 2
 kernel/events/uprobes.c | 22
 kernel/padata.c | 293 +++-
 kernel/sysctl.c | 3
 lib/test_kasan.c | 29
 mm/Kconfig | 9
 mm/Kconfig.debug | 32
 mm/compaction.c | 70 -
 mm/filemap.c | 55
 mm/gup.c | 237 ++-
 mm/huge_memory.c | 282 ----
 mm/hugetlb.c | 260 ++-
 mm/internal.h | 25
 mm/khugepaged.c | 316 ++--
 mm/memblock.c | 19
 mm/memcontrol.c | 642 +++------
 mm/memory.c | 103 -
 mm/memory_hotplug.c | 10
 mm/mempolicy.c | 5
 mm/migrate.c | 30
 mm/oom_kill.c | 4
 mm/page_alloc.c | 735 ++++------
 mm/page_owner.c | 7
 mm/pgtable-generic.c | 2
 mm/rmap.c | 53
 mm/shmem.c | 156 --
 mm/slab.c | 4
 mm/slub.c | 8
 mm/swap.c | 199 +-
 mm/swap_cgroup.c | 10
 mm/swap_state.c | 110 -
 mm/swapfile.c | 39
 mm/userfaultfd.c | 15
 mm/vmscan.c | 344 ++--
 mm/vmstat.c | 16
 mm/workingset.c | 23
 tools/testing/selftests/vm/.gitignore | 1
 tools/testing/selftests/vm/Makefile | 1
 tools/testing/selftests/vm/khugepaged.c | 1035 +++++++++++++++
 tools/vm/page_owner_sort.c | 5

 147 files changed, 3876 insertions(+), 3108 deletions(-)

^ permalink raw reply	[flat|nested] 349+ messages in thread
* [patch 001/131] mm/slub: fix a memory leak in sysfs_slab_add()
  2020-06-03 22:55 incoming Andrew Morton
@ 2020-06-03 22:56 ` Andrew Morton
  2020-06-03 22:56 ` [patch 002/131] mm/memcg: optimize memory.numa_stat like memory.stat Andrew Morton
                   ` (130 subsequent siblings)
  131 siblings, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2020-06-03 22:56 UTC (permalink / raw)
To: akpm, cl, hulkci, iamjoonsoo.kim, linux-mm, mm-commits, penberg,
	rientjes, torvalds, wanghai38

From: Wang Hai <wanghai38@huawei.com>
Subject: mm/slub: fix a memory leak in sysfs_slab_add()

syzkaller reports a memory leak when kobject_init_and_add() returns an
error in sysfs_slab_add() [1].  When that happens, kobject_put() is not
called for the corresponding kobject, which potentially leads to a
memory leak.

This patch fixes the issue by calling kobject_put() even if
kobject_init_and_add() fails.

[1]
BUG: memory leak
unreferenced object 0xffff8880a6d4be88 (size 8):
  comm "syz-executor.3", pid 946, jiffies 4295772514 (age 18.396s)
  hex dump (first 8 bytes):
    70 69 64 5f 33 00 ff ff                          pid_3...
  backtrace:
    [<00000000a0980095>] kstrdup+0x35/0x70 mm/util.c:60
    [<00000000ef0cff3f>] kstrdup_const+0x3d/0x50 mm/util.c:82
    [<00000000e2461486>] kvasprintf_const+0x112/0x170 lib/kasprintf.c:48
    [<000000005d749e93>] kobject_set_name_vargs+0x55/0x130 lib/kobject.c:289
    [<0000000094e31519>] kobject_add_varg lib/kobject.c:384 [inline]
    [<0000000094e31519>] kobject_init_and_add+0xd8/0x170 lib/kobject.c:473
    [<0000000060f13e32>] sysfs_slab_add+0x1d8/0x290 mm/slub.c:5811
    [<00000000fe1d9a22>] __kmem_cache_create+0x50a/0x570 mm/slub.c:4384
    [<000000006a71a1b4>] create_cache+0x113/0x1e0 mm/slab_common.c:407
    [<0000000089491438>] kmem_cache_create_usercopy+0x1a1/0x260 mm/slab_common.c:505
    [<000000008c992595>] kmem_cache_create+0xd/0x10 mm/slab_common.c:564
    [<000000005320c4b6>] create_pid_cachep kernel/pid_namespace.c:54 [inline]
    [<000000005320c4b6>] create_pid_namespace kernel/pid_namespace.c:96 [inline]
    [<000000005320c4b6>] copy_pid_ns+0x77c/0x8f0 kernel/pid_namespace.c:148
    [<00000000fc8e1a2b>] create_new_namespaces+0x26b/0xa30 kernel/nsproxy.c:95
    [<0000000080f0c9a5>] unshare_nsproxy_namespaces+0xa7/0x1e0 kernel/nsproxy.c:229
    [<0000000007e05aea>] ksys_unshare+0x3d2/0x770 kernel/fork.c:2969
    [<00000000e04c8e4b>] __do_sys_unshare kernel/fork.c:3037 [inline]
    [<00000000e04c8e4b>] __se_sys_unshare kernel/fork.c:3035 [inline]
    [<00000000e04c8e4b>] __x64_sys_unshare+0x2d/0x40 kernel/fork.c:3035
    [<000000005c4707c7>] do_syscall_64+0xa1/0x530 arch/x86/entry/common.c:295

Link: http://lkml.kernel.org/r/20200602115033.1054-1-wanghai38@huawei.com
Fixes: 80da026a8e5d ("mm/slub: fix slab double-free in case of duplicate sysfs filename")
Signed-off-by: Wang Hai <wanghai38@huawei.com>
Reported-by: Hulk Robot <hulkci@huawei.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/slub.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

--- a/mm/slub.c~mm-slub-fix-a-memory-leak-in-sysfs_slab_add
+++ a/mm/slub.c
@@ -5835,8 +5835,10 @@ static int sysfs_slab_add(struct kmem_ca
 	s->kobj.kset = kset;
 	err = kobject_init_and_add(&s->kobj, &slab_ktype, NULL, "%s", name);
-	if (err)
+	if (err) {
+		kobject_put(&s->kobj);
 		goto out;
+	}
 
 	err = sysfs_create_group(&s->kobj, &slab_attr_group);
 	if (err)
_
* [patch 002/131] mm/memcg: optimize memory.numa_stat like memory.stat
  2020-06-03 22:55 incoming Andrew Morton
  2020-06-03 22:56 ` [patch 001/131] mm/slub: fix a memory leak in sysfs_slab_add() Andrew Morton
@ 2020-06-03 22:56 ` Andrew Morton
  2020-06-03 22:56 ` [patch 003/131] mm/gup: move __get_user_pages_fast() down a few lines in gup.c Andrew Morton
                   ` (129 subsequent siblings)
  131 siblings, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2020-06-03 22:56 UTC (permalink / raw)
To: akpm, guro, hannes, linux-mm, mhocko, mm-commits, shakeelb, torvalds

From: Shakeel Butt <shakeelb@google.com>
Subject: mm/memcg: optimize memory.numa_stat like memory.stat

Currently, reading memory.numa_stat traverses the underlying memcg tree
multiple times to accumulate the stats and present the hierarchical view
of the memcg tree.  However, the kernel already maintains the
hierarchical view of the stats and uses it in memory.stat.  Just use the
same mechanism in memory.numa_stat as well.

I ran a simple benchmark which reads root_mem_cgroup's memory.numa_stat
file in the presence of 10000 memcgs.
The results are:

Without the patch:
$ time cat /dev/cgroup/memory/memory.numa_stat > /dev/null
real    0m0.700s
user    0m0.001s
sys     0m0.697s

With the patch:
$ time cat /dev/cgroup/memory/memory.numa_stat > /dev/null
real    0m0.001s
user    0m0.001s
sys     0m0.000s

[akpm@linux-foundation.org: avoid forcing out-of-line code generation]
Link: http://lkml.kernel.org/r/20200304022058.248270-1-shakeelb@google.com
Signed-off-by: Shakeel Butt <shakeelb@google.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Roman Gushchin <guro@fb.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/memcontrol.c | 49 +++++++++++++++++++++++-----------------------
 1 file changed, 25 insertions(+), 24 deletions(-)

--- a/mm/memcontrol.c~memcg-optimize-memorynuma_stat-like-memorystat
+++ a/mm/memcontrol.c
@@ -3743,7 +3743,7 @@ static int mem_cgroup_move_charge_write(
 #define LRU_ALL	 ((1 << NR_LRU_LISTS) - 1)
 
 static unsigned long mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg,
-				int nid, unsigned int lru_mask)
+				int nid, unsigned int lru_mask, bool tree)
 {
 	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
 	unsigned long nr = 0;
@@ -3754,13 +3754,17 @@ static unsigned long mem_cgroup_node_nr_
 	for_each_lru(lru) {
 		if (!(BIT(lru) & lru_mask))
 			continue;
-		nr += lruvec_page_state_local(lruvec, NR_LRU_BASE + lru);
+		if (tree)
+			nr += lruvec_page_state(lruvec, NR_LRU_BASE + lru);
+		else
+			nr += lruvec_page_state_local(lruvec, NR_LRU_BASE + lru);
 	}
 	return nr;
 }
 
 static unsigned long mem_cgroup_nr_lru_pages(struct mem_cgroup *memcg,
-					     unsigned int lru_mask)
+					     unsigned int lru_mask,
+					     bool tree)
 {
 	unsigned long nr = 0;
 	enum lru_list lru;
@@ -3768,7 +3772,10 @@ static unsigned long mem_cgroup_nr_lru_p
 	for_each_lru(lru) {
 		if (!(BIT(lru) & lru_mask))
 			continue;
-		nr += memcg_page_state_local(memcg, NR_LRU_BASE + lru);
+		if (tree)
+			nr += memcg_page_state(memcg, NR_LRU_BASE + lru);
+		else
+			nr += memcg_page_state_local(memcg, NR_LRU_BASE + lru);
 	}
 	return nr;
 }
@@ -3788,34 +3795,28 @@ static int memcg_numa_stat_show(struct s
 	};
 	const struct numa_stat *stat;
 	int nid;
-	unsigned long nr;
 	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
 
 	for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
-		nr = mem_cgroup_nr_lru_pages(memcg, stat->lru_mask);
-		seq_printf(m, "%s=%lu", stat->name, nr);
-		for_each_node_state(nid, N_MEMORY) {
-			nr = mem_cgroup_node_nr_lru_pages(memcg, nid,
-							  stat->lru_mask);
-			seq_printf(m, " N%d=%lu", nid, nr);
-		}
+		seq_printf(m, "%s=%lu", stat->name,
+			   mem_cgroup_nr_lru_pages(memcg, stat->lru_mask,
+						   false));
+		for_each_node_state(nid, N_MEMORY)
+			seq_printf(m, " N%d=%lu", nid,
+				   mem_cgroup_node_nr_lru_pages(memcg, nid,
+							stat->lru_mask, false));
 		seq_putc(m, '\n');
 	}
 
 	for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
-		struct mem_cgroup *iter;
-
-		nr = 0;
-		for_each_mem_cgroup_tree(iter, memcg)
-			nr += mem_cgroup_nr_lru_pages(iter, stat->lru_mask);
-		seq_printf(m, "hierarchical_%s=%lu", stat->name, nr);
-		for_each_node_state(nid, N_MEMORY) {
-			nr = 0;
-			for_each_mem_cgroup_tree(iter, memcg)
-				nr += mem_cgroup_node_nr_lru_pages(
-						iter, nid, stat->lru_mask);
-			seq_printf(m, " N%d=%lu", nid, nr);
-		}
+		seq_printf(m, "hierarchical_%s=%lu", stat->name,
+			   mem_cgroup_nr_lru_pages(memcg, stat->lru_mask,
+						   true));
+		for_each_node_state(nid, N_MEMORY)
+			seq_printf(m, " N%d=%lu", nid,
+				   mem_cgroup_node_nr_lru_pages(memcg, nid,
+							stat->lru_mask, true));
 		seq_putc(m, '\n');
 	}
 
_
* [patch 003/131] mm/gup: move __get_user_pages_fast() down a few lines in gup.c
  2020-06-03 22:55 incoming Andrew Morton
  2020-06-03 22:56 ` [patch 001/131] mm/slub: fix a memory leak in sysfs_slab_add() Andrew Morton
  2020-06-03 22:56 ` [patch 002/131] mm/memcg: optimize memory.numa_stat like memory.stat Andrew Morton
@ 2020-06-03 22:56 ` Andrew Morton
  2020-06-04  1:51 ` John Hubbard
  2020-06-03 22:56 ` [patch 004/131] mm/gup: refactor and de-duplicate gup_fast() code Andrew Morton
                   ` (128 subsequent siblings)
  131 siblings, 1 reply; 349+ messages in thread

From: Andrew Morton @ 2020-06-03 22:56 UTC (permalink / raw)
To: airlied, akpm, chris, daniel, jani.nikula, jhubbard, joonas.lahtinen,
	jrdr.linux, linux-mm, matthew.auld, mm-commits, rodrigo.vivi,
	torvalds, tvrtko.ursulin, willy

From: John Hubbard <jhubbard@nvidia.com>
Subject: mm/gup: move __get_user_pages_fast() down a few lines in gup.c

Patch series "mm/gup, drm/i915: refactor gup_fast, convert to
pin_user_pages()", v2.

In order to convert the drm/i915 driver from get_user_pages() to
pin_user_pages(), a FOLL_PIN equivalent of __get_user_pages_fast() was
required.  That led to refactoring __get_user_pages_fast(), with the
following goals:

1) As above: provide a pin_user_pages*() routine for drm/i915 to call,
   in place of __get_user_pages_fast(),

2) Get rid of the gup.c duplicate code for walking page tables with
   interrupts disabled.  This duplicate code is a minor maintenance
   problem anyway.

3) Make it easy for an upcoming patch from Souptick, which aims to
   convert __get_user_pages_fast() to use a gup_flags argument, instead
   of a bool writeable arg.  Also, if this series looks good, we can
   ask Souptick to change the name as well, to whatever the consensus
   is.  My initial recommendation is: get_user_pages_fast_only(), to
   match the new pin_user_pages_only().

This patch (of 4):

This is in order to avoid a forward declaration of
internal_get_user_pages_fast(), in the next patch.
This is code movement only--all generated code should be identical.

Link: http://lkml.kernel.org/r/20200522051931.54191-1-jhubbard@nvidia.com
Link: http://lkml.kernel.org/r/20200519002124.2025955-1-jhubbard@nvidia.com
Link: http://lkml.kernel.org/r/20200519002124.2025955-2-jhubbard@nvidia.com
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: David Airlie <airlied@linux.ie>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: "Joonas Lahtinen" <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Souptick Joarder <jrdr.linux@gmail.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/gup.c | 132 ++++++++++++++++++++++++++---------------------------
 1 file changed, 66 insertions(+), 66 deletions(-)

--- a/mm/gup.c~mm-gup-move-__get_user_pages_fast-down-a-few-lines-in-gupc
+++ a/mm/gup.c
@@ -2703,72 +2703,6 @@ static bool gup_fast_permitted(unsigned
 }
 #endif
 
-/*
- * Like get_user_pages_fast() except it's IRQ-safe in that it won't fall back to
- * the regular GUP.
- * Note a difference with get_user_pages_fast: this always returns the
- * number of pages pinned, 0 if no pages were pinned.
- *
- * If the architecture does not support this function, simply return with no
- * pages pinned.
- *
- * Careful, careful! COW breaking can go either way, so a non-write
- * access can get ambiguous page results. If you call this function without
- * 'write' set, you'd better be sure that you're ok with that ambiguity.
- */
-int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
-			  struct page **pages)
-{
-	unsigned long len, end;
-	unsigned long flags;
-	int nr_pinned = 0;
-	/*
-	 * Internally (within mm/gup.c), gup fast variants must set FOLL_GET,
-	 * because gup fast is always a "pin with a +1 page refcount" request.
-	 */
-	unsigned int gup_flags = FOLL_GET;
-
-	if (write)
-		gup_flags |= FOLL_WRITE;
-
-	start = untagged_addr(start) & PAGE_MASK;
-	len = (unsigned long) nr_pages << PAGE_SHIFT;
-	end = start + len;
-
-	if (end <= start)
-		return 0;
-	if (unlikely(!access_ok((void __user *)start, len)))
-		return 0;
-
-	/*
-	 * Disable interrupts. We use the nested form as we can already have
-	 * interrupts disabled by get_futex_key.
-	 *
-	 * With interrupts disabled, we block page table pages from being
-	 * freed from under us. See struct mmu_table_batch comments in
-	 * include/asm-generic/tlb.h for more details.
-	 *
-	 * We do not adopt an rcu_read_lock(.) here as we also want to
-	 * block IPIs that come from THPs splitting.
-	 *
-	 * NOTE! We allow read-only gup_fast() here, but you'd better be
-	 * careful about possible COW pages. You'll get _a_ COW page, but
-	 * not necessarily the one you intended to get depending on what
-	 * COW event happens after this. COW may break the page copy in a
-	 * random direction.
-	 */
-
-	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
-	    gup_fast_permitted(start, end)) {
-		local_irq_save(flags);
-		gup_pgd_range(start, end, gup_flags, pages, &nr_pinned);
-		local_irq_restore(flags);
-	}
-
-	return nr_pinned;
-}
-EXPORT_SYMBOL_GPL(__get_user_pages_fast);
-
 static int __gup_longterm_unlocked(unsigned long start, int nr_pages,
 				   unsigned int gup_flags,
 				   struct page **pages)
 {
@@ -2848,6 +2782,72 @@ static int internal_get_user_pages_fast(
 	return ret;
 }
 
+/*
+ * Like get_user_pages_fast() except it's IRQ-safe in that it won't fall back to
+ * the regular GUP.
+ * Note a difference with get_user_pages_fast: this always returns the
+ * number of pages pinned, 0 if no pages were pinned.
+ *
+ * If the architecture does not support this function, simply return with no
+ * pages pinned.
+ *
+ * Careful, careful! COW breaking can go either way, so a non-write
+ * access can get ambiguous page results. If you call this function without
+ * 'write' set, you'd better be sure that you're ok with that ambiguity.
+ */
+int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
+			  struct page **pages)
+{
+	unsigned long len, end;
+	unsigned long flags;
+	int nr_pinned = 0;
+	/*
+	 * Internally (within mm/gup.c), gup fast variants must set FOLL_GET,
+	 * because gup fast is always a "pin with a +1 page refcount" request.
+	 */
+	unsigned int gup_flags = FOLL_GET;
+
+	if (write)
+		gup_flags |= FOLL_WRITE;
+
+	start = untagged_addr(start) & PAGE_MASK;
+	len = (unsigned long) nr_pages << PAGE_SHIFT;
+	end = start + len;
+
+	if (end <= start)
+		return 0;
+	if (unlikely(!access_ok((void __user *)start, len)))
+		return 0;
+
+	/*
+	 * Disable interrupts. We use the nested form as we can already have
+	 * interrupts disabled by get_futex_key.
+	 *
+	 * With interrupts disabled, we block page table pages from being
+	 * freed from under us. See struct mmu_table_batch comments in
+	 * include/asm-generic/tlb.h for more details.
+	 *
+	 * We do not adopt an rcu_read_lock(.) here as we also want to
+	 * block IPIs that come from THPs splitting.
+	 *
+	 * NOTE! We allow read-only gup_fast() here, but you'd better be
+	 * careful about possible COW pages. You'll get _a_ COW page, but
+	 * not necessarily the one you intended to get depending on what
+	 * COW event happens after this. COW may break the page copy in a
+	 * random direction.
+	 */
+
+	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
+	    gup_fast_permitted(start, end)) {
+		local_irq_save(flags);
+		gup_pgd_range(start, end, gup_flags, pages, &nr_pinned);
+		local_irq_restore(flags);
+	}
+
+	return nr_pinned;
+}
+EXPORT_SYMBOL_GPL(__get_user_pages_fast);
+
 /**
  * get_user_pages_fast() - pin user pages in memory
  * @start:	starting user address
_
* Re: [patch 003/131] mm/gup: move __get_user_pages_fast() down a few lines in gup.c
  2020-06-03 22:56 ` [patch 003/131] mm/gup: move __get_user_pages_fast() down a few lines in gup.c Andrew Morton
@ 2020-06-04  1:51 ` John Hubbard
  0 siblings, 0 replies; 349+ messages in thread

From: John Hubbard @ 2020-06-04 1:51 UTC (permalink / raw)
To: Andrew Morton, airlied, chris, daniel, jani.nikula, joonas.lahtinen,
	jrdr.linux, linux-mm, matthew.auld, mm-commits, rodrigo.vivi,
	torvalds, tvrtko.ursulin, willy

On 2020-06-03 15:56, Andrew Morton wrote:
> From: John Hubbard <jhubbard@nvidia.com>
> Subject: mm/gup: move __get_user_pages_fast() down a few lines in gup.c
>
> Patch series "mm/gup, drm/i915: refactor gup_fast, convert to pin_user_pages()", v2.

These patches 003 through 007 (gup refactoring and pin_user_pages stuff)
all look good.

Thanks for fixing up the merge conflicts with commit 17839856fd58 ("gup:
document and work around "COW can break either way" issue").  I wasn't
aware of that commit until the -next conflict email showed up in my
inbox this morning.

thanks,
--
John Hubbard
NVIDIA

>
> In order to convert the drm/i915 driver from get_user_pages() to
> pin_user_pages(), a FOLL_PIN equivalent of __get_user_pages_fast() was
> required.  That led to refactoring __get_user_pages_fast(), with the
> following goals:
>
> 1) As above: provide a pin_user_pages*() routine for drm/i915 to call,
>    in place of __get_user_pages_fast(),
>
> 2) Get rid of the gup.c duplicate code for walking page tables with
>    interrupts disabled.  This duplicate code is a minor maintenance
>    problem anyway.
>
> 3) Make it easy for an upcoming patch from Souptick, which aims to
>    convert __get_user_pages_fast() to use a gup_flags argument, instead
>    of a bool writeable arg.  Also, if this series looks good, we can
>    ask Souptick to change the name as well, to whatever the consensus
>    is.  My initial recommendation is: get_user_pages_fast_only(), to
>    match the new pin_user_pages_only().
> > > This patch (of 4): > > This is in order to avoid a forward declaration of > internal_get_user_pages_fast(), in the next patch. > > This is code movement only--all generated code should be identical. > > Link: http://lkml.kernel.org/r/20200522051931.54191-1-jhubbard@nvidia.com > Link: http://lkml.kernel.org/r/20200519002124.2025955-1-jhubbard@nvidia.com > Link: http://lkml.kernel.org/r/20200519002124.2025955-2-jhubbard@nvidia.com > Signed-off-by: John Hubbard <jhubbard@nvidia.com> > Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> > Cc: Daniel Vetter <daniel@ffwll.ch> > Cc: David Airlie <airlied@linux.ie> > Cc: Jani Nikula <jani.nikula@linux.intel.com> > Cc: "Joonas Lahtinen" <joonas.lahtinen@linux.intel.com> > Cc: Matthew Auld <matthew.auld@intel.com> > Cc: Matthew Wilcox <willy@infradead.org> > Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> > Cc: Souptick Joarder <jrdr.linux@gmail.com> > Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> > Signed-off-by: Andrew Morton <akpm@linux-foundation.org> > --- > > mm/gup.c | 132 ++++++++++++++++++++++++++--------------------------- > 1 file changed, 66 insertions(+), 66 deletions(-) > > --- a/mm/gup.c~mm-gup-move-__get_user_pages_fast-down-a-few-lines-in-gupc > +++ a/mm/gup.c > @@ -2703,72 +2703,6 @@ static bool gup_fast_permitted(unsigned > } > #endif > > -/* > - * Like get_user_pages_fast() except it's IRQ-safe in that it won't fall back to > - * the regular GUP. > - * Note a difference with get_user_pages_fast: this always returns the > - * number of pages pinned, 0 if no pages were pinned. > - * > - * If the architecture does not support this function, simply return with no > - * pages pinned. > - * > - * Careful, careful! COW breaking can go either way, so a non-write > - * access can get ambiguous page results. If you call this function without > - * 'write' set, you'd better be sure that you're ok with that ambiguity. 
> - */ > -int __get_user_pages_fast(unsigned long start, int nr_pages, int write, > - struct page **pages) > -{ > - unsigned long len, end; > - unsigned long flags; > - int nr_pinned = 0; > - /* > - * Internally (within mm/gup.c), gup fast variants must set FOLL_GET, > - * because gup fast is always a "pin with a +1 page refcount" request. > - */ > - unsigned int gup_flags = FOLL_GET; > - > - if (write) > - gup_flags |= FOLL_WRITE; > - > - start = untagged_addr(start) & PAGE_MASK; > - len = (unsigned long) nr_pages << PAGE_SHIFT; > - end = start + len; > - > - if (end <= start) > - return 0; > - if (unlikely(!access_ok((void __user *)start, len))) > - return 0; > - > - /* > - * Disable interrupts. We use the nested form as we can already have > - * interrupts disabled by get_futex_key. > - * > - * With interrupts disabled, we block page table pages from being > - * freed from under us. See struct mmu_table_batch comments in > - * include/asm-generic/tlb.h for more details. > - * > - * We do not adopt an rcu_read_lock(.) here as we also want to > - * block IPIs that come from THPs splitting. > - * > - * NOTE! We allow read-only gup_fast() here, but you'd better be > - * careful about possible COW pages. You'll get _a_ COW page, but > - * not necessarily the one you intended to get depending on what > - * COW event happens after this. COW may break the page copy in a > - * random direction. 
> - */ > - > - if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) && > - gup_fast_permitted(start, end)) { > - local_irq_save(flags); > - gup_pgd_range(start, end, gup_flags, pages, &nr_pinned); > - local_irq_restore(flags); > - } > - > - return nr_pinned; > -} > -EXPORT_SYMBOL_GPL(__get_user_pages_fast); > - > static int __gup_longterm_unlocked(unsigned long start, int nr_pages, > unsigned int gup_flags, struct page **pages) > { > @@ -2848,6 +2782,72 @@ static int internal_get_user_pages_fast( > return ret; > } > > +/* > + * Like get_user_pages_fast() except it's IRQ-safe in that it won't fall back to > + * the regular GUP. > + * Note a difference with get_user_pages_fast: this always returns the > + * number of pages pinned, 0 if no pages were pinned. > + * > + * If the architecture does not support this function, simply return with no > + * pages pinned. > + * > + * Careful, careful! COW breaking can go either way, so a non-write > + * access can get ambiguous page results. If you call this function without > + * 'write' set, you'd better be sure that you're ok with that ambiguity. > + */ > +int __get_user_pages_fast(unsigned long start, int nr_pages, int write, > + struct page **pages) > +{ > + unsigned long len, end; > + unsigned long flags; > + int nr_pinned = 0; > + /* > + * Internally (within mm/gup.c), gup fast variants must set FOLL_GET, > + * because gup fast is always a "pin with a +1 page refcount" request. > + */ > + unsigned int gup_flags = FOLL_GET; > + > + if (write) > + gup_flags |= FOLL_WRITE; > + > + start = untagged_addr(start) & PAGE_MASK; > + len = (unsigned long) nr_pages << PAGE_SHIFT; > + end = start + len; > + > + if (end <= start) > + return 0; > + if (unlikely(!access_ok((void __user *)start, len))) > + return 0; > + > + /* > + * Disable interrupts. We use the nested form as we can already have > + * interrupts disabled by get_futex_key. > + * > + * With interrupts disabled, we block page table pages from being > + * freed from under us. 
See struct mmu_table_batch comments in > + * include/asm-generic/tlb.h for more details. > + * > + * We do not adopt an rcu_read_lock(.) here as we also want to > + * block IPIs that come from THPs splitting. > + * > + * NOTE! We allow read-only gup_fast() here, but you'd better be > + * careful about possible COW pages. You'll get _a_ COW page, but > + * not necessarily the one you intended to get depending on what > + * COW event happens after this. COW may break the page copy in a > + * random direction. > + */ > + > + if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) && > + gup_fast_permitted(start, end)) { > + local_irq_save(flags); > + gup_pgd_range(start, end, gup_flags, pages, &nr_pinned); > + local_irq_restore(flags); > + } > + > + return nr_pinned; > +} > +EXPORT_SYMBOL_GPL(__get_user_pages_fast); > + > /** > * get_user_pages_fast() - pin user pages in memory > * @start: starting user address > _ > ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 004/131] mm/gup: refactor and de-duplicate gup_fast() code 2020-06-03 22:55 incoming Andrew Morton ` (2 preceding siblings ...) 2020-06-03 22:56 ` [patch 003/131] mm/gup: move __get_user_pages_fast() down a few lines in gup.c Andrew Morton @ 2020-06-03 22:56 ` Andrew Morton 2020-06-04 2:19 ` Linus Torvalds 2020-06-03 22:56 ` [patch 005/131] mm/gup: introduce pin_user_pages_fast_only() Andrew Morton ` (127 subsequent siblings) 131 siblings, 1 reply; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:56 UTC (permalink / raw) To: airlied, akpm, chris, daniel, jani.nikula, jhubbard, joonas.lahtinen, jrdr.linux, linux-mm, matthew.auld, mm-commits, rodrigo.vivi, torvalds, tvrtko.ursulin, willy From: John Hubbard <jhubbard@nvidia.com> Subject: mm/gup: refactor and de-duplicate gup_fast() code There were two nearly identical sets of code for gup_fast() style of walking the page tables with interrupts disabled. This has led to the usual maintenance problems that arise from having duplicated code. There is already a core internal routine in gup.c for gup_fast(), so just enhance it very slightly: allow skipping the fall-back to "slow" (regular) get_user_pages(), via the new FOLL_FAST_ONLY flag. Then, just call internal_get_user_pages_fast() from __get_user_pages_fast(), and adjust the API to match pre-existing API behavior. There is a change in behavior from this refactoring: the nested form of interrupt disabling is used in all gup_fast() variants now. That's because there is only one place that interrupt disabling for page walking is done, and so the safer form is required. This should, if anything, eliminate possible (rare) bugs, because the non-nested form of enabling interrupts was fragile at best. 
[jhubbard@nvidia.com: fixup] Link: http://lkml.kernel.org/r/20200521233841.1279742-1-jhubbard@nvidia.com Link: http://lkml.kernel.org/r/20200519002124.2025955-3-jhubbard@nvidia.com Signed-off-by: John Hubbard <jhubbard@nvidia.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniel Vetter <daniel@ffwll.ch> Cc: David Airlie <airlied@linux.ie> Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: "Joonas Lahtinen" <joonas.lahtinen@linux.intel.com> Cc: Matthew Auld <matthew.auld@intel.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Souptick Joarder <jrdr.linux@gmail.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/mm.h | 1 mm/gup.c | 61 ++++++++++++++++++++----------------------- 2 files changed, 30 insertions(+), 32 deletions(-) --- a/include/linux/mm.h~mm-gup-refactor-and-de-duplicate-gup_fast-code +++ a/include/linux/mm.h @@ -2816,6 +2816,7 @@ struct page *follow_page(struct vm_area_ #define FOLL_LONGTERM 0x10000 /* mapping lifetime is indefinite: see below */ #define FOLL_SPLIT_PMD 0x20000 /* split huge pmd before returning */ #define FOLL_PIN 0x40000 /* pages must be released via unpin_user_page */ +#define FOLL_FAST_ONLY 0x80000 /* gup_fast: prevent fall-back to slow gup */ /* * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each --- a/mm/gup.c~mm-gup-refactor-and-de-duplicate-gup_fast-code +++ a/mm/gup.c @@ -2731,10 +2731,12 @@ static int internal_get_user_pages_fast( struct page **pages) { unsigned long addr, len, end; + unsigned long flags; int nr_pinned = 0, ret = 0; if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM | - FOLL_FORCE | FOLL_PIN | FOLL_GET))) + FOLL_FORCE | FOLL_PIN | FOLL_GET | + FOLL_FAST_ONLY))) return -EINVAL; start = untagged_addr(start) & PAGE_MASK; @@ -2753,16 +2755,26 @@ static int internal_get_user_pages_fast( * order to avoid confusing the normal COW routines. 
So only * targets that are already writable are safe to do by just * looking at the page tables. + * + * Disable interrupts. The nested form is used, in order to allow full, + * general purpose use of this routine. + * + * With interrupts disabled, we block page table pages from being + * freed from under us. See struct mmu_table_batch comments in + * include/asm-generic/tlb.h for more details. + * + * We do not adopt an rcu_read_lock(.) here as we also want to + * block IPIs that come from THPs splitting. */ if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) && gup_fast_permitted(start, end)) { - local_irq_disable(); + local_irq_save(flags); gup_pgd_range(addr, end, gup_flags | FOLL_WRITE, pages, &nr_pinned); - local_irq_enable(); + local_irq_restore(flags); ret = nr_pinned; } - if (nr_pinned < nr_pages) { + if (nr_pinned < nr_pages && !(gup_flags & FOLL_FAST_ONLY)) { /* Try to get the remaining pages with get_user_pages */ start += nr_pinned << PAGE_SHIFT; pages += nr_pinned; @@ -2798,37 +2810,27 @@ static int internal_get_user_pages_fast( int __get_user_pages_fast(unsigned long start, int nr_pages, int write, struct page **pages) { - unsigned long len, end; - unsigned long flags; - int nr_pinned = 0; + int nr_pinned; /* * Internally (within mm/gup.c), gup fast variants must set FOLL_GET, * because gup fast is always a "pin with a +1 page refcount" request. + * + * FOLL_FAST_ONLY is required in order to match the API description of + * this routine: no fall back to regular ("slow") GUP. */ - unsigned int gup_flags = FOLL_GET; + unsigned int gup_flags = FOLL_GET | FOLL_FAST_ONLY; if (write) gup_flags |= FOLL_WRITE; - start = untagged_addr(start) & PAGE_MASK; - len = (unsigned long) nr_pages << PAGE_SHIFT; - end = start + len; - - if (end <= start) - return 0; - if (unlikely(!access_ok((void __user *)start, len))) - return 0; + nr_pinned = internal_get_user_pages_fast(start, nr_pages, gup_flags, + pages); /* - * Disable interrupts. 
We use the nested form as we can already have - * interrupts disabled by get_futex_key. - * - * With interrupts disabled, we block page table pages from being - * freed from under us. See struct mmu_table_batch comments in - * include/asm-generic/tlb.h for more details. - * - * We do not adopt an rcu_read_lock(.) here as we also want to - * block IPIs that come from THPs splitting. + * As specified in the API description above, this routine is not + * allowed to return negative values. However, the common core + * routine internal_get_user_pages_fast() *can* return -errno. + * Therefore, correct for that here: * * NOTE! We allow read-only gup_fast() here, but you'd better be * careful about possible COW pages. You'll get _a_ COW page, but @@ -2836,13 +2838,8 @@ int __get_user_pages_fast(unsigned long * COW event happens after this. COW may break the page copy in a * random direction. */ - - if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) && - gup_fast_permitted(start, end)) { - local_irq_save(flags); - gup_pgd_range(start, end, gup_flags, pages, &nr_pinned); - local_irq_restore(flags); - } + if (nr_pinned < 0) + nr_pinned = 0; return nr_pinned; } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: [patch 004/131] mm/gup: refactor and de-duplicate gup_fast() code 2020-06-03 22:56 ` [patch 004/131] mm/gup: refactor and de-duplicate gup_fast() code Andrew Morton @ 2020-06-04 2:19 ` Linus Torvalds 2020-06-04 3:19 ` Linus Torvalds 0 siblings, 1 reply; 349+ messages in thread From: Linus Torvalds @ 2020-06-04 2:19 UTC (permalink / raw) To: Andrew Morton Cc: Dave Airlie, Chris Wilson, Daniel Vetter, Jani Nikula, jhubbard, Joonas Lahtinen, jrdr.linux, Linux-MM, matthew.auld, mm-commits, Rodrigo Vivi, tvrtko.ursulin, Matthew Wilcox On Wed, Jun 3, 2020 at 3:56 PM Andrew Morton <akpm@linux-foundation.org> wrote: > > From: John Hubbard <jhubbard@nvidia.com> > Subject: mm/gup: refactor and de-duplicate gup_fast() code > > There were two nearly identical sets of code for gup_fast() style of > walking the page tables with interrupts disabled. This has lead to the > usual maintenance problems that arise from having duplicated code. Andrew, this is actually an example of why you absolutely should *not* rebase your series in the middle of the development tree. Now you've rebased it on top of my commit 17839856fd58 ("gup: document and work around "COW can break either way" issue") and in the process you broke the result completely for read-only pages. Now it uses FOLL_WRITE (because that's what internal_get_user_pages_fast() does), which will disallow read-only pages (in order to handle them properly for COW in the slow path), and then the fact that the slow-path is entirely disabled for this case means that it doesn't work at all. This "rebase onto whatever random base Linus has today" absolutely has *got* to stop. It's not ok for git trees, but it's not ok for these patch-queues either. It means that all the testing your patch queue got in linux-next is completely worthless, because what you send me is something very different from what was tested. Exactly as with the git trees, where I tell people constantly not to rebase their patches. 
Give me a base that it has been tested on, and a series that has actually been tested. Not this "rebased for your convenience" thing. I'd _much_ rather get a merge conflict when your patch series changes something that somebody else also changed. Because then I know something clashed, and if I screw up the merge, I only have myself to blame. If it's a very complex merge, I'll ask for help. That would be much better than getting a patch-bomb with 131 patches that all _look_ sane and build cleanly, but can be randomly broken because they got rebased hours before with no testing. The "let me fix things up onto a daily snapshot" really is a completely broken model. You are making it _harder_ for me, not easier, because now I have to look for subtle issues in every single commit rather than the big honking clue of "oh, I got a merge error, I'll need to really look at it". It so happened that with this one, I was very aware of the rebase, because you rebased on a patch that I wrote so when I looked through the patches I went "Hmm.." What about all the other times when I wouldn't have noticed and been so aware of what changed recently? Again: merge conflicts are *much* better than silently rebasing and hiding problems. Linus ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: [patch 004/131] mm/gup: refactor and de-duplicate gup_fast() code 2020-06-04 2:19 ` Linus Torvalds @ 2020-06-04 3:19 ` Linus Torvalds 2020-06-04 4:31 ` Linus Torvalds 0 siblings, 1 reply; 349+ messages in thread From: Linus Torvalds @ 2020-06-04 3:19 UTC (permalink / raw) To: Andrew Morton Cc: Dave Airlie, Chris Wilson, Daniel Vetter, Jani Nikula, jhubbard, Joonas Lahtinen, jrdr.linux, Linux-MM, matthew.auld, mm-commits, Rodrigo Vivi, tvrtko.ursulin, Matthew Wilcox On Wed, Jun 3, 2020 at 7:19 PM Linus Torvalds <torvalds@linux-foundation.org> wrote: > > Now it uses FOLL_WRITE (because that's what > internal_get_user_pages_fast() does), which will disallow read-only > pages (in order to handle them properly for COW in the slow path), and > then the fact that the slow-path is entirely disabled for this case > means that it doesn't work at all. I have tried to fix it up, partly by editing the patches directly, and partly by then trying to fix up comments after-the-fact. The end result looks possibly correct after it all. But it would have been easier had I just had a merge conflict to deal with, rather than trying to fix up patches. Will do more testing etc before really merging and then pushing out. Linus ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: [patch 004/131] mm/gup: refactor and de-duplicate gup_fast() code 2020-06-04 3:19 ` Linus Torvalds @ 2020-06-04 4:31 ` Linus Torvalds 2020-06-04 5:18 ` John Hubbard 0 siblings, 1 reply; 349+ messages in thread From: Linus Torvalds @ 2020-06-04 4:31 UTC (permalink / raw) To: Andrew Morton Cc: Dave Airlie, Chris Wilson, Daniel Vetter, Jani Nikula, jhubbard, Joonas Lahtinen, jrdr.linux, Linux-MM, matthew.auld, mm-commits, Rodrigo Vivi, tvrtko.ursulin, Matthew Wilcox On Wed, Jun 3, 2020 at 8:19 PM Linus Torvalds <torvalds@linux-foundation.org> wrote: > > I have tried to fix it up, partly by editing the patches directly, and > partly by then trying to fix up comments after-the-fact. The end result passes the smell test, boots for me, and looks like it might work. But I don't have any good real-world test for this, and I hope and assume that John has something GPU-related that actually uses the code and cares. Presumably there was _something_ that triggered those changes to de-duplicate that code? So please give it a look. Because of how I edited the patches (and Andrew edited them before me), what is attributed to John Hubbard isn't really the same as the patch he originally wrote. If I broke something in the process, feel free to let me know in less than polite terms. But it looks better than the intermediate situation that definitely looked like it would just fail entirely on any read-only mappings due to not being able to fall back on the slow case. The drm code probably doesn't even care about the possible ambiguity with GUP picking a COW page that might later break the other way. Linus ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: [patch 004/131] mm/gup: refactor and de-duplicate gup_fast() code 2020-06-04 4:31 ` Linus Torvalds @ 2020-06-04 5:18 ` John Hubbard 0 siblings, 0 replies; 349+ messages in thread From: John Hubbard @ 2020-06-04 5:18 UTC (permalink / raw) To: Linus Torvalds, Andrew Morton Cc: Dave Airlie, Chris Wilson, Daniel Vetter, Jani Nikula, Joonas Lahtinen, jrdr.linux, Linux-MM, matthew.auld, mm-commits, Rodrigo Vivi, tvrtko.ursulin, Matthew Wilcox, Chris Wilson On 2020-06-03 21:31, Linus Torvalds wrote: > On Wed, Jun 3, 2020 at 8:19 PM Linus Torvalds > <torvalds@linux-foundation.org> wrote: >> >> I have tried to fix it up, partly by editing the patches directly, and >> partly by then trying to fix up comments after-the-fact. > > The end result passes the smell test, boots for me, and looks like it > might work. > > But I don't have any good real-world test for this, and I hope and > assume that John has something GPU-related that actually uses the code > and cares. Presumably there was _something_ that triggered those > changes to de-duplicate that code? Yes: the Intel i915 driver required a pin_user_pages*() variant of the gup fast-only code. So the next 2 patches put the refactored code into use: 2170ecfa7688 drm/i915: convert get_user_pages() --> pin_user_pages() 104acc327648 mm/gup: introduce pin_user_pages_fast_only() > > So please give it a look. Because of how I edited the patches (and > Andrew edited them before me), what is attributed to John Hubbard > isn't really the same as the patch he originally wrote. > Looking at it now. I'm pleased to see that the fix is basically identical to a local fix that I was testing an hour ago. The only difference is the name and type of the local fast_flags variable. An unsigned long is larger than the API requires, but that is of course fine for now. As for testing, the original version of this was part of a 4-part series [1] that ended up converting Intel i915 to use pin_user_pages*(). 
And Chris Wilson (+cc) was kind enough to run some drm/i915 CI tests on that and they passed at the time. Also, I have a set of xfstests and a few other things that exercise a fair amount of get_user_pages*() and pin_user_pages*(). Running those now. But my run time testing is not set up for stress testing, and it's a very narrow look at things. But so far it looks promising. [1] https://lore.kernel.org/r/20200522051931.54191-1-jhubbard@nvidia.com thanks, -- John Hubbard NVIDIA ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 005/131] mm/gup: introduce pin_user_pages_fast_only() 2020-06-03 22:55 incoming Andrew Morton ` (3 preceding siblings ...) 2020-06-03 22:56 ` [patch 004/131] mm/gup: refactor and de-duplicate gup_fast() code Andrew Morton @ 2020-06-03 22:56 ` Andrew Morton 2020-06-03 22:56 ` [patch 006/131] drm/i915: convert get_user_pages() --> pin_user_pages() Andrew Morton ` (126 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:56 UTC (permalink / raw) To: airlied, akpm, chris, daniel, jani.nikula, jhubbard, joonas.lahtinen, jrdr.linux, linux-mm, matthew.auld, mm-commits, rodrigo.vivi, torvalds, tvrtko.ursulin, willy From: John Hubbard <jhubbard@nvidia.com> Subject: mm/gup: introduce pin_user_pages_fast_only() This is the FOLL_PIN equivalent of __get_user_pages_fast(), except with a more descriptive name, and gup_flags instead of a boolean "write" in the argument list. Link: http://lkml.kernel.org/r/20200519002124.2025955-4-jhubbard@nvidia.com Signed-off-by: John Hubbard <jhubbard@nvidia.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniel Vetter <daniel@ffwll.ch> Cc: David Airlie <airlied@linux.ie> Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: "Joonas Lahtinen" <joonas.lahtinen@linux.intel.com> Cc: Matthew Auld <matthew.auld@intel.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Souptick Joarder <jrdr.linux@gmail.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/mm.h | 2 ++ mm/gup.c | 36 ++++++++++++++++++++++++++++++++++++ 2 files changed, 38 insertions(+) --- a/include/linux/mm.h~mm-gup-introduce-pin_user_pages_fast_only +++ a/include/linux/mm.h @@ -1827,6 +1827,8 @@ extern int mprotect_fixup(struct vm_area */ int __get_user_pages_fast(unsigned long start, int nr_pages, int write, struct page **pages); +int pin_user_pages_fast_only(unsigned long start, int nr_pages, + 
unsigned int gup_flags, struct page **pages); /* * per-process(per-mm_struct) statistics. */ --- a/mm/gup.c~mm-gup-introduce-pin_user_pages_fast_only +++ a/mm/gup.c @@ -2913,6 +2913,42 @@ int pin_user_pages_fast(unsigned long st } EXPORT_SYMBOL_GPL(pin_user_pages_fast); +/* + * This is the FOLL_PIN equivalent of __get_user_pages_fast(). Behavior is the + * same, except that this one sets FOLL_PIN instead of FOLL_GET. + * + * The API rules are the same, too: no negative values may be returned. + */ +int pin_user_pages_fast_only(unsigned long start, int nr_pages, + unsigned int gup_flags, struct page **pages) +{ + int nr_pinned; + + /* + * FOLL_GET and FOLL_PIN are mutually exclusive. Note that the API + * rules require returning 0, rather than -errno: + */ + if (WARN_ON_ONCE(gup_flags & FOLL_GET)) + return 0; + /* + * FOLL_FAST_ONLY is required in order to match the API description of + * this routine: no fall back to regular ("slow") GUP. + */ + gup_flags |= (FOLL_PIN | FOLL_FAST_ONLY); + nr_pinned = internal_get_user_pages_fast(start, nr_pages, gup_flags, + pages); + /* + * This routine is not allowed to return negative values. However, + * internal_get_user_pages_fast() *can* return -errno. Therefore, + * correct for that here: + */ + if (nr_pinned < 0) + nr_pinned = 0; + + return nr_pinned; +} +EXPORT_SYMBOL_GPL(pin_user_pages_fast_only); + /** * pin_user_pages_remote() - pin pages of a remote process (task != current) * _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 006/131] drm/i915: convert get_user_pages() --> pin_user_pages() 2020-06-03 22:55 incoming Andrew Morton ` (4 preceding siblings ...) 2020-06-03 22:56 ` [patch 005/131] mm/gup: introduce pin_user_pages_fast_only() Andrew Morton @ 2020-06-03 22:56 ` Andrew Morton 2020-06-03 22:56 ` [patch 007/131] mm/gup: might_lock_read(mmap_sem) in get_user_pages_fast() Andrew Morton ` (125 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:56 UTC (permalink / raw) To: airlied, akpm, chris, daniel, jani.nikula, jhubbard, joonas.lahtinen, jrdr.linux, linux-mm, matthew.auld, mm-commits, rodrigo.vivi, torvalds, tvrtko.ursulin, willy From: John Hubbard <jhubbard@nvidia.com> Subject: drm/i915: convert get_user_pages() --> pin_user_pages() This code was using get_user_pages*(), in a "Case 2" scenario (DMA/RDMA), using the categorization from [1]. That means that it's time to convert the get_user_pages*() + put_page() calls to pin_user_pages*() + unpin_user_pages() calls. There is some helpful background in [2]: basically, this is a small part of fixing a long-standing disconnect between pinning pages, and file systems' use of those pages. 
[1] Documentation/core-api/pin_user_pages.rst [2] "Explicit pinning of user-space pages": https://lwn.net/Articles/807108/ Link: http://lkml.kernel.org/r/20200519002124.2025955-5-jhubbard@nvidia.com Signed-off-by: John Hubbard <jhubbard@nvidia.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Souptick Joarder <jrdr.linux@gmail.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: "Joonas Lahtinen" <joonas.lahtinen@linux.intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: David Airlie <airlied@linux.ie> Cc: Daniel Vetter <daniel@ffwll.ch> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 22 ++++++++++-------- 1 file changed, 13 insertions(+), 9 deletions(-) --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c~drm-i915-convert-get_user_pages-pin_user_pages +++ a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c @@ -471,7 +471,7 @@ __i915_gem_userptr_get_pages_worker(stru down_read(&mm->mmap_sem); locked = 1; } - ret = get_user_pages_remote + ret = pin_user_pages_remote (work->task, mm, obj->userptr.ptr + pinned * PAGE_SIZE, npages - pinned, @@ -507,7 +507,7 @@ __i915_gem_userptr_get_pages_worker(stru } mutex_unlock(&obj->mm.lock); - release_pages(pvec, pinned); + unpin_user_pages(pvec, pinned); kvfree(pvec); i915_gem_object_put(obj); @@ -564,6 +564,7 @@ static int i915_gem_userptr_get_pages(st struct sg_table *pages; bool active; int pinned; + unsigned int gup_flags = 0; /* If userspace should engineer that these pages are replaced in * the vma between us binding this page into the GTT and completion @@ -606,11 +607,14 @@ static int i915_gem_userptr_get_pages(st * * We may or may not care. 
*/ - if (pvec) /* defer to worker if malloc fails */ - pinned = __get_user_pages_fast(obj->userptr.ptr, - num_pages, - !i915_gem_object_is_readonly(obj), - pvec); + if (pvec) { + /* defer to worker if malloc fails */ + if (!i915_gem_object_is_readonly(obj)) + gup_flags |= FOLL_WRITE; + pinned = pin_user_pages_fast_only(obj->userptr.ptr, + num_pages, gup_flags, + pvec); + } } active = false; @@ -628,7 +632,7 @@ static int i915_gem_userptr_get_pages(st __i915_gem_userptr_set_active(obj, true); if (IS_ERR(pages)) - release_pages(pvec, pinned); + unpin_user_pages(pvec, pinned); kvfree(pvec); return PTR_ERR_OR_ZERO(pages); @@ -683,7 +687,7 @@ i915_gem_userptr_put_pages(struct drm_i9 } mark_page_accessed(page); - put_page(page); + unpin_user_page(page); } obj->mm.dirty = false; _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 007/131] mm/gup: might_lock_read(mmap_sem) in get_user_pages_fast() 2020-06-03 22:55 incoming Andrew Morton ` (5 preceding siblings ...) 2020-06-03 22:56 ` [patch 006/131] drm/i915: convert get_user_pages() --> pin_user_pages() Andrew Morton @ 2020-06-03 22:56 ` Andrew Morton 2020-06-03 22:56 ` [patch 008/131] kasan: stop tests being eliminated as dead code with FORTIFY_SOURCE Andrew Morton ` (124 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:56 UTC (permalink / raw) To: akpm, jgg, jhubbard, linux-mm, mm-commits, torvalds, walken, willy From: John Hubbard <jhubbard@nvidia.com> Subject: mm/gup: might_lock_read(mmap_sem) in get_user_pages_fast() Instead of scattering these assertions across the drivers, do this assertion inside the core of get_user_pages_fast*() functions. That also includes pin_user_pages_fast*() routines. Add a might_lock_read(mmap_sem) call to internal_get_user_pages_fast(). Link: http://lkml.kernel.org/r/20200522010443.1290485-1-jhubbard@nvidia.com Signed-off-by: John Hubbard <jhubbard@nvidia.com> Suggested-by: Matthew Wilcox <willy@infradead.org> Reviewed-by: Matthew Wilcox <willy@infradead.org> Cc: Michel Lespinasse <walken@google.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/gup.c | 3 +++ 1 file changed, 3 insertions(+) --- a/mm/gup.c~mm-gup-might_lock_readmmap_sem-in-get_user_pages_fast +++ a/mm/gup.c @@ -2739,6 +2739,9 @@ static int internal_get_user_pages_fast( FOLL_FAST_ONLY))) return -EINVAL; + if (!(gup_flags & FOLL_FAST_ONLY)) + might_lock_read(¤t->mm->mmap_sem); + start = untagged_addr(start) & PAGE_MASK; addr = start; len = (unsigned long) nr_pages << PAGE_SHIFT; _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 008/131] kasan: stop tests being eliminated as dead code with FORTIFY_SOURCE 2020-06-03 22:55 incoming Andrew Morton ` (6 preceding siblings ...) 2020-06-03 22:56 ` [patch 007/131] mm/gup: might_lock_read(mmap_sem) in get_user_pages_fast() Andrew Morton @ 2020-06-03 22:56 ` Andrew Morton 2020-06-03 22:56 ` [patch 009/131] string.h: fix incompatibility between FORTIFY_SOURCE and KASAN Andrew Morton ` (123 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:56 UTC (permalink / raw) To: akpm, aryabinin, danielmicay, davidgow, dja, dvyukov, glider, linux-mm, mm-commits, torvalds From: Daniel Axtens <dja@axtens.net> Subject: kasan: stop tests being eliminated as dead code with FORTIFY_SOURCE Patch series "Fix some incompatibilities between KASAN and FORTIFY_SOURCE", v4. 3 KASAN self-tests fail on a kernel with both KASAN and FORTIFY_SOURCE: memchr, memcmp and strlen. When FORTIFY_SOURCE is on, a number of functions are replaced with fortified versions, which attempt to check the sizes of the operands. However, these functions often directly invoke __builtin_foo() once they have performed the fortify check. The compiler can detect that the results of these functions are not used, and knows that they have no other side effects, and so can eliminate them as dead code. Why are only memchr, memcmp and strlen affected? ================================================ Of string and string-like functions, kasan_test tests: * strchr -> not affected, no fortified version * strrchr -> likewise * strcmp -> likewise * strncmp -> likewise * strnlen -> not affected, the fortify source implementation calls the underlying strnlen implementation which is instrumented, not a builtin * strlen -> affected, the fortify source implementation calls a __builtin version which the compiler can determine is dead. 
* memchr -> likewise * memcmp -> likewise * memset -> not affected, the compiler knows that memset writes to its first argument and therefore is not dead. Why does this not affect the functions normally? ================================================ In string.h, these functions are not marked as __pure, so the compiler cannot know that they do not have side effects. If relevant functions are marked as __pure in string.h, we see the following warnings and the functions are elided: lib/test_kasan.c: In function `kasan_memchr': lib/test_kasan.c:606:2: warning: statement with no effect [-Wunused-value] memchr(ptr, '1', size + 1); ^~~~~~~~~~~~~~~~~~~~~~~~~~ lib/test_kasan.c: In function `kasan_memcmp': lib/test_kasan.c:622:2: warning: statement with no effect [-Wunused-value] memcmp(ptr, arr, size+1); ^~~~~~~~~~~~~~~~~~~~~~~~ lib/test_kasan.c: In function `kasan_strings': lib/test_kasan.c:645:2: warning: statement with no effect [-Wunused-value] strchr(ptr, '1'); ^~~~~~~~~~~~~~~~ ... This annotation would make sense to add and could be added at any point, so the behaviour of test_kasan.c should change. The fix ======= Make all the functions that are pure write their results to a global, which makes them live. The strlen and memchr tests now pass. The memcmp test still fails to trigger, which is addressed in the next patch. 
[dja@axtens.net: drop patch 3] Link: http://lkml.kernel.org/r/20200424145521.8203-2-dja@axtens.net Link: http://lkml.kernel.org/r/20200423154503.5103-1-dja@axtens.net Link: http://lkml.kernel.org/r/20200423154503.5103-2-dja@axtens.net Fixes: 0c96350a2d2f ("lib/test_kasan.c: add tests for several string/memory API functions") Signed-off-by: Daniel Axtens <dja@axtens.net> Reviewed-by: Dmitry Vyukov <dvyukov@google.com> Tested-by: David Gow <davidgow@google.com> Cc: Daniel Micay <danielmicay@gmail.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Alexander Potapenko <glider@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- lib/test_kasan.c | 29 +++++++++++++++++++---------- 1 file changed, 19 insertions(+), 10 deletions(-) --- a/lib/test_kasan.c~kasan-stop-tests-being-eliminated-as-dead-code-with-fortify_source +++ a/lib/test_kasan.c @@ -24,6 +24,14 @@ #include <asm/page.h> /* + * We assign some test results to these globals to make sure the tests + * are not eliminated as dead code. + */ + +int kasan_int_result; +void *kasan_ptr_result; + +/* * Note: test functions are marked noinline so that their names appear in * reports. */ @@ -622,7 +630,7 @@ static noinline void __init kasan_memchr if (!ptr) return; - memchr(ptr, '1', size + 1); + kasan_ptr_result = memchr(ptr, '1', size + 1); kfree(ptr); } @@ -638,7 +646,7 @@ static noinline void __init kasan_memcmp return; memset(arr, 0, sizeof(arr)); - memcmp(ptr, arr, size+1); + kasan_int_result = memcmp(ptr, arr, size + 1); kfree(ptr); } @@ -661,22 +669,22 @@ static noinline void __init kasan_string * will likely point to zeroed byte. 
*/ ptr += 16; - strchr(ptr, '1'); + kasan_ptr_result = strchr(ptr, '1'); pr_info("use-after-free in strrchr\n"); - strrchr(ptr, '1'); + kasan_ptr_result = strrchr(ptr, '1'); pr_info("use-after-free in strcmp\n"); - strcmp(ptr, "2"); + kasan_int_result = strcmp(ptr, "2"); pr_info("use-after-free in strncmp\n"); - strncmp(ptr, "2", 1); + kasan_int_result = strncmp(ptr, "2", 1); pr_info("use-after-free in strlen\n"); - strlen(ptr); + kasan_int_result = strlen(ptr); pr_info("use-after-free in strnlen\n"); - strnlen(ptr, 1); + kasan_int_result = strnlen(ptr, 1); } static noinline void __init kasan_bitops(void) @@ -743,11 +751,12 @@ static noinline void __init kasan_bitops __test_and_change_bit(BITS_PER_LONG + BITS_PER_BYTE, bits); pr_info("out-of-bounds in test_bit\n"); - (void)test_bit(BITS_PER_LONG + BITS_PER_BYTE, bits); + kasan_int_result = test_bit(BITS_PER_LONG + BITS_PER_BYTE, bits); #if defined(clear_bit_unlock_is_negative_byte) pr_info("out-of-bounds in clear_bit_unlock_is_negative_byte\n"); - clear_bit_unlock_is_negative_byte(BITS_PER_LONG + BITS_PER_BYTE, bits); + kasan_int_result = clear_bit_unlock_is_negative_byte(BITS_PER_LONG + + BITS_PER_BYTE, bits); #endif kfree(bits); } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 009/131] string.h: fix incompatibility between FORTIFY_SOURCE and KASAN 2020-06-03 22:55 incoming Andrew Morton ` (7 preceding siblings ...) 2020-06-03 22:56 ` [patch 008/131] kasan: stop tests being eliminated as dead code with FORTIFY_SOURCE Andrew Morton @ 2020-06-03 22:56 ` Andrew Morton 2020-06-03 22:56 ` [patch 010/131] mm: clarify __GFP_MEMALLOC usage Andrew Morton ` (122 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:56 UTC (permalink / raw) To: akpm, aryabinin, danielmicay, davidgow, dja, dvyukov, glider, linux-mm, mm-commits, torvalds From: Daniel Axtens <dja@axtens.net> Subject: string.h: fix incompatibility between FORTIFY_SOURCE and KASAN The memcmp KASAN self-test fails on a kernel with both KASAN and FORTIFY_SOURCE. When FORTIFY_SOURCE is on, a number of functions are replaced with fortified versions, which attempt to check the sizes of the operands. However, these functions often directly invoke __builtin_foo() once they have performed the fortify check. Using __builtins may bypass KASAN checks if the compiler decides to inline its own implementation as a sequence of instructions, rather than emit a function call that goes out to a KASAN-instrumented implementation. Why is only memcmp affected? ============================ Of the string and string-like functions that kasan_test tests, only memcmp is replaced by an inline sequence of instructions in my testing on x86 with gcc version 9.2.1 20191008 (Ubuntu 9.2.1-9ubuntu2). I believe this is due to compiler heuristics. For example, if I annotate kmalloc calls with the alloc_size annotation (and disable some fortify compile-time checking!), the compiler will replace every memset except the one in kmalloc_uaf_memset with inline instructions. (I have some WIP patches to add this annotation.) Does this affect other functions in string.h? ============================================= Yes. 
Anything that uses __builtin_* rather than __real_* could be affected. This looks like: - strncpy - strcat - strlen - strlcpy maybe, under some circumstances? - strncat under some circumstances - memset - memcpy - memmove - memcmp (as noted) - memchr - strcpy Whether a function call is emitted always depends on the compiler. Most bugs should get caught by FORTIFY_SOURCE, but the missed memcmp test shows that this is not always the case. Isn't FORTIFY_SOURCE disabled with KASAN? ========================================= The string headers on all arches supporting KASAN disable fortify with kasan, but only when address sanitisation is _also_ disabled. For example from x86: #if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__) /* * For files that are not instrumented (e.g. mm/slub.c) we * should use not instrumented version of mem* functions. */ #define memcpy(dst, src, len) __memcpy(dst, src, len) #define memmove(dst, src, len) __memmove(dst, src, len) #define memset(s, c, n) __memset(s, c, n) #ifndef __NO_FORTIFY #define __NO_FORTIFY /* FORTIFY_SOURCE uses __builtin_memcpy, etc. */ #endif #endif This comes from commit 6974f0c4555e ("include/linux/string.h: add the option of fortified string.h functions"), and doesn't work when KASAN is enabled and the file is supposed to be sanitised - as with test_kasan.c. I'm pretty sure this is not wrong, but not as expansive as it should be: * we shouldn't use __builtin_memcpy etc. in files where we don't have instrumentation - it could devolve into a function call to memcpy, which will be instrumented. Rather, we should use __memcpy which by convention is not instrumented. * we also shouldn't be using __builtin_memcpy when we have a KASAN instrumented file, because it could be replaced with inline asm that will not be instrumented. What is correct behaviour? ========================== Firstly, there is some overlap between fortification and KASAN: both provide some level of _runtime_ checking. 
Only fortify provides compile-time checking. KASAN and fortify can pick up different things at runtime: - Some fortify functions, notably the string functions, could easily be modified to consider sub-object sizes (e.g. members within a struct), and I have some WIP patches to do this. KASAN cannot detect these because it cannot insert poison between members of a struct. - KASAN can detect many over-reads/over-writes when the sizes of both operands are unknown, which fortify cannot. So there are a couple of options: 1) Flip the test: disable fortify in sanitised files and enable it in unsanitised files. This at least stops us missing KASAN checking, but we lose the fortify checking. 2) Make the fortify code always call out to real versions. Do this only for KASAN, for fear of losing the inlining opportunities we get from __builtin_*. (We can't use kasan_check_{read,write}: because the fortify functions are _extern inline_, you can't include _static_ inline functions without a compiler warning. kasan_check_{read,write} are static inline so we can't use them even when they would otherwise be suitable.) Take approach 2 and call out to real versions when KASAN is enabled. Use __underlying_foo to distinguish from __real_foo: __real_foo always refers to the kernel's implementation of foo, __underlying_foo could be either the kernel implementation or the __builtin_foo implementation. This is sometimes enough to make the memcmp test succeed with FORTIFY_SOURCE enabled. It is at least enough to get the function call into the module. One more fix is needed to make it reliable: see the next patch. 
Link: http://lkml.kernel.org/r/20200423154503.5103-3-dja@axtens.net Fixes: 6974f0c4555e ("include/linux/string.h: add the option of fortified string.h functions") Signed-off-by: Daniel Axtens <dja@axtens.net> Reviewed-by: Dmitry Vyukov <dvyukov@google.com> Tested-by: David Gow <davidgow@google.com> Cc: Daniel Micay <danielmicay@gmail.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Alexander Potapenko <glider@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/string.h | 60 +++++++++++++++++++++++++++++++-------- 1 file changed, 48 insertions(+), 12 deletions(-) --- a/include/linux/string.h~stringh-fix-incompatibility-between-fortify_source-and-kasan +++ a/include/linux/string.h @@ -272,6 +272,31 @@ void __read_overflow3(void) __compiletim void __write_overflow(void) __compiletime_error("detected write beyond size of object passed as 1st parameter"); #if !defined(__NO_FORTIFY) && defined(__OPTIMIZE__) && defined(CONFIG_FORTIFY_SOURCE) + +#ifdef CONFIG_KASAN +extern void *__underlying_memchr(const void *p, int c, __kernel_size_t size) __RENAME(memchr); +extern int __underlying_memcmp(const void *p, const void *q, __kernel_size_t size) __RENAME(memcmp); +extern void *__underlying_memcpy(void *p, const void *q, __kernel_size_t size) __RENAME(memcpy); +extern void *__underlying_memmove(void *p, const void *q, __kernel_size_t size) __RENAME(memmove); +extern void *__underlying_memset(void *p, int c, __kernel_size_t size) __RENAME(memset); +extern char *__underlying_strcat(char *p, const char *q) __RENAME(strcat); +extern char *__underlying_strcpy(char *p, const char *q) __RENAME(strcpy); +extern __kernel_size_t __underlying_strlen(const char *p) __RENAME(strlen); +extern char *__underlying_strncat(char *p, const char *q, __kernel_size_t count) __RENAME(strncat); +extern char *__underlying_strncpy(char *p, const char *q, __kernel_size_t size) __RENAME(strncpy); +#else +#define __underlying_memchr __builtin_memchr +#define 
__underlying_memcmp __builtin_memcmp +#define __underlying_memcpy __builtin_memcpy +#define __underlying_memmove __builtin_memmove +#define __underlying_memset __builtin_memset +#define __underlying_strcat __builtin_strcat +#define __underlying_strcpy __builtin_strcpy +#define __underlying_strlen __builtin_strlen +#define __underlying_strncat __builtin_strncat +#define __underlying_strncpy __builtin_strncpy +#endif + __FORTIFY_INLINE char *strncpy(char *p, const char *q, __kernel_size_t size) { size_t p_size = __builtin_object_size(p, 0); @@ -279,14 +304,14 @@ __FORTIFY_INLINE char *strncpy(char *p, __write_overflow(); if (p_size < size) fortify_panic(__func__); - return __builtin_strncpy(p, q, size); + return __underlying_strncpy(p, q, size); } __FORTIFY_INLINE char *strcat(char *p, const char *q) { size_t p_size = __builtin_object_size(p, 0); if (p_size == (size_t)-1) - return __builtin_strcat(p, q); + return __underlying_strcat(p, q); if (strlcat(p, q, p_size) >= p_size) fortify_panic(__func__); return p; @@ -300,7 +325,7 @@ __FORTIFY_INLINE __kernel_size_t strlen( /* Work around gcc excess stack consumption issue */ if (p_size == (size_t)-1 || (__builtin_constant_p(p[p_size - 1]) && p[p_size - 1] == '\0')) - return __builtin_strlen(p); + return __underlying_strlen(p); ret = strnlen(p, p_size); if (p_size <= ret) fortify_panic(__func__); @@ -333,7 +358,7 @@ __FORTIFY_INLINE size_t strlcpy(char *p, __write_overflow(); if (len >= p_size) fortify_panic(__func__); - __builtin_memcpy(p, q, len); + __underlying_memcpy(p, q, len); p[len] = '\0'; } return ret; @@ -346,12 +371,12 @@ __FORTIFY_INLINE char *strncat(char *p, size_t p_size = __builtin_object_size(p, 0); size_t q_size = __builtin_object_size(q, 0); if (p_size == (size_t)-1 && q_size == (size_t)-1) - return __builtin_strncat(p, q, count); + return __underlying_strncat(p, q, count); p_len = strlen(p); copy_len = strnlen(q, count); if (p_size < p_len + copy_len + 1) fortify_panic(__func__); - __builtin_memcpy(p 
+ p_len, q, copy_len); + __underlying_memcpy(p + p_len, q, copy_len); p[p_len + copy_len] = '\0'; return p; } @@ -363,7 +388,7 @@ __FORTIFY_INLINE void *memset(void *p, i __write_overflow(); if (p_size < size) fortify_panic(__func__); - return __builtin_memset(p, c, size); + return __underlying_memset(p, c, size); } __FORTIFY_INLINE void *memcpy(void *p, const void *q, __kernel_size_t size) @@ -378,7 +403,7 @@ __FORTIFY_INLINE void *memcpy(void *p, c } if (p_size < size || q_size < size) fortify_panic(__func__); - return __builtin_memcpy(p, q, size); + return __underlying_memcpy(p, q, size); } __FORTIFY_INLINE void *memmove(void *p, const void *q, __kernel_size_t size) @@ -393,7 +418,7 @@ __FORTIFY_INLINE void *memmove(void *p, } if (p_size < size || q_size < size) fortify_panic(__func__); - return __builtin_memmove(p, q, size); + return __underlying_memmove(p, q, size); } extern void *__real_memscan(void *, int, __kernel_size_t) __RENAME(memscan); @@ -419,7 +444,7 @@ __FORTIFY_INLINE int memcmp(const void * } if (p_size < size || q_size < size) fortify_panic(__func__); - return __builtin_memcmp(p, q, size); + return __underlying_memcmp(p, q, size); } __FORTIFY_INLINE void *memchr(const void *p, int c, __kernel_size_t size) @@ -429,7 +454,7 @@ __FORTIFY_INLINE void *memchr(const void __read_overflow(); if (p_size < size) fortify_panic(__func__); - return __builtin_memchr(p, c, size); + return __underlying_memchr(p, c, size); } void *__real_memchr_inv(const void *s, int c, size_t n) __RENAME(memchr_inv); @@ -460,11 +485,22 @@ __FORTIFY_INLINE char *strcpy(char *p, c size_t p_size = __builtin_object_size(p, 0); size_t q_size = __builtin_object_size(q, 0); if (p_size == (size_t)-1 && q_size == (size_t)-1) - return __builtin_strcpy(p, q); + return __underlying_strcpy(p, q); memcpy(p, q, strlen(q) + 1); return p; } +/* Don't use these outside the FORITFY_SOURCE implementation */ +#undef __underlying_memchr +#undef __underlying_memcmp +#undef __underlying_memcpy +#undef 
__underlying_memmove +#undef __underlying_memset +#undef __underlying_strcat +#undef __underlying_strcpy +#undef __underlying_strlen +#undef __underlying_strncat +#undef __underlying_strncpy #endif /** _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 010/131] mm: clarify __GFP_MEMALLOC usage 2020-06-03 22:55 incoming Andrew Morton ` (8 preceding siblings ...) 2020-06-03 22:56 ` [patch 009/131] string.h: fix incompatibility between FORTIFY_SOURCE and KASAN Andrew Morton @ 2020-06-03 22:56 ` Andrew Morton 2020-06-03 22:56 ` [patch 011/131] mm: memblock: replace dereferences of memblock_region.nid with API calls Andrew Morton ` (121 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:56 UTC (permalink / raw) To: akpm, jhubbard, joel, linux-mm, mhocko, mm-commits, neilb, paulmck, rientjes, torvalds From: Michal Hocko <mhocko@suse.com> Subject: mm: clarify __GFP_MEMALLOC usage The existing documentation is not explicit enough about the expected usage and potential risks. While it calls out that users have to free memory when using this flag, it is not really apparent that users have to be careful not to deplete memory reserves and that they should implement some sort of throttling wrt. the freeing process. This is partly based on Neil's explanation [1]. Let's also call out that a pre-allocated pool allocator should be considered. [1] http://lkml.kernel.org/r/877dz0yxoa.fsf@notabene.neil.brown.name [akpm@linux-foundation.org: coding style fixes] [mhocko@kernel.org: update] Link: http://lkml.kernel.org/r/20200406070137.GC19426@dhcp22.suse.cz Link: http://lkml.kernel.org/r/20200403083543.11552-2-mhocko@kernel.org Signed-off-by: Michal Hocko <mhocko@suse.com> Cc: David Rientjes <rientjes@google.com> Cc: Joel Fernandes <joel@joelfernandes.org> Cc: Neil Brown <neilb@suse.de> Cc: Paul E. 
McKenney <paulmck@kernel.org> Cc: John Hubbard <jhubbard@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/gfp.h | 5 +++++ 1 file changed, 5 insertions(+) --- a/include/linux/gfp.h~mm-clarify-__gfp_memalloc-usage +++ a/include/linux/gfp.h @@ -110,6 +110,11 @@ struct vm_area_struct; * the caller guarantees the allocation will allow more memory to be freed * very shortly e.g. process exiting or swapping. Users either should * be the MM or co-ordinating closely with the VM (e.g. swap over NFS). + * Users of this flag have to be extremely careful to not deplete the reserve + * completely and implement a throttling mechanism which controls the + * consumption of the reserve based on the amount of freed memory. + * Usage of a pre-allocated pool (e.g. mempool) should be always considered + * before using this flag. * * %__GFP_NOMEMALLOC is used to explicitly forbid access to emergency reserves. * This takes precedence over the %__GFP_MEMALLOC flag if both are set. _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 011/131] mm: memblock: replace dereferences of memblock_region.nid with API calls 2020-06-03 22:55 incoming Andrew Morton ` (9 preceding siblings ...) 2020-06-03 22:56 ` [patch 010/131] mm: clarify __GFP_MEMALLOC usage Andrew Morton @ 2020-06-03 22:56 ` Andrew Morton 2020-06-03 22:56 ` [patch 012/131] mm: make early_pfn_to_nid() and related definitions close to each other Andrew Morton ` (120 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:56 UTC (permalink / raw) To: akpm, bcain, bhe, catalin.marinas, corbet, dalias, davem, deller, geert, gerg, green.hu, guoren, gxt, heiko.carstens, hoan, James.Bottomley, jcmvbkbc, ley.foon.tan, linux-mm, linux, mattst88, mhocko, mm-commits, monstr, mpe, msalter, nickhu, paul.walmsley, richard, rppt, rppt, shorne, tony.luck, torvalds, tsbogend, vgupta, ysato From: Mike Rapoport <rppt@linux.ibm.com> Subject: mm: memblock: replace dereferences of memblock_region.nid with API calls Patch series "mm: rework free_area_init*() functions". After the discussion [1] about removal of CONFIG_NODES_SPAN_OTHER_NODES and CONFIG_HAVE_MEMBLOCK_NODE_MAP options, I took it a bit further and updated the node/zone initialization. Since all architectures have memblock, it is possible to use only the newer version of free_area_init_node() that calculates the zone and node boundaries based on memblock node mapping and architectural limits on possible zone PFNs. The architectures that still determined zone and hole sizes themselves can be switched to the generic code, and the old code that took those zone and hole sizes can simply be removed. And, since it all started from the removal of CONFIG_NODES_SPAN_OTHER_NODES, memmap_init() is now updated to iterate over memblocks and so it does not need to perform an early_pfn_to_nid() query for every PFN. 
[1] https://lore.kernel.org/lkml/1585420282-25630-1-git-send-email-Hoan@os.amperecomputing.com This patch (of 21): There are several places in the code that directly dereference memblock_region.nid despite this field being defined only when CONFIG_HAVE_MEMBLOCK_NODE_MAP=y. Replace these with calls to memblock_get_region_nid() to improve code robustness and to avoid possible breakage when CONFIG_HAVE_MEMBLOCK_NODE_MAP will be removed. Link: http://lkml.kernel.org/r/20200412194859.12663-1-rppt@kernel.org Link: http://lkml.kernel.org/r/20200412194859.12663-2-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Reviewed-by: Baoquan He <bhe@redhat.com> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. 
Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Mike Rapoport <rppt@kernel.org> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/arm64/mm/numa.c | 9 ++++++--- arch/x86/mm/numa.c | 6 ++++-- mm/memblock.c | 8 +++++--- mm/page_alloc.c | 4 ++-- 4 files changed, 17 insertions(+), 10 deletions(-) --- a/arch/arm64/mm/numa.c~mm-memblock-replace-dereferences-of-memblock_regionnid-with-api-calls +++ a/arch/arm64/mm/numa.c @@ -350,13 +350,16 @@ static int __init numa_register_nodes(vo struct memblock_region *mblk; /* Check that valid nid is set to memblks */ - for_each_memblock(memory, mblk) - if (mblk->nid == NUMA_NO_NODE || mblk->nid >= MAX_NUMNODES) { + for_each_memblock(memory, mblk) { + int mblk_nid = memblock_get_region_node(mblk); + + if (mblk_nid == NUMA_NO_NODE || mblk_nid >= MAX_NUMNODES) { pr_warn("Warning: invalid memblk node %d [mem %#010Lx-%#010Lx]\n", - mblk->nid, mblk->base, + mblk_nid, mblk->base, mblk->base + mblk->size - 1); return -EINVAL; } + } /* Finally register nodes. 
*/ for_each_node_mask(nid, numa_nodes_parsed) { --- a/arch/x86/mm/numa.c~mm-memblock-replace-dereferences-of-memblock_regionnid-with-api-calls +++ a/arch/x86/mm/numa.c @@ -517,8 +517,10 @@ static void __init numa_clear_kernel_nod * reserve specific pages for Sandy Bridge graphics. ] */ for_each_memblock(reserved, mb_region) { - if (mb_region->nid != MAX_NUMNODES) - node_set(mb_region->nid, reserved_nodemask); + int nid = memblock_get_region_node(mb_region); + + if (nid != MAX_NUMNODES) + node_set(nid, reserved_nodemask); } /* --- a/mm/memblock.c~mm-memblock-replace-dereferences-of-memblock_regionnid-with-api-calls +++ a/mm/memblock.c @@ -1207,13 +1207,15 @@ void __init_memblock __next_mem_pfn_rang { struct memblock_type *type = &memblock.memory; struct memblock_region *r; + int r_nid; while (++*idx < type->cnt) { r = &type->regions[*idx]; + r_nid = memblock_get_region_node(r); if (PFN_UP(r->base) >= PFN_DOWN(r->base + r->size)) continue; - if (nid == MAX_NUMNODES || nid == r->nid) + if (nid == MAX_NUMNODES || nid == r_nid) break; } if (*idx >= type->cnt) { @@ -1226,7 +1228,7 @@ void __init_memblock __next_mem_pfn_rang if (out_end_pfn) *out_end_pfn = PFN_DOWN(r->base + r->size); if (out_nid) - *out_nid = r->nid; + *out_nid = r_nid; } /** @@ -1810,7 +1812,7 @@ int __init_memblock memblock_search_pfn_ *start_pfn = PFN_DOWN(type->regions[mid].base); *end_pfn = PFN_DOWN(type->regions[mid].base + type->regions[mid].size); - return type->regions[mid].nid; + return memblock_get_region_node(&type->regions[mid]); } #endif --- a/mm/page_alloc.c~mm-memblock-replace-dereferences-of-memblock_regionnid-with-api-calls +++ a/mm/page_alloc.c @@ -7220,7 +7220,7 @@ static void __init find_zone_movable_pfn if (!memblock_is_hotpluggable(r)) continue; - nid = r->nid; + nid = memblock_get_region_node(r); usable_startpfn = PFN_DOWN(r->base); zone_movable_pfn[nid] = zone_movable_pfn[nid] ? 
@@ -7241,7 +7241,7 @@ static void __init find_zone_movable_pfn if (memblock_is_mirror(r)) continue; - nid = r->nid; + nid = memblock_get_region_node(r); usable_startpfn = memblock_region_memory_base_pfn(r); _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 012/131] mm: make early_pfn_to_nid() and related definitions close to each other 2020-06-03 22:55 incoming Andrew Morton ` (10 preceding siblings ...) 2020-06-03 22:56 ` [patch 011/131] mm: memblock: replace dereferences of memblock_region.nid with API calls Andrew Morton @ 2020-06-03 22:56 ` Andrew Morton 2020-06-03 22:57 ` [patch 013/131] mm: remove CONFIG_HAVE_MEMBLOCK_NODE_MAP option Andrew Morton ` (119 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:56 UTC (permalink / raw) To: akpm, bcain, bhe, catalin.marinas, corbet, dalias, davem, deller, geert, gerg, green.hu, guoren, gxt, heiko.carstens, hoan, James.Bottomley, jcmvbkbc, ley.foon.tan, linux-mm, linux, mattst88, mhocko, mm-commits, monstr, mpe, msalter, nickhu, paul.walmsley, richard, rppt, shorne, tony.luck, torvalds, tsbogend, vgupta, ysato From: Mike Rapoport <rppt@linux.ibm.com> Subject: mm: make early_pfn_to_nid() and related definitions close to each other early_pfn_to_nid() and its helper __early_pfn_to_nid() are spread around include/linux/mm.h, include/linux/mmzone.h and mm/page_alloc.c. Drop the unused stub for __early_pfn_to_nid() and move its actual generic implementation close to its users. Link: http://lkml.kernel.org/r/20200412194859.12663-3-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Reviewed-by: Baoquan He <bhe@redhat.com> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. 
Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/mm.h | 4 +-- include/linux/mmzone.h | 9 ------ mm/page_alloc.c | 51 +++++++++++++++++++-------------------- 3 files changed, 27 insertions(+), 37 deletions(-) --- a/include/linux/mm.h~mm-make-early_pfn_to_nid-and-related-defintions-close-to-each-other +++ a/include/linux/mm.h @@ -2445,9 +2445,9 @@ extern void sparse_memory_present_with_a #if !defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP) && \ !defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID) -static inline int __early_pfn_to_nid(unsigned long pfn, - struct mminit_pfnnid_cache *state) +static inline int early_pfn_to_nid(unsigned long pfn) { + BUILD_BUG_ON(IS_ENABLED(CONFIG_NUMA)); return 0; } #else --- a/include/linux/mmzone.h~mm-make-early_pfn_to_nid-and-related-defintions-close-to-each-other +++ a/include/linux/mmzone.h @@ -1080,15 +1080,6 @@ static inline struct zoneref *first_zone #include <asm/sparsemem.h> #endif -#if !defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID) && \ - !defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP) -static inline unsigned long early_pfn_to_nid(unsigned long pfn) -{ - BUILD_BUG_ON(IS_ENABLED(CONFIG_NUMA)); - return 0; -} -#endif - #ifdef CONFIG_FLATMEM #define pfn_to_nid(pfn) (0) #endif --- 
a/mm/page_alloc.c~mm-make-early_pfn_to_nid-and-related-defintions-close-to-each-other +++ a/mm/page_alloc.c @@ -1504,6 +1504,31 @@ void __free_pages_core(struct page *page static struct mminit_pfnnid_cache early_pfnnid_cache __meminitdata; +#ifndef CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID + +/* + * Required by SPARSEMEM. Given a PFN, return what node the PFN is on. + */ +int __meminit __early_pfn_to_nid(unsigned long pfn, + struct mminit_pfnnid_cache *state) +{ + unsigned long start_pfn, end_pfn; + int nid; + + if (state->last_start <= pfn && pfn < state->last_end) + return state->last_nid; + + nid = memblock_search_pfn_nid(pfn, &start_pfn, &end_pfn); + if (nid != NUMA_NO_NODE) { + state->last_start = start_pfn; + state->last_end = end_pfn; + state->last_nid = nid; + } + + return nid; +} +#endif /* CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID */ + int __meminit early_pfn_to_nid(unsigned long pfn) { static DEFINE_SPINLOCK(early_pfn_lock); @@ -6310,32 +6335,6 @@ void __meminit init_currently_empty_zone zone->initialized = 1; } -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP -#ifndef CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID - -/* - * Required by SPARSEMEM. Given a PFN, return what node the PFN is on. - */ -int __meminit __early_pfn_to_nid(unsigned long pfn, - struct mminit_pfnnid_cache *state) -{ - unsigned long start_pfn, end_pfn; - int nid; - - if (state->last_start <= pfn && pfn < state->last_end) - return state->last_nid; - - nid = memblock_search_pfn_nid(pfn, &start_pfn, &end_pfn); - if (nid != NUMA_NO_NODE) { - state->last_start = start_pfn; - state->last_end = end_pfn; - state->last_nid = nid; - } - - return nid; -} -#endif /* CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID */ - /** * free_bootmem_with_active_regions - Call memblock_free_early_nid for each active range * @nid: The node to free memory on. If MAX_NUMNODES, all nodes are freed. _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 013/131] mm: remove CONFIG_HAVE_MEMBLOCK_NODE_MAP option 2020-06-03 22:55 incoming Andrew Morton ` (11 preceding siblings ...) 2020-06-03 22:56 ` [patch 012/131] mm: make early_pfn_to_nid() and related definitions close to each other Andrew Morton @ 2020-06-03 22:57 ` Andrew Morton 2020-06-03 22:57 ` [patch 014/131] mm: free_area_init: use maximal zone PFNs rather than zone sizes Andrew Morton ` (118 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:57 UTC (permalink / raw) To: akpm, bcain, bhe, catalin.marinas, corbet, dalias, davem, deller, geert, gerg, green.hu, guoren, gxt, heiko.carstens, hoan, James.Bottomley, jcmvbkbc, ley.foon.tan, linux-mm, linux, mattst88, mhocko, mm-commits, monstr, mpe, msalter, nickhu, paul.walmsley, richard, rppt, shorne, tony.luck, torvalds, tsbogend, vgupta, ysato From: Mike Rapoport <rppt@linux.ibm.com> Subject: mm: remove CONFIG_HAVE_MEMBLOCK_NODE_MAP option CONFIG_HAVE_MEMBLOCK_NODE_MAP is used to differentiate initialization of nodes and zones structures between the systems that have region-to-node mapping in memblock and those that don't. Currently all the NUMA architectures enable this option, and for non-NUMA systems we can presume that all the memory belongs to node 0, so the compile-time configuration option is not required. The remaining few architectures that use DISCONTIGMEM without NUMA are easily updated to use memblock_add_node() instead of memblock_add() and thus have proper correspondence of memblock regions to NUMA nodes. Still, free_area_init_node() must have a backward compatible version because its semantics with and without CONFIG_HAVE_MEMBLOCK_NODE_MAP are different. Once all the architectures use the new semantics, the entire compatibility layer can be dropped. 
To avoid addition of extra run time memory to store node id for architectures that keep memblock but have only a single node, the node id field of the memblock_region is guarded by CONFIG_NEED_MULTIPLE_NODES and the corresponding accessors presume that in those cases it is always 0. Link: http://lkml.kernel.org/r/20200412194859.12663-4-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arm64] Cc: Baoquan He <bhe@redhat.com> Cc: Brian Cain <bcain@codeaurora.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- Documentation/features/vm/numa-memblock/arch-support.txt | 34 --- arch/alpha/mm/numa.c | 4 arch/arm64/Kconfig | 1 arch/ia64/Kconfig | 1 arch/m68k/mm/motorola.c | 4 arch/microblaze/Kconfig | 1 arch/mips/Kconfig | 1 arch/powerpc/Kconfig | 1 
arch/riscv/Kconfig | 1 arch/s390/Kconfig | 1 arch/sh/Kconfig | 1 arch/sparc/Kconfig | 1 arch/x86/Kconfig | 1 include/linux/memblock.h | 8 include/linux/mm.h | 12 - include/linux/mmzone.h | 2 mm/Kconfig | 3 mm/memblock.c | 11 - mm/memory_hotplug.c | 4 mm/page_alloc.c | 101 +++++----- 20 files changed, 74 insertions(+), 119 deletions(-) --- a/arch/alpha/mm/numa.c~mm-remove-config_have_memblock_node_map-option +++ a/arch/alpha/mm/numa.c @@ -144,8 +144,8 @@ setup_memory_node(int nid, void *kernel_ if (!nid && (node_max_pfn < end_kernel_pfn || node_min_pfn > start_kernel_pfn)) panic("kernel loaded out of ram"); - memblock_add(PFN_PHYS(node_min_pfn), - (node_max_pfn - node_min_pfn) << PAGE_SHIFT); + memblock_add_node(PFN_PHYS(node_min_pfn), + (node_max_pfn - node_min_pfn) << PAGE_SHIFT, nid); /* Zone start phys-addr must be 2^(MAX_ORDER-1) aligned. Note that we round this down, not up - node memory --- a/arch/arm64/Kconfig~mm-remove-config_have_memblock_node_map-option +++ a/arch/arm64/Kconfig @@ -162,7 +162,6 @@ config ARM64 select HAVE_GCC_PLUGINS select HAVE_HW_BREAKPOINT if PERF_EVENTS select HAVE_IRQ_TIME_ACCOUNTING - select HAVE_MEMBLOCK_NODE_MAP if NUMA select HAVE_NMI select HAVE_PATA_PLATFORM select HAVE_PERF_EVENTS --- a/arch/ia64/Kconfig~mm-remove-config_have_memblock_node_map-option +++ a/arch/ia64/Kconfig @@ -31,7 +31,6 @@ config IA64 select HAVE_FUNCTION_TRACER select TTY select HAVE_ARCH_TRACEHOOK - select HAVE_MEMBLOCK_NODE_MAP select HAVE_VIRT_CPU_ACCOUNTING select DMA_NONCOHERENT_MMAP select ARCH_HAS_SYNC_DMA_FOR_CPU --- a/arch/m68k/mm/motorola.c~mm-remove-config_have_memblock_node_map-option +++ a/arch/m68k/mm/motorola.c @@ -386,7 +386,7 @@ void __init paging_init(void) min_addr = m68k_memory[0].addr; max_addr = min_addr + m68k_memory[0].size; - memblock_add(m68k_memory[0].addr, m68k_memory[0].size); + memblock_add_node(m68k_memory[0].addr, m68k_memory[0].size, 0); for (i = 1; i < m68k_num_memory;) { if (m68k_memory[i].addr < min_addr) { 
printk("Ignoring memory chunk at 0x%lx:0x%lx before the first chunk\n", @@ -397,7 +397,7 @@ void __init paging_init(void) (m68k_num_memory - i) * sizeof(struct m68k_mem_info)); continue; } - memblock_add(m68k_memory[i].addr, m68k_memory[i].size); + memblock_add_node(m68k_memory[i].addr, m68k_memory[i].size, i); addr = m68k_memory[i].addr + m68k_memory[i].size; if (addr > max_addr) max_addr = addr; --- a/arch/microblaze/Kconfig~mm-remove-config_have_memblock_node_map-option +++ a/arch/microblaze/Kconfig @@ -32,7 +32,6 @@ config MICROBLAZE select HAVE_FTRACE_MCOUNT_RECORD select HAVE_FUNCTION_GRAPH_TRACER select HAVE_FUNCTION_TRACER - select HAVE_MEMBLOCK_NODE_MAP select HAVE_OPROFILE select HAVE_PCI select IRQ_DOMAIN --- a/arch/mips/Kconfig~mm-remove-config_have_memblock_node_map-option +++ a/arch/mips/Kconfig @@ -72,7 +72,6 @@ config MIPS select HAVE_KPROBES select HAVE_KRETPROBES select HAVE_LD_DEAD_CODE_DATA_ELIMINATION - select HAVE_MEMBLOCK_NODE_MAP select HAVE_MOD_ARCH_SPECIFIC select HAVE_NMI select HAVE_OPROFILE --- a/arch/powerpc/Kconfig~mm-remove-config_have_memblock_node_map-option +++ a/arch/powerpc/Kconfig @@ -211,7 +211,6 @@ config PPC select HAVE_KRETPROBES select HAVE_LD_DEAD_CODE_DATA_ELIMINATION select HAVE_LIVEPATCH if HAVE_DYNAMIC_FTRACE_WITH_REGS - select HAVE_MEMBLOCK_NODE_MAP select HAVE_MOD_ARCH_SPECIFIC select HAVE_NMI if PERF_EVENTS || (PPC64 && PPC_BOOK3S) select HAVE_HARDLOCKUP_DETECTOR_ARCH if (PPC64 && PPC_BOOK3S) --- a/arch/riscv/Kconfig~mm-remove-config_have_memblock_node_map-option +++ a/arch/riscv/Kconfig @@ -32,7 +32,6 @@ config RISCV select HAVE_ARCH_AUDITSYSCALL select HAVE_ARCH_SECCOMP_FILTER select HAVE_ASM_MODVERSIONS - select HAVE_MEMBLOCK_NODE_MAP select HAVE_DMA_CONTIGUOUS if MMU select HAVE_FUTEX_CMPXCHG if FUTEX select HAVE_PERF_EVENTS --- a/arch/s390/Kconfig~mm-remove-config_have_memblock_node_map-option +++ a/arch/s390/Kconfig @@ -162,7 +162,6 @@ config S390 select HAVE_LIVEPATCH select HAVE_PERF_REGS select 
HAVE_PERF_USER_STACK_DUMP - select HAVE_MEMBLOCK_NODE_MAP select HAVE_MEMBLOCK_PHYS_MAP select MMU_GATHER_NO_GATHER select HAVE_MOD_ARCH_SPECIFIC --- a/arch/sh/Kconfig~mm-remove-config_have_memblock_node_map-option +++ a/arch/sh/Kconfig @@ -9,7 +9,6 @@ config SUPERH select CLKDEV_LOOKUP select DMA_DECLARE_COHERENT select HAVE_IDE if HAS_IOPORT_MAP - select HAVE_MEMBLOCK_NODE_MAP select HAVE_OPROFILE select HAVE_ARCH_TRACEHOOK select HAVE_PERF_EVENTS --- a/arch/sparc/Kconfig~mm-remove-config_have_memblock_node_map-option +++ a/arch/sparc/Kconfig @@ -65,7 +65,6 @@ config SPARC64 select HAVE_KRETPROBES select HAVE_KPROBES select MMU_GATHER_RCU_TABLE_FREE if SMP - select HAVE_MEMBLOCK_NODE_MAP select HAVE_ARCH_TRANSPARENT_HUGEPAGE select HAVE_DYNAMIC_FTRACE select HAVE_FTRACE_MCOUNT_RECORD --- a/arch/x86/Kconfig~mm-remove-config_have_memblock_node_map-option +++ a/arch/x86/Kconfig @@ -192,7 +192,6 @@ config X86 select HAVE_KRETPROBES select HAVE_KVM select HAVE_LIVEPATCH if X86_64 - select HAVE_MEMBLOCK_NODE_MAP select HAVE_MIXED_BREAKPOINTS_REGS select HAVE_MOD_ARCH_SPECIFIC select HAVE_MOVE_PMD --- a/Documentation/features/vm/numa-memblock/arch-support.txt +++ /dev/null @@ -1,34 +0,0 @@ -# -# Feature name: numa-memblock -# Kconfig: HAVE_MEMBLOCK_NODE_MAP -# description: arch supports NUMA aware memblocks -# - ----------------------- - | arch |status| - ----------------------- - | alpha: | TODO | - | arc: | .. | - | arm: | .. | - | arm64: | ok | - | c6x: | .. | - | csky: | .. | - | h8300: | .. | - | hexagon: | .. | - | ia64: | ok | - | m68k: | .. | - | microblaze: | ok | - | mips: | ok | - | nds32: | TODO | - | nios2: | .. | - | openrisc: | .. | - | parisc: | .. | - | powerpc: | ok | - | riscv: | ok | - | s390: | ok | - | sh: | ok | - | sparc: | ok | - | um: | .. | - | unicore32: | .. | - | x86: | ok | - | xtensa: | .. 
| - ----------------------- --- a/include/linux/memblock.h~mm-remove-config_have_memblock_node_map-option +++ a/include/linux/memblock.h @@ -50,7 +50,7 @@ struct memblock_region { phys_addr_t base; phys_addr_t size; enum memblock_flags flags; -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP +#ifdef CONFIG_NEED_MULTIPLE_NODES int nid; #endif }; @@ -215,7 +215,6 @@ static inline bool memblock_is_nomap(str return m->flags & MEMBLOCK_NOMAP; } -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP int memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn, unsigned long *end_pfn); void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn, @@ -234,7 +233,6 @@ void __next_mem_pfn_range(int *idx, int #define for_each_mem_pfn_range(i, nid, p_start, p_end, p_nid) \ for (i = -1, __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid); \ i >= 0; __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid)) -#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */ #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT void __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone, @@ -310,10 +308,10 @@ void __next_mem_pfn_range_in_zone(u64 *i for_each_mem_range_rev(i, &memblock.memory, &memblock.reserved, \ nid, flags, p_start, p_end, p_nid) -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP int memblock_set_node(phys_addr_t base, phys_addr_t size, struct memblock_type *type, int nid); +#ifdef CONFIG_NEED_MULTIPLE_NODES static inline void memblock_set_region_node(struct memblock_region *r, int nid) { r->nid = nid; @@ -332,7 +330,7 @@ static inline int memblock_get_region_no { return 0; } -#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */ +#endif /* CONFIG_NEED_MULTIPLE_NODES */ /* Flags for memblock allocation APIs */ #define MEMBLOCK_ALLOC_ANYWHERE (~(phys_addr_t)0) --- a/include/linux/mm.h~mm-remove-config_have_memblock_node_map-option +++ a/include/linux/mm.h @@ -2401,9 +2401,8 @@ static inline unsigned long get_num_phys return phys_pages; } -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP /* - * With CONFIG_HAVE_MEMBLOCK_NODE_MAP set, an 
architecture may initialise its + * Using memblock node mappings, an architecture may initialise its * zones, allocate the backing mem_map and account for memory holes in a more * architecture independent manner. This is a substitute for creating the * zone_sizes[] and zholes_size[] arrays and passing them to @@ -2424,9 +2423,6 @@ static inline unsigned long get_num_phys * registered physical page range. Similarly * sparse_memory_present_with_active_regions() calls memory_present() for * each range when SPARSEMEM is enabled. - * - * See mm/page_alloc.c for more information on each function exposed by - * CONFIG_HAVE_MEMBLOCK_NODE_MAP. */ extern void free_area_init_nodes(unsigned long *max_zone_pfn); unsigned long node_map_pfn_alignment(void); @@ -2441,13 +2437,9 @@ extern void free_bootmem_with_active_reg unsigned long max_low_pfn); extern void sparse_memory_present_with_active_regions(int nid); -#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */ - -#if !defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP) && \ - !defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID) +#ifndef CONFIG_NEED_MULTIPLE_NODES static inline int early_pfn_to_nid(unsigned long pfn) { - BUILD_BUG_ON(IS_ENABLED(CONFIG_NUMA)); return 0; } #else --- a/include/linux/mmzone.h~mm-remove-config_have_memblock_node_map-option +++ a/include/linux/mmzone.h @@ -876,7 +876,7 @@ extern int movable_zone; #ifdef CONFIG_HIGHMEM static inline int zone_movable_is_highmem(void) { -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP +#ifdef CONFIG_NEED_MULTIPLE_NODES return movable_zone == ZONE_HIGHMEM; #else return (ZONE_MOVABLE - 1) == ZONE_HIGHMEM; --- a/mm/Kconfig~mm-remove-config_have_memblock_node_map-option +++ a/mm/Kconfig @@ -126,9 +126,6 @@ config SPARSEMEM_VMEMMAP pfn_to_page and page_to_pfn operations. This is the most efficient option when sufficient kernel resources are available. 
-config HAVE_MEMBLOCK_NODE_MAP - bool - config HAVE_MEMBLOCK_PHYS_MAP bool --- a/mm/memblock.c~mm-remove-config_have_memblock_node_map-option +++ a/mm/memblock.c @@ -620,7 +620,7 @@ repeat: * area, insert that portion. */ if (rbase > base) { -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP +#ifdef CONFIG_NEED_MULTIPLE_NODES WARN_ON(nid != memblock_get_region_node(rgn)); #endif WARN_ON(flags != rgn->flags); @@ -1197,7 +1197,6 @@ void __init_memblock __next_mem_range_re *idx = ULLONG_MAX; } -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP /* * Common iterator interface used to define for_each_mem_pfn_range(). */ @@ -1247,6 +1246,7 @@ void __init_memblock __next_mem_pfn_rang int __init_memblock memblock_set_node(phys_addr_t base, phys_addr_t size, struct memblock_type *type, int nid) { +#ifdef CONFIG_NEED_MULTIPLE_NODES int start_rgn, end_rgn; int i, ret; @@ -1258,9 +1258,10 @@ int __init_memblock memblock_set_node(ph memblock_set_region_node(&type->regions[i], nid); memblock_merge_regions(type); +#endif return 0; } -#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */ + #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT /** * __next_mem_pfn_range_in_zone - iterator for for_each_*_range_in_zone() @@ -1799,7 +1800,6 @@ bool __init_memblock memblock_is_map_mem return !memblock_is_nomap(&memblock.memory.regions[i]); } -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP int __init_memblock memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn, unsigned long *end_pfn) { @@ -1814,7 +1814,6 @@ int __init_memblock memblock_search_pfn_ return memblock_get_region_node(&type->regions[mid]); } -#endif /** * memblock_is_region_memory - check if a region is a subset of memory @@ -1905,7 +1904,7 @@ static void __init_memblock memblock_dum size = rgn->size; end = base + size - 1; flags = rgn->flags; -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP +#ifdef CONFIG_NEED_MULTIPLE_NODES if (memblock_get_region_node(rgn) != MAX_NUMNODES) snprintf(nid_buf, sizeof(nid_buf), " on node %d", memblock_get_region_node(rgn)); --- 
a/mm/memory_hotplug.c~mm-remove-config_have_memblock_node_map-option +++ a/mm/memory_hotplug.c @@ -1372,11 +1372,7 @@ check_pages_isolated_cb(unsigned long st static int __init cmdline_parse_movable_node(char *p) { -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP movable_node_enabled = true; -#else - pr_warn("movable_node parameter depends on CONFIG_HAVE_MEMBLOCK_NODE_MAP to work properly\n"); -#endif return 0; } early_param("movable_node", cmdline_parse_movable_node); --- a/mm/page_alloc.c~mm-remove-config_have_memblock_node_map-option +++ a/mm/page_alloc.c @@ -335,7 +335,6 @@ static unsigned long nr_kernel_pages __i static unsigned long nr_all_pages __initdata; static unsigned long dma_reserve __initdata; -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP static unsigned long arch_zone_lowest_possible_pfn[MAX_NR_ZONES] __initdata; static unsigned long arch_zone_highest_possible_pfn[MAX_NR_ZONES] __initdata; static unsigned long required_kernelcore __initdata; @@ -348,7 +347,6 @@ static bool mirrored_kernelcore __memini /* movable_zone is the "real" zone pages in ZONE_MOVABLE are taken from */ int movable_zone; EXPORT_SYMBOL(movable_zone); -#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */ #if MAX_NUMNODES > 1 unsigned int nr_node_ids __read_mostly = MAX_NUMNODES; @@ -1499,8 +1497,7 @@ void __free_pages_core(struct page *page __free_pages(page, order); } -#if defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID) || \ - defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP) +#ifdef CONFIG_NEED_MULTIPLE_NODES static struct mminit_pfnnid_cache early_pfnnid_cache __meminitdata; @@ -1542,7 +1539,7 @@ int __meminit early_pfn_to_nid(unsigned return nid; } -#endif +#endif /* CONFIG_NEED_MULTIPLE_NODES */ #ifdef CONFIG_NODES_SPAN_OTHER_NODES /* Only safe to use early in boot when initialisation is single-threaded */ @@ -5936,7 +5933,6 @@ void __ref build_all_zonelists(pg_data_t static bool __meminit overlap_memmap_init(unsigned long zone, unsigned long *pfn) { -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP static struct memblock_region 
*r; if (mirrored_kernelcore && zone == ZONE_MOVABLE) { @@ -5952,7 +5948,6 @@ overlap_memmap_init(unsigned long zone, return true; } } -#endif return false; } @@ -6585,8 +6580,7 @@ static unsigned long __init zone_absent_ return nr_absent; } -#else /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */ -static inline unsigned long __init zone_spanned_pages_in_node(int nid, +static inline unsigned long __init compat_zone_spanned_pages_in_node(int nid, unsigned long zone_type, unsigned long node_start_pfn, unsigned long node_end_pfn, @@ -6605,7 +6599,7 @@ static inline unsigned long __init zone_ return zones_size[zone_type]; } -static inline unsigned long __init zone_absent_pages_in_node(int nid, +static inline unsigned long __init compat_zone_absent_pages_in_node(int nid, unsigned long zone_type, unsigned long node_start_pfn, unsigned long node_end_pfn, @@ -6617,13 +6611,12 @@ static inline unsigned long __init zone_ return zholes_size[zone_type]; } -#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */ - static void __init calculate_node_totalpages(struct pglist_data *pgdat, unsigned long node_start_pfn, unsigned long node_end_pfn, unsigned long *zones_size, - unsigned long *zholes_size) + unsigned long *zholes_size, + bool compat) { unsigned long realtotalpages = 0, totalpages = 0; enum zone_type i; @@ -6631,17 +6624,38 @@ static void __init calculate_node_totalp for (i = 0; i < MAX_NR_ZONES; i++) { struct zone *zone = pgdat->node_zones + i; unsigned long zone_start_pfn, zone_end_pfn; + unsigned long spanned, absent; unsigned long size, real_size; - size = zone_spanned_pages_in_node(pgdat->node_id, i, - node_start_pfn, - node_end_pfn, - &zone_start_pfn, - &zone_end_pfn, - zones_size); - real_size = size - zone_absent_pages_in_node(pgdat->node_id, i, - node_start_pfn, node_end_pfn, - zholes_size); + if (compat) { + spanned = compat_zone_spanned_pages_in_node( + pgdat->node_id, i, + node_start_pfn, + node_end_pfn, + &zone_start_pfn, + &zone_end_pfn, + zones_size); + absent = 
compat_zone_absent_pages_in_node( + pgdat->node_id, i, + node_start_pfn, + node_end_pfn, + zholes_size); + } else { + spanned = zone_spanned_pages_in_node(pgdat->node_id, i, + node_start_pfn, + node_end_pfn, + &zone_start_pfn, + &zone_end_pfn, + zones_size); + absent = zone_absent_pages_in_node(pgdat->node_id, i, + node_start_pfn, + node_end_pfn, + zholes_size); + } + + size = spanned; + real_size = size - absent; + if (size) zone->zone_start_pfn = zone_start_pfn; else @@ -6941,10 +6955,8 @@ static void __ref alloc_node_mem_map(str */ if (pgdat == NODE_DATA(0)) { mem_map = NODE_DATA(0)->node_mem_map; -#if defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP) || defined(CONFIG_FLATMEM) if (page_to_pfn(mem_map) != pgdat->node_start_pfn) mem_map -= offset; -#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */ } #endif } @@ -6961,9 +6973,10 @@ static inline void pgdat_set_deferred_ra static inline void pgdat_set_deferred_range(pg_data_t *pgdat) {} #endif -void __init free_area_init_node(int nid, unsigned long *zones_size, - unsigned long node_start_pfn, - unsigned long *zholes_size) +static void __init __free_area_init_node(int nid, unsigned long *zones_size, + unsigned long node_start_pfn, + unsigned long *zholes_size, + bool compat) { pg_data_t *pgdat = NODE_DATA(nid); unsigned long start_pfn = 0; @@ -6975,16 +6988,16 @@ void __init free_area_init_node(int nid, pgdat->node_id = nid; pgdat->node_start_pfn = node_start_pfn; pgdat->per_cpu_nodestats = NULL; -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP - get_pfn_range_for_nid(nid, &start_pfn, &end_pfn); - pr_info("Initmem setup node %d [mem %#018Lx-%#018Lx]\n", nid, - (u64)start_pfn << PAGE_SHIFT, - end_pfn ? ((u64)end_pfn << PAGE_SHIFT) - 1 : 0); -#else - start_pfn = node_start_pfn; -#endif + if (!compat) { + get_pfn_range_for_nid(nid, &start_pfn, &end_pfn); + pr_info("Initmem setup node %d [mem %#018Lx-%#018Lx]\n", nid, + (u64)start_pfn << PAGE_SHIFT, + end_pfn ? 
((u64)end_pfn << PAGE_SHIFT) - 1 : 0); + } else { + start_pfn = node_start_pfn; + } calculate_node_totalpages(pgdat, start_pfn, end_pfn, - zones_size, zholes_size); + zones_size, zholes_size, compat); alloc_node_mem_map(pgdat); pgdat_set_deferred_range(pgdat); @@ -6992,6 +7005,14 @@ void __init free_area_init_node(int nid, free_area_init_core(pgdat); } +void __init free_area_init_node(int nid, unsigned long *zones_size, + unsigned long node_start_pfn, + unsigned long *zholes_size) +{ + __free_area_init_node(nid, zones_size, node_start_pfn, zholes_size, + true); +} + #if !defined(CONFIG_FLAT_NODE_MEM_MAP) /* * Initialize all valid struct pages in the range [spfn, epfn) and mark them @@ -7075,8 +7096,6 @@ static inline void __init init_unavailab } #endif /* !CONFIG_FLAT_NODE_MEM_MAP */ -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP - #if MAX_NUMNODES > 1 /* * Figure out the number of possible node ids. @@ -7505,8 +7524,8 @@ void __init free_area_init_nodes(unsigne init_unavailable_mem(); for_each_online_node(nid) { pg_data_t *pgdat = NODE_DATA(nid); - free_area_init_node(nid, NULL, - find_min_pfn_for_node(nid), NULL); + __free_area_init_node(nid, NULL, + find_min_pfn_for_node(nid), NULL, false); /* Any memory on that node */ if (pgdat->node_present_pages) @@ -7571,8 +7590,6 @@ static int __init cmdline_parse_movablec early_param("kernelcore", cmdline_parse_kernelcore); early_param("movablecore", cmdline_parse_movablecore); -#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */ - void adjust_managed_page_count(struct page *page, long count) { atomic_long_add(count, &page_zone(page)->managed_pages); _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 014/131] mm: free_area_init: use maximal zone PFNs rather than zone sizes 2020-06-03 22:55 incoming Andrew Morton ` (12 preceding siblings ...) 2020-06-03 22:57 ` [patch 013/131] mm: remove CONFIG_HAVE_MEMBLOCK_NODE_MAP option Andrew Morton @ 2020-06-03 22:57 ` Andrew Morton 2020-06-03 22:57 ` [patch 015/131] mm: use free_area_init() instead of free_area_init_nodes() Andrew Morton ` (117 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:57 UTC (permalink / raw) To: akpm, bcain, bhe, catalin.marinas, corbet, dalias, davem, deller, geert, gerg, green.hu, guoren, gxt, heiko.carstens, hoan, James.Bottomley, jcmvbkbc, ley.foon.tan, linux-mm, linux, mattst88, mhocko, mm-commits, monstr, mpe, msalter, nickhu, paul.walmsley, richard, rppt, shorne, tony.luck, torvalds, tsbogend, vgupta, ysato From: Mike Rapoport <rppt@linux.ibm.com> Subject: mm: free_area_init: use maximal zone PFNs rather than zone sizes Currently, architectures that use free_area_init() to initialize the memory map and the node and zone structures need to calculate zone and hole sizes themselves. We can use free_area_init_nodes() instead and let it detect the zone boundaries, while the architectures only have to supply the possible limits for the zones. Link: http://lkml.kernel.org/r/20200412194859.12663-5-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Reviewed-by: Baoquan He <bhe@redhat.com> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. 
Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/alpha/mm/init.c | 16 ++++++---------- arch/c6x/mm/init.c | 8 +++----- arch/h8300/mm/init.c | 6 +++--- arch/hexagon/mm/init.c | 6 +++--- arch/m68k/mm/init.c | 6 +++--- arch/m68k/mm/mcfmmu.c | 9 +++------ arch/nds32/mm/init.c | 11 ++++------- arch/nios2/mm/init.c | 8 +++----- arch/openrisc/mm/init.c | 9 +++------ arch/um/kernel/mem.c | 12 ++++-------- include/linux/mm.h | 2 +- mm/page_alloc.c | 5 ++--- 12 files changed, 38 insertions(+), 60 deletions(-) --- a/arch/alpha/mm/init.c~mm-free_area_init-use-maximal-zone-pfns-rather-than-zone-sizes +++ a/arch/alpha/mm/init.c @@ -243,21 +243,17 @@ callback_init(void * kernel_end) */ void __init paging_init(void) { - unsigned long zones_size[MAX_NR_ZONES] = {0, }; - unsigned long dma_pfn, high_pfn; + unsigned long max_zone_pfn[MAX_NR_ZONES] = {0, }; + unsigned long dma_pfn; dma_pfn = virt_to_phys((char *)MAX_DMA_ADDRESS) >> PAGE_SHIFT; - high_pfn = max_pfn = max_low_pfn; + max_pfn = max_low_pfn; - if (dma_pfn >= high_pfn) - zones_size[ZONE_DMA] = high_pfn; - else { - zones_size[ZONE_DMA] = dma_pfn; - zones_size[ZONE_NORMAL] = high_pfn - dma_pfn; - } + max_zone_pfn[ZONE_DMA] = dma_pfn; + 
max_zone_pfn[ZONE_NORMAL] = max_pfn; /* Initialize mem_map[]. */ - free_area_init(zones_size); + free_area_init(max_zone_pfn); /* Initialize the kernel's ZERO_PGE. */ memset((void *)ZERO_PGE, 0, PAGE_SIZE); --- a/arch/c6x/mm/init.c~mm-free_area_init-use-maximal-zone-pfns-rather-than-zone-sizes +++ a/arch/c6x/mm/init.c @@ -33,7 +33,7 @@ EXPORT_SYMBOL(empty_zero_page); void __init paging_init(void) { struct pglist_data *pgdat = NODE_DATA(0); - unsigned long zones_size[MAX_NR_ZONES] = {0, }; + unsigned long max_zone_pfn[MAX_NR_ZONES] = {0, }; empty_zero_page = (unsigned long) memblock_alloc(PAGE_SIZE, PAGE_SIZE); @@ -49,11 +49,9 @@ void __init paging_init(void) /* * Define zones */ - zones_size[ZONE_NORMAL] = (memory_end - PAGE_OFFSET) >> PAGE_SHIFT; - pgdat->node_zones[ZONE_NORMAL].zone_start_pfn = - __pa(PAGE_OFFSET) >> PAGE_SHIFT; + max_zone_pfn[ZONE_NORMAL] = memory_end >> PAGE_SHIFT; - free_area_init(zones_size); + free_area_init(max_zone_pfn); } void __init mem_init(void) --- a/arch/h8300/mm/init.c~mm-free_area_init-use-maximal-zone-pfns-rather-than-zone-sizes +++ a/arch/h8300/mm/init.c @@ -83,10 +83,10 @@ void __init paging_init(void) start_mem, end_mem); { - unsigned long zones_size[MAX_NR_ZONES] = {0, }; + unsigned long max_zone_pfn[MAX_NR_ZONES] = {0, }; - zones_size[ZONE_NORMAL] = (end_mem - PAGE_OFFSET) >> PAGE_SHIFT; - free_area_init(zones_size); + max_zone_pfn[ZONE_NORMAL] = end_mem >> PAGE_SHIFT; + free_area_init(max_zone_pfn); } } --- a/arch/hexagon/mm/init.c~mm-free_area_init-use-maximal-zone-pfns-rather-than-zone-sizes +++ a/arch/hexagon/mm/init.c @@ -91,7 +91,7 @@ void sync_icache_dcache(pte_t pte) */ void __init paging_init(void) { - unsigned long zones_sizes[MAX_NR_ZONES] = {0, }; + unsigned long max_zone_pfn[MAX_NR_ZONES] = {0, }; /* * This is not particularly well documented anywhere, but @@ -101,9 +101,9 @@ void __init paging_init(void) * adjust accordingly. 
*/ - zones_sizes[ZONE_NORMAL] = max_low_pfn; + max_zone_pfn[ZONE_NORMAL] = max_low_pfn; - free_area_init(zones_sizes); /* sets up the zonelists and mem_map */ + free_area_init(max_zone_pfn); /* sets up the zonelists and mem_map */ /* * Start of high memory area. Will probably need something more --- a/arch/m68k/mm/init.c~mm-free_area_init-use-maximal-zone-pfns-rather-than-zone-sizes +++ a/arch/m68k/mm/init.c @@ -84,7 +84,7 @@ void __init paging_init(void) * page_alloc get different views of the world. */ unsigned long end_mem = memory_end & PAGE_MASK; - unsigned long zones_size[MAX_NR_ZONES] = { 0, }; + unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, }; high_memory = (void *) end_mem; @@ -98,8 +98,8 @@ void __init paging_init(void) */ set_fs (USER_DS); - zones_size[ZONE_DMA] = (end_mem - PAGE_OFFSET) >> PAGE_SHIFT; - free_area_init(zones_size); + max_zone_pfn[ZONE_DMA] = end_mem >> PAGE_SHIFT; + free_area_init(max_zone_pfn); } #endif /* CONFIG_MMU */ --- a/arch/m68k/mm/mcfmmu.c~mm-free_area_init-use-maximal-zone-pfns-rather-than-zone-sizes +++ a/arch/m68k/mm/mcfmmu.c @@ -39,7 +39,7 @@ void __init paging_init(void) pte_t *pg_table; unsigned long address, size; unsigned long next_pgtable, bootmem_end; - unsigned long zones_size[MAX_NR_ZONES]; + unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 }; enum zone_type zone; int i; @@ -80,11 +80,8 @@ void __init paging_init(void) } current->mm = NULL; - - for (zone = 0; zone < MAX_NR_ZONES; zone++) - zones_size[zone] = 0x0; - zones_size[ZONE_DMA] = num_pages; - free_area_init(zones_size); + max_zone_pfn[ZONE_DMA] = PFN_DOWN(_ramend); + free_area_init(max_zone_pfn); } int cf_tlb_miss(struct pt_regs *regs, int write, int dtlb, int extension_word) --- a/arch/nds32/mm/init.c~mm-free_area_init-use-maximal-zone-pfns-rather-than-zone-sizes +++ a/arch/nds32/mm/init.c @@ -31,16 +31,13 @@ EXPORT_SYMBOL(empty_zero_page); static void __init zone_sizes_init(void) { - unsigned long zones_size[MAX_NR_ZONES]; + unsigned long 
max_zone_pfn[MAX_NR_ZONES] = { 0 }; - /* Clear the zone sizes */ - memset(zones_size, 0, sizeof(zones_size)); - - zones_size[ZONE_NORMAL] = max_low_pfn; + max_zone_pfn[ZONE_NORMAL] = max_low_pfn; #ifdef CONFIG_HIGHMEM - zones_size[ZONE_HIGHMEM] = max_pfn; + max_zone_pfn[ZONE_HIGHMEM] = max_pfn; #endif - free_area_init(zones_size); + free_area_init(max_zone_pfn); } --- a/arch/nios2/mm/init.c~mm-free_area_init-use-maximal-zone-pfns-rather-than-zone-sizes +++ a/arch/nios2/mm/init.c @@ -46,17 +46,15 @@ pgd_t *pgd_current; */ void __init paging_init(void) { - unsigned long zones_size[MAX_NR_ZONES]; - - memset(zones_size, 0, sizeof(zones_size)); + unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 }; pagetable_init(); pgd_current = swapper_pg_dir; - zones_size[ZONE_NORMAL] = max_mapnr; + max_zone_pfn[ZONE_NORMAL] = max_mapnr; /* pass the memory from the bootmem allocator to the main allocator */ - free_area_init(zones_size); + free_area_init(max_zone_pfn); flush_dcache_range((unsigned long)empty_zero_page, (unsigned long)empty_zero_page + PAGE_SIZE); --- a/arch/openrisc/mm/init.c~mm-free_area_init-use-maximal-zone-pfns-rather-than-zone-sizes +++ a/arch/openrisc/mm/init.c @@ -45,17 +45,14 @@ DEFINE_PER_CPU(struct mmu_gather, mmu_ga static void __init zone_sizes_init(void) { - unsigned long zones_size[MAX_NR_ZONES]; - - /* Clear the zone sizes */ - memset(zones_size, 0, sizeof(zones_size)); + unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 }; /* * We use only ZONE_NORMAL */ - zones_size[ZONE_NORMAL] = max_low_pfn; + max_zone_pfn[ZONE_NORMAL] = max_low_pfn; - free_area_init(zones_size); + free_area_init(max_zone_pfn); } extern const char _s_kernel_ro[], _e_kernel_ro[]; --- a/arch/um/kernel/mem.c~mm-free_area_init-use-maximal-zone-pfns-rather-than-zone-sizes +++ a/arch/um/kernel/mem.c @@ -158,8 +158,8 @@ static void __init fixaddr_user_init( vo void __init paging_init(void) { - unsigned long zones_size[MAX_NR_ZONES], vaddr; - int i; + unsigned long max_zone_pfn[MAX_NR_ZONES] = { 
0 }; + unsigned long vaddr; empty_zero_page = (unsigned long *) memblock_alloc_low(PAGE_SIZE, PAGE_SIZE); @@ -167,12 +167,8 @@ void __init paging_init(void) panic("%s: Failed to allocate %lu bytes align=%lx\n", __func__, PAGE_SIZE, PAGE_SIZE); - for (i = 0; i < ARRAY_SIZE(zones_size); i++) - zones_size[i] = 0; - - zones_size[ZONE_NORMAL] = (end_iomem >> PAGE_SHIFT) - - (uml_physmem >> PAGE_SHIFT); - free_area_init(zones_size); + max_zone_pfn[ZONE_NORMAL] = end_iomem >> PAGE_SHIFT; + free_area_init(max_zone_pfn); /* * Fixed mappings, only the page table structure has to be --- a/include/linux/mm.h~mm-free_area_init-use-maximal-zone-pfns-rather-than-zone-sizes +++ a/include/linux/mm.h @@ -2329,7 +2329,7 @@ static inline spinlock_t *pud_lock(struc } extern void __init pagecache_init(void); -extern void free_area_init(unsigned long * zones_size); +extern void free_area_init(unsigned long * max_zone_pfn); extern void __init free_area_init_node(int nid, unsigned long * zones_size, unsigned long zone_start_pfn, unsigned long *zholes_size); extern void free_initmem(void); --- a/mm/page_alloc.c~mm-free_area_init-use-maximal-zone-pfns-rather-than-zone-sizes +++ a/mm/page_alloc.c @@ -7712,11 +7712,10 @@ void __init set_dma_reserve(unsigned lon dma_reserve = new_dma_reserve; } -void __init free_area_init(unsigned long *zones_size) +void __init free_area_init(unsigned long *max_zone_pfn) { init_unavailable_mem(); - free_area_init_node(0, zones_size, - __pa(PAGE_OFFSET) >> PAGE_SHIFT, NULL); + free_area_init_nodes(max_zone_pfn); } static int page_alloc_cpu_dead(unsigned int cpu) _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 015/131] mm: use free_area_init() instead of free_area_init_nodes() 2020-06-03 22:55 incoming Andrew Morton ` (13 preceding siblings ...) 2020-06-03 22:57 ` [patch 014/131] mm: free_area_init: use maximal zone PFNs rather than zone sizes Andrew Morton @ 2020-06-03 22:57 ` Andrew Morton 2020-06-03 22:57 ` [patch 016/131] alpha: simplify detection of memory zone boundaries Andrew Morton ` (116 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:57 UTC (permalink / raw) To: akpm, bcain, bhe, catalin.marinas, corbet, dalias, davem, deller, geert, gerg, green.hu, guoren, gxt, heiko.carstens, hoan, James.Bottomley, jcmvbkbc, ley.foon.tan, linux-mm, linux, mattst88, mhocko, mm-commits, monstr, mpe, msalter, nickhu, paul.walmsley, richard, rppt, shorne, tony.luck, torvalds, tsbogend, vgupta, ysato From: Mike Rapoport <rppt@linux.ibm.com> Subject: mm: use free_area_init() instead of free_area_init_nodes() free_area_init() has effectively become a wrapper for free_area_init_nodes() and there is no point in keeping it. Still, the free_area_init() name is shorter and more general, as it does not imply the necessity to initialize multiple nodes. Rename free_area_init_nodes() to free_area_init(), update the callers and drop the old version of free_area_init(). Link: http://lkml.kernel.org/r/20200412194859.12663-6-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Reviewed-by: Baoquan He <bhe@redhat.com> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Brian Cain <bcain@codeaurora.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. 
Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/arm64/mm/init.c | 2 +- arch/ia64/mm/contig.c | 2 +- arch/ia64/mm/discontig.c | 2 +- arch/microblaze/mm/init.c | 2 +- arch/mips/loongson64/numa.c | 2 +- arch/mips/mm/init.c | 2 +- arch/mips/sgi-ip27/ip27-memory.c | 2 +- arch/powerpc/mm/mem.c | 2 +- arch/riscv/mm/init.c | 2 +- arch/s390/mm/init.c | 2 +- arch/sh/mm/init.c | 2 +- arch/sparc/mm/init_64.c | 2 +- arch/x86/mm/init.c | 2 +- include/linux/mm.h | 7 +++---- mm/page_alloc.c | 10 ++-------- 15 files changed, 18 insertions(+), 25 deletions(-) --- a/arch/arm64/mm/init.c~mm-use-free_area_init-instead-of-free_area_init_nodes +++ a/arch/arm64/mm/init.c @@ -206,7 +206,7 @@ static void __init zone_sizes_init(unsig #endif max_zone_pfns[ZONE_NORMAL] = max; - free_area_init_nodes(max_zone_pfns); + free_area_init(max_zone_pfns); } #else --- a/arch/ia64/mm/contig.c~mm-use-free_area_init-instead-of-free_area_init_nodes +++ a/arch/ia64/mm/contig.c @@ -210,6 +210,6 @@ paging_init (void) printk("Virtual mem_map starts at 0x%p\n", mem_map); } #endif /* !CONFIG_VIRTUAL_MEM_MAP */ - free_area_init_nodes(max_zone_pfns); + free_area_init(max_zone_pfns); zero_page_memmap_ptr = virt_to_page(ia64_imva(empty_zero_page)); 
} --- a/arch/ia64/mm/discontig.c~mm-use-free_area_init-instead-of-free_area_init_nodes +++ a/arch/ia64/mm/discontig.c @@ -627,7 +627,7 @@ void __init paging_init(void) max_zone_pfns[ZONE_DMA32] = max_dma; #endif max_zone_pfns[ZONE_NORMAL] = max_pfn; - free_area_init_nodes(max_zone_pfns); + free_area_init(max_zone_pfns); zero_page_memmap_ptr = virt_to_page(ia64_imva(empty_zero_page)); } --- a/arch/microblaze/mm/init.c~mm-use-free_area_init-instead-of-free_area_init_nodes +++ a/arch/microblaze/mm/init.c @@ -112,7 +112,7 @@ static void __init paging_init(void) #endif /* We don't have holes in memory map */ - free_area_init_nodes(zones_size); + free_area_init(zones_size); } void __init setup_memory(void) --- a/arch/mips/loongson64/numa.c~mm-use-free_area_init-instead-of-free_area_init_nodes +++ a/arch/mips/loongson64/numa.c @@ -247,7 +247,7 @@ void __init paging_init(void) zones_size[ZONE_DMA32] = MAX_DMA32_PFN; #endif zones_size[ZONE_NORMAL] = max_low_pfn; - free_area_init_nodes(zones_size); + free_area_init(zones_size); } void __init mem_init(void) --- a/arch/mips/mm/init.c~mm-use-free_area_init-instead-of-free_area_init_nodes +++ a/arch/mips/mm/init.c @@ -418,7 +418,7 @@ void __init paging_init(void) } #endif - free_area_init_nodes(max_zone_pfns); + free_area_init(max_zone_pfns); } #ifdef CONFIG_64BIT --- a/arch/mips/sgi-ip27/ip27-memory.c~mm-use-free_area_init-instead-of-free_area_init_nodes +++ a/arch/mips/sgi-ip27/ip27-memory.c @@ -419,7 +419,7 @@ void __init paging_init(void) pagetable_init(); zones_size[ZONE_NORMAL] = max_low_pfn; - free_area_init_nodes(zones_size); + free_area_init(zones_size); } void __init mem_init(void) --- a/arch/powerpc/mm/mem.c~mm-use-free_area_init-instead-of-free_area_init_nodes +++ a/arch/powerpc/mm/mem.c @@ -271,7 +271,7 @@ void __init paging_init(void) max_zone_pfns[ZONE_HIGHMEM] = max_pfn; #endif - free_area_init_nodes(max_zone_pfns); + free_area_init(max_zone_pfns); mark_nonram_nosave(); } --- 
a/arch/riscv/mm/init.c~mm-use-free_area_init-instead-of-free_area_init_nodes +++ a/arch/riscv/mm/init.c @@ -39,7 +39,7 @@ static void __init zone_sizes_init(void) #endif max_zone_pfns[ZONE_NORMAL] = max_low_pfn; - free_area_init_nodes(max_zone_pfns); + free_area_init(max_zone_pfns); } static void setup_zero_page(void) --- a/arch/s390/mm/init.c~mm-use-free_area_init-instead-of-free_area_init_nodes +++ a/arch/s390/mm/init.c @@ -122,7 +122,7 @@ void __init paging_init(void) memset(max_zone_pfns, 0, sizeof(max_zone_pfns)); max_zone_pfns[ZONE_DMA] = PFN_DOWN(MAX_DMA_ADDRESS); max_zone_pfns[ZONE_NORMAL] = max_low_pfn; - free_area_init_nodes(max_zone_pfns); + free_area_init(max_zone_pfns); } void mark_rodata_ro(void) --- a/arch/sh/mm/init.c~mm-use-free_area_init-instead-of-free_area_init_nodes +++ a/arch/sh/mm/init.c @@ -334,7 +334,7 @@ void __init paging_init(void) memset(max_zone_pfns, 0, sizeof(max_zone_pfns)); max_zone_pfns[ZONE_NORMAL] = max_low_pfn; - free_area_init_nodes(max_zone_pfns); + free_area_init(max_zone_pfns); } unsigned int mem_init_done = 0; --- a/arch/sparc/mm/init_64.c~mm-use-free_area_init-instead-of-free_area_init_nodes +++ a/arch/sparc/mm/init_64.c @@ -2488,7 +2488,7 @@ void __init paging_init(void) max_zone_pfns[ZONE_NORMAL] = end_pfn; - free_area_init_nodes(max_zone_pfns); + free_area_init(max_zone_pfns); } printk("Booting Linux...\n"); --- a/arch/x86/mm/init.c~mm-use-free_area_init-instead-of-free_area_init_nodes +++ a/arch/x86/mm/init.c @@ -947,7 +947,7 @@ void __init zone_sizes_init(void) max_zone_pfns[ZONE_HIGHMEM] = max_pfn; #endif - free_area_init_nodes(max_zone_pfns); + free_area_init(max_zone_pfns); } __visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tlb_state, cpu_tlbstate) = { --- a/include/linux/mm.h~mm-use-free_area_init-instead-of-free_area_init_nodes +++ a/include/linux/mm.h @@ -2329,7 +2329,6 @@ static inline spinlock_t *pud_lock(struc } extern void __init pagecache_init(void); -extern void free_area_init(unsigned long * max_zone_pfn); 
extern void __init free_area_init_node(int nid, unsigned long * zones_size, unsigned long zone_start_pfn, unsigned long *zholes_size); extern void free_initmem(void); @@ -2410,21 +2409,21 @@ static inline unsigned long get_num_phys * * An architecture is expected to register range of page frames backed by * physical memory with memblock_add[_node]() before calling - * free_area_init_nodes() passing in the PFN each zone ends at. At a basic + * free_area_init() passing in the PFN each zone ends at. At a basic * usage, an architecture is expected to do something like * * unsigned long max_zone_pfns[MAX_NR_ZONES] = {max_dma, max_normal_pfn, * max_highmem_pfn}; * for_each_valid_physical_page_range() * memblock_add_node(base, size, nid) - * free_area_init_nodes(max_zone_pfns); + * free_area_init(max_zone_pfns); * * free_bootmem_with_active_regions() calls free_bootmem_node() for each * registered physical page range. Similarly * sparse_memory_present_with_active_regions() calls memory_present() for * each range when SPARSEMEM is enabled. */ -extern void free_area_init_nodes(unsigned long *max_zone_pfn); +void free_area_init(unsigned long *max_zone_pfn); unsigned long node_map_pfn_alignment(void); unsigned long __absent_pages_in_range(int nid, unsigned long start_pfn, unsigned long end_pfn); --- a/mm/page_alloc.c~mm-use-free_area_init-instead-of-free_area_init_nodes +++ a/mm/page_alloc.c @@ -7440,7 +7440,7 @@ static void check_for_memory(pg_data_t * } /** - * free_area_init_nodes - Initialise all pg_data_t and zone data + * free_area_init - Initialise all pg_data_t and zone data * @max_zone_pfn: an array of max PFNs for each zone * * This will call free_area_init_node() for each active node in the system. @@ -7452,7 +7452,7 @@ static void check_for_memory(pg_data_t * * starts where the previous one ended. For example, ZONE_DMA32 starts * at arch_max_dma_pfn. 
*/ -void __init free_area_init_nodes(unsigned long *max_zone_pfn) +void __init free_area_init(unsigned long *max_zone_pfn) { unsigned long start_pfn, end_pfn; int i, nid; @@ -7712,12 +7712,6 @@ void __init set_dma_reserve(unsigned lon dma_reserve = new_dma_reserve; } -void __init free_area_init(unsigned long *max_zone_pfn) -{ - init_unavailable_mem(); - free_area_init_nodes(max_zone_pfn); -} - static int page_alloc_cpu_dead(unsigned int cpu) { _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 016/131] alpha: simplify detection of memory zone boundaries 2020-06-03 22:55 incoming Andrew Morton ` (14 preceding siblings ...) 2020-06-03 22:57 ` [patch 015/131] mm: use free_area_init() instead of free_area_init_nodes() Andrew Morton @ 2020-06-03 22:57 ` Andrew Morton 2020-06-03 22:57 ` [patch 017/131] arm: " Andrew Morton ` (115 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:57 UTC (permalink / raw) To: akpm, bcain, bhe, catalin.marinas, corbet, dalias, davem, deller, geert, gerg, green.hu, guoren, gxt, heiko.carstens, hoan, James.Bottomley, jcmvbkbc, ley.foon.tan, linux-mm, linux, mattst88, mhocko, mm-commits, monstr, mpe, msalter, nickhu, paul.walmsley, richard, rppt, shorne, tony.luck, torvalds, tsbogend, vgupta, ysato From: Mike Rapoport <rppt@linux.ibm.com> Subject: alpha: simplify detection of memory zone boundaries free_area_init() only requires the definition of the maximal PFN for each of the supported zones rather than the calculation of actual zone sizes and the sizes of the holes between the zones. After the removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP, free_area_init() is available to all architectures. Using this function instead of free_area_init_node() simplifies the zone detection. Link: http://lkml.kernel.org/r/20200412194859.12663-7-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Cc: Baoquan He <bhe@redhat.com> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. 
Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/alpha/mm/numa.c | 18 ++++-------------- 1 file changed, 4 insertions(+), 14 deletions(-) --- a/arch/alpha/mm/numa.c~alpha-simplify-detection-of-memory-zone-boundaries +++ a/arch/alpha/mm/numa.c @@ -202,8 +202,7 @@ setup_memory(void *kernel_end) void __init paging_init(void) { - unsigned int nid; - unsigned long zones_size[MAX_NR_ZONES] = {0, }; + unsigned long max_zone_pfn[MAX_NR_ZONES] = {0, }; unsigned long dma_local_pfn; /* @@ -215,19 +214,10 @@ void __init paging_init(void) */ dma_local_pfn = virt_to_phys((char *)MAX_DMA_ADDRESS) >> PAGE_SHIFT; - for_each_online_node(nid) { - unsigned long start_pfn = NODE_DATA(nid)->node_start_pfn; - unsigned long end_pfn = start_pfn + NODE_DATA(nid)->node_present_pages; + max_zone_pfn[ZONE_DMA] = dma_local_pfn; + max_zone_pfn[ZONE_NORMAL] = max_pfn; - if (dma_local_pfn >= end_pfn - start_pfn) - zones_size[ZONE_DMA] = end_pfn - start_pfn; - else { - zones_size[ZONE_DMA] = dma_local_pfn; - zones_size[ZONE_NORMAL] = (end_pfn - start_pfn) - dma_local_pfn; - } - node_set_state(nid, N_NORMAL_MEMORY); - free_area_init_node(nid, zones_size, start_pfn, NULL); - } + free_area_init(max_zone_pfn); /* Initialize the 
kernel's ZERO_PGE. */ memset((void *)ZERO_PGE, 0, PAGE_SIZE); _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 017/131] arm: simplify detection of memory zone boundaries 2020-06-03 22:55 incoming Andrew Morton ` (15 preceding siblings ...) 2020-06-03 22:57 ` [patch 016/131] alpha: simplify detection of memory zone boundaries Andrew Morton @ 2020-06-03 22:57 ` Andrew Morton 2020-06-03 22:57 ` [patch 018/131] arm64: simplify detection of memory zone boundaries for UMA configs Andrew Morton ` (114 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:57 UTC (permalink / raw) To: akpm, bcain, bhe, catalin.marinas, corbet, dalias, davem, deller, geert, gerg, green.hu, guoren, gxt, heiko.carstens, hoan, James.Bottomley, jcmvbkbc, ley.foon.tan, linux-mm, linux, mattst88, mhocko, mm-commits, monstr, mpe, msalter, nickhu, paul.walmsley, richard, rppt, shorne, tony.luck, torvalds, tsbogend, vgupta, ysato From: Mike Rapoport <rppt@linux.ibm.com> Subject: arm: simplify detection of memory zone boundaries free_area_init() only requires the definition of the maximal PFN for each of the supported zones rather than the calculation of actual zone sizes and the sizes of the holes between the zones. After the removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP, free_area_init() is available to all architectures. Using this function instead of free_area_init_node() simplifies the zone detection. Link: http://lkml.kernel.org/r/20200412194859.12663-8-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Cc: Baoquan He <bhe@redhat.com> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. 
Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/arm/mm/init.c | 66 ++++--------------------------------------- 1 file changed, 7 insertions(+), 59 deletions(-) --- a/arch/arm/mm/init.c~arm-simplify-detection-of-memory-zone-boundaries +++ a/arch/arm/mm/init.c @@ -92,18 +92,6 @@ EXPORT_SYMBOL(arm_dma_zone_size); */ phys_addr_t arm_dma_limit; unsigned long arm_dma_pfn_limit; - -static void __init arm_adjust_dma_zone(unsigned long *size, unsigned long *hole, - unsigned long dma_size) -{ - if (size[0] <= dma_size) - return; - - size[ZONE_NORMAL] = size[0] - dma_size; - size[ZONE_DMA] = dma_size; - hole[ZONE_NORMAL] = hole[0]; - hole[ZONE_DMA] = 0; -} #endif void __init setup_dma_zone(const struct machine_desc *mdesc) @@ -121,56 +109,16 @@ void __init setup_dma_zone(const struct static void __init zone_sizes_init(unsigned long min, unsigned long max_low, unsigned long max_high) { - unsigned long zone_size[MAX_NR_ZONES], zhole_size[MAX_NR_ZONES]; - struct memblock_region *reg; + unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 }; - /* - * initialise the zones. - */ - memset(zone_size, 0, sizeof(zone_size)); - - /* - * The memory size has already been determined. 
If we need - * to do anything fancy with the allocation of this memory - * to the zones, now is the time to do it. - */ - zone_size[0] = max_low - min; -#ifdef CONFIG_HIGHMEM - zone_size[ZONE_HIGHMEM] = max_high - max_low; +#ifdef CONFIG_ZONE_DMA + max_zone_pfn[ZONE_DMA] = min(arm_dma_pfn_limit, max_low); #endif - - /* - * Calculate the size of the holes. - * holes = node_size - sum(bank_sizes) - */ - memcpy(zhole_size, zone_size, sizeof(zhole_size)); - for_each_memblock(memory, reg) { - unsigned long start = memblock_region_memory_base_pfn(reg); - unsigned long end = memblock_region_memory_end_pfn(reg); - - if (start < max_low) { - unsigned long low_end = min(end, max_low); - zhole_size[0] -= low_end - start; - } + max_zone_pfn[ZONE_NORMAL] = max_low; #ifdef CONFIG_HIGHMEM - if (end > max_low) { - unsigned long high_start = max(start, max_low); - zhole_size[ZONE_HIGHMEM] -= end - high_start; - } + max_zone_pfn[ZONE_HIGHMEM] = max_high; #endif - } - -#ifdef CONFIG_ZONE_DMA - /* - * Adjust the sizes according to any special requirements for - * this machine type. - */ - if (arm_dma_zone_size) - arm_adjust_dma_zone(zone_size, zhole_size, - arm_dma_zone_size >> PAGE_SHIFT); -#endif - - free_area_init_node(0, zone_size, min, zhole_size); + free_area_init(max_zone_pfn); } #ifdef CONFIG_HAVE_ARCH_PFN_VALID @@ -306,7 +254,7 @@ void __init bootmem_init(void) sparse_init(); /* - * Now free the memory - free_area_init_node needs + * Now free the memory - free_area_init needs * the sparse mem_map arrays initialized by sparse_init() * for memmap_init_zone(), otherwise all PFNs are invalid. */ _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 018/131] arm64: simplify detection of memory zone boundaries for UMA configs 2020-06-03 22:55 incoming Andrew Morton ` (16 preceding siblings ...) 2020-06-03 22:57 ` [patch 017/131] arm: " Andrew Morton @ 2020-06-03 22:57 ` Andrew Morton 2020-06-03 22:57 ` [patch 019/131] csky: simplify detection of memory zone boundaries Andrew Morton ` (113 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:57 UTC (permalink / raw) To: akpm, bcain, bhe, catalin.marinas, corbet, dalias, davem, deller, geert, gerg, green.hu, guoren, gxt, heiko.carstens, hoan, James.Bottomley, jcmvbkbc, ley.foon.tan, linux-mm, linux, mattst88, mhocko, mm-commits, monstr, mpe, msalter, nickhu, paul.walmsley, richard, rppt, shorne, tony.luck, torvalds, tsbogend, vgupta, ysato From: Mike Rapoport <rppt@linux.ibm.com> Subject: arm64: simplify detection of memory zone boundaries for UMA configs The free_area_init() function only requires the definition of the maximal PFN for each of the supported zones rather than the calculation of actual zone sizes and the sizes of the holes between the zones. After the removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP, free_area_init() is available to all architectures. Using this function instead of free_area_init_node() simplifies the zone detection. Link: http://lkml.kernel.org/r/20200412194859.12663-9-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Baoquan He <bhe@redhat.com> Cc: Brian Cain <bcain@codeaurora.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. 
Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/arm64/mm/init.c | 54 ----------------------------------------- 1 file changed, 54 deletions(-) --- a/arch/arm64/mm/init.c~arm64-simplify-detection-of-memory-zone-boundaries-for-uma-configs +++ a/arch/arm64/mm/init.c @@ -192,8 +192,6 @@ static phys_addr_t __init max_zone_phys( return min(offset + (1ULL << zone_bits), memblock_end_of_DRAM()); } -#ifdef CONFIG_NUMA - static void __init zone_sizes_init(unsigned long min, unsigned long max) { unsigned long max_zone_pfns[MAX_NR_ZONES] = {0}; @@ -209,58 +207,6 @@ static void __init zone_sizes_init(unsig free_area_init(max_zone_pfns); } -#else - -static void __init zone_sizes_init(unsigned long min, unsigned long max) -{ - struct memblock_region *reg; - unsigned long zone_size[MAX_NR_ZONES], zhole_size[MAX_NR_ZONES]; - unsigned long __maybe_unused max_dma, max_dma32; - - memset(zone_size, 0, sizeof(zone_size)); - - max_dma = max_dma32 = min; -#ifdef CONFIG_ZONE_DMA - max_dma = max_dma32 = PFN_DOWN(arm64_dma_phys_limit); - zone_size[ZONE_DMA] = max_dma - min; -#endif -#ifdef CONFIG_ZONE_DMA32 - max_dma32 = PFN_DOWN(arm64_dma32_phys_limit); - zone_size[ZONE_DMA32] = max_dma32 - max_dma; -#endif - 
zone_size[ZONE_NORMAL] = max - max_dma32; - - memcpy(zhole_size, zone_size, sizeof(zhole_size)); - - for_each_memblock(memory, reg) { - unsigned long start = memblock_region_memory_base_pfn(reg); - unsigned long end = memblock_region_memory_end_pfn(reg); - -#ifdef CONFIG_ZONE_DMA - if (start >= min && start < max_dma) { - unsigned long dma_end = min(end, max_dma); - zhole_size[ZONE_DMA] -= dma_end - start; - start = dma_end; - } -#endif -#ifdef CONFIG_ZONE_DMA32 - if (start >= max_dma && start < max_dma32) { - unsigned long dma32_end = min(end, max_dma32); - zhole_size[ZONE_DMA32] -= dma32_end - start; - start = dma32_end; - } -#endif - if (start >= max_dma32 && start < max) { - unsigned long normal_end = min(end, max); - zhole_size[ZONE_NORMAL] -= normal_end - start; - } - } - - free_area_init_node(0, zone_size, min, zhole_size); -} - -#endif /* CONFIG_NUMA */ - int pfn_valid(unsigned long pfn) { phys_addr_t addr = pfn << PAGE_SHIFT; _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 019/131] csky: simplify detection of memory zone boundaries 2020-06-03 22:55 incoming Andrew Morton ` (17 preceding siblings ...) 2020-06-03 22:57 ` [patch 018/131] arm64: simplify detection of memory zone boundaries for UMA configs Andrew Morton @ 2020-06-03 22:57 ` Andrew Morton 2020-06-03 22:57 ` [patch 020/131] m68k: mm: " Andrew Morton ` (112 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:57 UTC (permalink / raw) To: akpm, bcain, bhe, catalin.marinas, corbet, dalias, davem, deller, geert, gerg, green.hu, guoren, gxt, heiko.carstens, hoan, James.Bottomley, jcmvbkbc, ley.foon.tan, linux-mm, linux, mattst88, mhocko, mm-commits, monstr, mpe, msalter, nickhu, paul.walmsley, richard, rppt, shorne, tony.luck, torvalds, tsbogend, vgupta, ysato From: Mike Rapoport <rppt@linux.ibm.com> Subject: csky: simplify detection of memory zone boundaries The free_area_init() function only requires the definition of the maximal PFN for each of the supported zones rather than the calculation of actual zone sizes and the sizes of the holes between the zones. After the removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP, free_area_init() is available to all architectures. Using this function instead of free_area_init_node() simplifies the zone detection. Link: http://lkml.kernel.org/r/20200412194859.12663-10-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Cc: Baoquan He <bhe@redhat.com> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. 
Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/csky/kernel/setup.c | 26 +++++++++++--------------- 1 file changed, 11 insertions(+), 15 deletions(-) --- a/arch/csky/kernel/setup.c~csky-simplify-detection-of-memory-zone-boundaries +++ a/arch/csky/kernel/setup.c @@ -26,7 +26,9 @@ struct screen_info screen_info = { static void __init csky_memblock_init(void) { - unsigned long zone_size[MAX_NR_ZONES]; + unsigned long lowmem_size = PFN_DOWN(LOWMEM_LIMIT - PHYS_OFFSET_OFFSET); + unsigned long sseg_size = PFN_DOWN(SSEG_SIZE - PHYS_OFFSET_OFFSET); + unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 }; signed long size; memblock_reserve(__pa(_stext), _end - _stext); @@ -36,28 +38,22 @@ static void __init csky_memblock_init(vo memblock_dump_all(); - memset(zone_size, 0, sizeof(zone_size)); - min_low_pfn = PFN_UP(memblock_start_of_DRAM()); max_low_pfn = max_pfn = PFN_DOWN(memblock_end_of_DRAM()); size = max_pfn - min_low_pfn; - if (size <= PFN_DOWN(SSEG_SIZE - PHYS_OFFSET_OFFSET)) - zone_size[ZONE_NORMAL] = size; - else if (size < PFN_DOWN(LOWMEM_LIMIT - PHYS_OFFSET_OFFSET)) { - zone_size[ZONE_NORMAL] = - PFN_DOWN(SSEG_SIZE - PHYS_OFFSET_OFFSET); - max_low_pfn = min_low_pfn + zone_size[ZONE_NORMAL]; - } else { 
- zone_size[ZONE_NORMAL] = - PFN_DOWN(LOWMEM_LIMIT - PHYS_OFFSET_OFFSET); - max_low_pfn = min_low_pfn + zone_size[ZONE_NORMAL]; + if (size >= lowmem_size) { + max_low_pfn = min_low_pfn + lowmem_size; write_mmu_msa1(read_mmu_msa0() + SSEG_SIZE); + } else if (size > sseg_size) { + max_low_pfn = min_low_pfn + sseg_size; } + max_zone_pfn[ZONE_NORMAL] = max_low_pfn; + #ifdef CONFIG_HIGHMEM - zone_size[ZONE_HIGHMEM] = max_pfn - max_low_pfn; + max_zone_pfn[ZONE_HIGHMEM] = max_pfn; highstart_pfn = max_low_pfn; highend_pfn = max_pfn; @@ -66,7 +62,7 @@ static void __init csky_memblock_init(vo dma_contiguous_reserve(0); - free_area_init_node(0, zone_size, min_low_pfn, NULL); + free_area_init(max_zone_pfn); } void __init setup_arch(char **cmdline_p) _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 020/131] m68k: mm: simplify detection of memory zone boundaries 2020-06-03 22:55 incoming Andrew Morton ` (18 preceding siblings ...) 2020-06-03 22:57 ` [patch 019/131] csky: simplify detection of memory zone boundaries Andrew Morton @ 2020-06-03 22:57 ` Andrew Morton 2020-06-03 22:57 ` [patch 021/131] parisc: " Andrew Morton ` (111 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:57 UTC (permalink / raw) To: akpm, bcain, bhe, catalin.marinas, corbet, dalias, davem, deller, geert, gerg, green.hu, guoren, gxt, heiko.carstens, hoan, James.Bottomley, jcmvbkbc, ley.foon.tan, linux-mm, linux, mattst88, mhocko, mm-commits, monstr, mpe, msalter, nickhu, paul.walmsley, richard, rppt, shorne, tony.luck, torvalds, tsbogend, vgupta, ysato From: Mike Rapoport <rppt@linux.ibm.com> Subject: m68k: mm: simplify detection of memory zone boundaries free_area_init() only requires the definition of the maximal PFN for each of the supported zones rather than the calculation of actual zone sizes and the sizes of the holes between the zones. After the removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP, free_area_init() is available to all architectures. Using this function instead of free_area_init_node() simplifies the zone detection. Link: http://lkml.kernel.org/r/20200412194859.12663-11-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Cc: Baoquan He <bhe@redhat.com> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. 
Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/m68k/mm/motorola.c | 11 +++++------ arch/m68k/mm/sun3mmu.c | 10 +++------- 2 files changed, 8 insertions(+), 13 deletions(-) --- a/arch/m68k/mm/motorola.c~m68k-mm-simplify-detection-of-memory-zone-boundaries +++ a/arch/m68k/mm/motorola.c @@ -365,7 +365,7 @@ static void __init map_node(int node) */ void __init paging_init(void) { - unsigned long zones_size[MAX_NR_ZONES] = { 0, }; + unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, }; unsigned long min_addr, max_addr; unsigned long addr; int i; @@ -448,11 +448,10 @@ void __init paging_init(void) #ifdef DEBUG printk ("before free_area_init\n"); #endif - for (i = 0; i < m68k_num_memory; i++) { - zones_size[ZONE_DMA] = m68k_memory[i].size >> PAGE_SHIFT; - free_area_init_node(i, zones_size, - m68k_memory[i].addr >> PAGE_SHIFT, NULL); + for (i = 0; i < m68k_num_memory; i++) if (node_present_pages(i)) node_set_state(i, N_NORMAL_MEMORY); - } + + max_zone_pfn[ZONE_DMA] = memblock_end_of_DRAM(); + free_area_init(max_zone_pfn); } --- a/arch/m68k/mm/sun3mmu.c~m68k-mm-simplify-detection-of-memory-zone-boundaries +++ a/arch/m68k/mm/sun3mmu.c @@ -42,7 +42,7 @@ void __init paging_init(void) unsigned long address; 
unsigned long next_pgtable; unsigned long bootmem_end; - unsigned long zones_size[MAX_NR_ZONES] = { 0, }; + unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, }; unsigned long size; empty_zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE); @@ -89,14 +89,10 @@ void __init paging_init(void) current->mm = NULL; /* memory sizing is a hack stolen from motorola.c.. hope it works for us */ - zones_size[ZONE_DMA] = ((unsigned long)high_memory - PAGE_OFFSET) >> PAGE_SHIFT; + max_zone_pfn[ZONE_DMA] = ((unsigned long)high_memory) >> PAGE_SHIFT; /* I really wish I knew why the following change made things better... -- Sam */ -/* free_area_init(zones_size); */ - free_area_init_node(0, zones_size, - (__pa(PAGE_OFFSET) >> PAGE_SHIFT) + 1, NULL); + free_area_init(max_zone_pfn); } - - _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 021/131] parisc: simplify detection of memory zone boundaries 2020-06-03 22:55 incoming Andrew Morton ` (19 preceding siblings ...) 2020-06-03 22:57 ` [patch 020/131] m68k: mm: " Andrew Morton @ 2020-06-03 22:57 ` Andrew Morton 2020-06-03 22:57 ` [patch 022/131] sparc32: " Andrew Morton ` (110 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:57 UTC (permalink / raw) To: akpm, bcain, bhe, catalin.marinas, corbet, dalias, davem, deller, geert, gerg, green.hu, guoren, gxt, heiko.carstens, hoan, James.Bottomley, jcmvbkbc, ley.foon.tan, linux-mm, linux, mattst88, mhocko, mm-commits, monstr, mpe, msalter, nickhu, paul.walmsley, richard, rppt, shorne, tony.luck, torvalds, tsbogend, vgupta, ysato From: Mike Rapoport <rppt@linux.ibm.com> Subject: parisc: simplify detection of memory zone boundaries free_area_init() only requires the definition of the maximal PFN for each of the supported zones rather than calculation of actual zone sizes and the sizes of the holes between the zones. After the removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP, free_area_init() is available to all architectures. Using this function instead of free_area_init_node() simplifies the zone detection. Link: http://lkml.kernel.org/r/20200412194859.12663-12-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Cc: Baoquan He <bhe@redhat.com> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. 
Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/parisc/mm/init.c | 22 +++------------------- 1 file changed, 3 insertions(+), 19 deletions(-) --- a/arch/parisc/mm/init.c~parisc-simplify-detection-of-memory-zone-boundaries +++ a/arch/parisc/mm/init.c @@ -675,27 +675,11 @@ static void __init gateway_init(void) static void __init parisc_bootmem_free(void) { - unsigned long zones_size[MAX_NR_ZONES] = { 0, }; - unsigned long holes_size[MAX_NR_ZONES] = { 0, }; - unsigned long mem_start_pfn = ~0UL, mem_end_pfn = 0, mem_size_pfn = 0; - int i; + unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, }; - for (i = 0; i < npmem_ranges; i++) { - unsigned long start = pmem_ranges[i].start_pfn; - unsigned long size = pmem_ranges[i].pages; - unsigned long end = start + size; + max_zone_pfn[0] = memblock_end_of_DRAM(); - if (mem_start_pfn > start) - mem_start_pfn = start; - if (mem_end_pfn < end) - mem_end_pfn = end; - mem_size_pfn += size; - } - - zones_size[0] = mem_end_pfn - mem_start_pfn; - holes_size[0] = zones_size[0] - mem_size_pfn; - - free_area_init_node(0, zones_size, mem_start_pfn, holes_size); + free_area_init(max_zone_pfn); } void __init paging_init(void) _ ^ permalink raw reply [flat|nested] 349+ messages 
in thread
* [patch 022/131] sparc32: simplify detection of memory zone boundaries 2020-06-03 22:55 incoming Andrew Morton ` (20 preceding siblings ...) 2020-06-03 22:57 ` [patch 021/131] parisc: " Andrew Morton @ 2020-06-03 22:57 ` Andrew Morton 2020-06-03 22:57 ` [patch 023/131] unicore32: " Andrew Morton ` (109 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:57 UTC (permalink / raw) To: akpm, bcain, bhe, catalin.marinas, corbet, dalias, davem, deller, geert, gerg, green.hu, guoren, gxt, heiko.carstens, hoan, James.Bottomley, jcmvbkbc, ley.foon.tan, linux-mm, linux, mattst88, mhocko, mm-commits, monstr, mpe, msalter, nickhu, paul.walmsley, richard, rppt, shorne, tony.luck, torvalds, tsbogend, vgupta, ysato From: Mike Rapoport <rppt@linux.ibm.com> Subject: sparc32: simplify detection of memory zone boundaries free_area_init() only requires the definition of the maximal PFN for each of the supported zones rather than calculation of actual zone sizes and the sizes of the holes between the zones. After the removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP, free_area_init() is available to all architectures. Using this function instead of free_area_init_node() simplifies the zone detection. Link: http://lkml.kernel.org/r/20200412194859.12663-13-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Cc: Baoquan He <bhe@redhat.com> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. 
Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/sparc/mm/srmmu.c | 21 +++++---------------- 1 file changed, 5 insertions(+), 16 deletions(-) --- a/arch/sparc/mm/srmmu.c~sparc32-simplify-detection-of-memory-zone-boundaries +++ a/arch/sparc/mm/srmmu.c @@ -1008,24 +1008,13 @@ void __init srmmu_paging_init(void) kmap_init(); { - unsigned long zones_size[MAX_NR_ZONES]; - unsigned long zholes_size[MAX_NR_ZONES]; - unsigned long npages; - int znum; + unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 }; - for (znum = 0; znum < MAX_NR_ZONES; znum++) - zones_size[znum] = zholes_size[znum] = 0; + max_zone_pfn[ZONE_DMA] = max_low_pfn; + max_zone_pfn[ZONE_NORMAL] = max_low_pfn; + max_zone_pfn[ZONE_HIGHMEM] = highend_pfn; - npages = max_low_pfn - pfn_base; - - zones_size[ZONE_DMA] = npages; - zholes_size[ZONE_DMA] = npages - pages_avail; - - npages = highend_pfn - max_low_pfn; - zones_size[ZONE_HIGHMEM] = npages; - zholes_size[ZONE_HIGHMEM] = npages - calc_highpages(); - - free_area_init_node(0, zones_size, pfn_base, zholes_size); + free_area_init(max_zone_pfn); } } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 023/131] unicore32: simplify detection of memory zone boundaries 2020-06-03 22:55 incoming Andrew Morton ` (21 preceding siblings ...) 2020-06-03 22:57 ` [patch 022/131] sparc32: " Andrew Morton @ 2020-06-03 22:57 ` Andrew Morton 2020-06-03 22:57 ` [patch 024/131] xtensa: " Andrew Morton ` (108 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:57 UTC (permalink / raw) To: akpm, bcain, bhe, catalin.marinas, corbet, dalias, davem, deller, geert, gerg, green.hu, guoren, gxt, heiko.carstens, hoan, James.Bottomley, jcmvbkbc, ley.foon.tan, linux-mm, linux, mattst88, mhocko, mm-commits, monstr, mpe, msalter, nickhu, paul.walmsley, richard, rppt, shorne, tony.luck, torvalds, tsbogend, vgupta, ysato From: Mike Rapoport <rppt@linux.ibm.com> Subject: unicore32: simplify detection of memory zone boundaries free_area_init() only requires the definition of the maximal PFN for each of the supported zones rather than calculation of actual zone sizes and the sizes of the holes between the zones. After the removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP, free_area_init() is available to all architectures. Using this function instead of free_area_init_node() simplifies the zone detection. Link: http://lkml.kernel.org/r/20200412194859.12663-14-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Cc: Baoquan He <bhe@redhat.com> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. 
Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/unicore32/include/asm/memory.h | 2 - arch/unicore32/include/mach/memory.h | 6 +-- arch/unicore32/kernel/pci.c | 14 +------- arch/unicore32/mm/init.c | 43 +++++-------------------- 4 files changed, 15 insertions(+), 50 deletions(-) --- a/arch/unicore32/include/asm/memory.h~unicore32-simplify-detection-of-memory-zone-boundaries +++ a/arch/unicore32/include/asm/memory.h @@ -60,7 +60,7 @@ #ifndef __ASSEMBLY__ #ifndef arch_adjust_zones -#define arch_adjust_zones(size, holes) do { } while (0) +#define arch_adjust_zones(max_zone_pfn) do { } while (0) #endif /* --- a/arch/unicore32/include/mach/memory.h~unicore32-simplify-detection-of-memory-zone-boundaries +++ a/arch/unicore32/include/mach/memory.h @@ -25,10 +25,10 @@ #if !defined(__ASSEMBLY__) && defined(CONFIG_PCI) -void puv3_pci_adjust_zones(unsigned long *size, unsigned long *holes); +void puv3_pci_adjust_zones(unsigned long *max_zone_pfn); -#define arch_adjust_zones(size, holes) \ - puv3_pci_adjust_zones(size, holes) +#define arch_adjust_zones(max_zone_pfn) \ + puv3_pci_adjust_zones(max_zone_pfn) #endif --- a/arch/unicore32/kernel/pci.c~unicore32-simplify-detection-of-memory-zone-boundaries +++ 
a/arch/unicore32/kernel/pci.c @@ -133,21 +133,11 @@ static int pci_puv3_map_irq(const struct * This is really ugly and we need a better way of specifying * DMA-capable regions of memory. */ -void __init puv3_pci_adjust_zones(unsigned long *zone_size, - unsigned long *zhole_size) +void __init puv3_pci_adjust_zones(unsigned long *max_zone_pfn) { unsigned int sz = SZ_128M >> PAGE_SHIFT; - /* - * Only adjust if > 128M on current system - */ - if (zone_size[0] <= sz) - return; - - zone_size[1] = zone_size[0] - sz; - zone_size[0] = sz; - zhole_size[1] = zhole_size[0]; - zhole_size[0] = 0; + max_zone_pfn[ZONE_DMA] = sz; } /* --- a/arch/unicore32/mm/init.c~unicore32-simplify-detection-of-memory-zone-boundaries +++ a/arch/unicore32/mm/init.c @@ -61,46 +61,21 @@ static void __init find_limits(unsigned } } -static void __init uc32_bootmem_free(unsigned long min, unsigned long max_low, - unsigned long max_high) +static void __init uc32_bootmem_free(unsigned long max_low) { - unsigned long zone_size[MAX_NR_ZONES], zhole_size[MAX_NR_ZONES]; - struct memblock_region *reg; + unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 }; - /* - * initialise the zones. - */ - memset(zone_size, 0, sizeof(zone_size)); - - /* - * The memory size has already been determined. If we need - * to do anything fancy with the allocation of this memory - * to the zones, now is the time to do it. - */ - zone_size[0] = max_low - min; - - /* - * Calculate the size of the holes. - * holes = node_size - sum(bank_sizes) - */ - memcpy(zhole_size, zone_size, sizeof(zhole_size)); - for_each_memblock(memory, reg) { - unsigned long start = memblock_region_memory_base_pfn(reg); - unsigned long end = memblock_region_memory_end_pfn(reg); - - if (start < max_low) { - unsigned long low_end = min(end, max_low); - zhole_size[0] -= low_end - start; - } - } + max_zone_pfn[ZONE_DMA] = max_low; + max_zone_pfn[ZONE_NORMAL] = max_low; /* * Adjust the sizes according to any special requirements for * this machine type. 
+ * This might lower ZONE_DMA limit. */ - arch_adjust_zones(zone_size, zhole_size); + arch_adjust_zones(max_zone_pfn); - free_area_init_node(0, zone_size, min, zhole_size); + free_area_init(max_zone_pfn); } int pfn_valid(unsigned long pfn) @@ -176,11 +151,11 @@ void __init bootmem_init(void) sparse_init(); /* - * Now free the memory - free_area_init_node needs + * Now free the memory - free_area_init needs * the sparse mem_map arrays initialized by sparse_init() * for memmap_init_zone(), otherwise all PFNs are invalid. */ - uc32_bootmem_free(min, max_low, max_high); + uc32_bootmem_free(max_low); high_memory = __va((max_low << PAGE_SHIFT) - 1) + 1; _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 024/131] xtensa: simplify detection of memory zone boundaries 2020-06-03 22:55 incoming Andrew Morton ` (22 preceding siblings ...) 2020-06-03 22:57 ` [patch 023/131] unicore32: " Andrew Morton @ 2020-06-03 22:57 ` Andrew Morton 2020-06-03 22:57 ` [patch 025/131] mm: memmap_init: iterate over memblock regions rather that check each PFN Andrew Morton ` (107 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:57 UTC (permalink / raw) To: akpm, bcain, bhe, catalin.marinas, corbet, dalias, davem, deller, geert, gerg, green.hu, guoren, gxt, heiko.carstens, hoan, James.Bottomley, jcmvbkbc, ley.foon.tan, linux-mm, linux, mattst88, mhocko, mm-commits, monstr, mpe, msalter, nickhu, paul.walmsley, richard, rppt, shorne, tony.luck, torvalds, tsbogend, vgupta, ysato From: Mike Rapoport <rppt@linux.ibm.com> Subject: xtensa: simplify detection of memory zone boundaries free_area_init() only requires the definition of the maximal PFN for each of the supported zones rather than calculation of actual zone sizes and the sizes of the holes between the zones. After the removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP, free_area_init() is available to all architectures. Using this function instead of free_area_init_node() simplifies the zone detection. Link: http://lkml.kernel.org/r/20200412194859.12663-15-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Cc: Baoquan He <bhe@redhat.com> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. 
Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/xtensa/mm/init.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) --- a/arch/xtensa/mm/init.c~xtensa-simplify-detection-of-memory-zone-boundaries +++ a/arch/xtensa/mm/init.c @@ -70,13 +70,13 @@ void __init bootmem_init(void) void __init zones_init(void) { /* All pages are DMA-able, so we put them all in the DMA zone. */ - unsigned long zones_size[MAX_NR_ZONES] = { - [ZONE_NORMAL] = max_low_pfn - ARCH_PFN_OFFSET, + unsigned long max_zone_pfn[MAX_NR_ZONES] = { + [ZONE_NORMAL] = max_low_pfn, #ifdef CONFIG_HIGHMEM - [ZONE_HIGHMEM] = max_pfn - max_low_pfn, + [ZONE_HIGHMEM] = max_pfn, #endif }; - free_area_init_node(0, zones_size, ARCH_PFN_OFFSET, NULL); + free_area_init(max_zone_pfn); } #ifdef CONFIG_HIGHMEM _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 025/131] mm: memmap_init: iterate over memblock regions rather that check each PFN 2020-06-03 22:55 incoming Andrew Morton ` (23 preceding siblings ...) 2020-06-03 22:57 ` [patch 024/131] xtensa: " Andrew Morton @ 2020-06-03 22:57 ` Andrew Morton 2020-06-03 22:57 ` [patch 026/131] mm: remove early_pfn_in_nid() and CONFIG_NODES_SPAN_OTHER_NODES Andrew Morton ` (106 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:57 UTC (permalink / raw) To: akpm, bcain, bhe, cai, catalin.marinas, corbet, dalias, davem, deller, geert, gerg, green.hu, guoren, gxt, heiko.carstens, hoan, James.Bottomley, jcmvbkbc, ley.foon.tan, linux-mm, linux, mattst88, mhocko, mm-commits, monstr, mpe, msalter, nickhu, paul.walmsley, richard, rppt, shorne, tony.luck, torvalds, tsbogend, vgupta, ysato From: Baoquan He <bhe@redhat.com> Subject: mm: memmap_init: iterate over memblock regions rather that check each PFN When called during boot the memmap_init_zone() function checks if each PFN is valid and actually belongs to the node being initialized using early_pfn_valid() and early_pfn_in_nid(). Each such check may cost up to O(log(n)) where n is the number of memory banks, so for a large amount of memory the overall time spent in early_pfn*() becomes substantial. Since the information is anyway present in memblock, we can iterate over memblock memory regions in memmap_init() and only call memmap_init_zone() for PFN ranges that are known to be valid and in the appropriate node. 
[cai@lca.pw: fix a compilation warning from Clang] Link: http://lkml.kernel.org/r/CF6E407F-17DC-427C-8203-21979FB882EF@lca.pw [bhe@redhat.com: fix the incorrect hole in fast_isolate_freepages()] Link: http://lkml.kernel.org/r/8C537EB7-85EE-4DCF-943E-3CC0ED0DF56D@lca.pw Link: http://lkml.kernel.org/r/20200521014407.29690-1-bhe@redhat.com Link: http://lkml.kernel.org/r/20200412194859.12663-16-rppt@kernel.org Signed-off-by: Baoquan He <bhe@redhat.com> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: Qian Cai <cai@lca.pw> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/compaction.c | 4 +++- mm/page_alloc.c | 43 ++++++++++++++++--------------------------- 2 files changed, 19 insertions(+), 28 deletions(-) --- 
a/mm/compaction.c~mm-memmap_init-iterate-over-memblock-regions-rather-that-check-each-pfn +++ a/mm/compaction.c @@ -1409,7 +1409,9 @@ fast_isolate_freepages(struct compact_co cc->free_pfn = highest; } else { if (cc->direct_compaction && pfn_valid(min_pfn)) { - page = pfn_to_page(min_pfn); + page = pageblock_pfn_to_page(min_pfn, + pageblock_end_pfn(min_pfn), + cc->zone); cc->free_pfn = min_pfn; } } --- a/mm/page_alloc.c~mm-memmap_init-iterate-over-memblock-regions-rather-that-check-each-pfn +++ a/mm/page_alloc.c @@ -5951,23 +5951,6 @@ overlap_memmap_init(unsigned long zone, return false; } -#ifdef CONFIG_SPARSEMEM -/* Skip PFNs that belong to non-present sections */ -static inline __meminit unsigned long next_pfn(unsigned long pfn) -{ - const unsigned long section_nr = pfn_to_section_nr(++pfn); - - if (present_section_nr(section_nr)) - return pfn; - return section_nr_to_pfn(next_present_section_nr(section_nr)); -} -#else -static inline __meminit unsigned long next_pfn(unsigned long pfn) -{ - return pfn++; -} -#endif - /* * Initially all pages are reserved - free ones are freed * up by memblock_free_all() once the early boot process is @@ -6007,14 +5990,6 @@ void __meminit memmap_init_zone(unsigned * function. They do not exist on hotplugged memory. 
*/ if (context == MEMMAP_EARLY) { - if (!early_pfn_valid(pfn)) { - pfn = next_pfn(pfn); - continue; - } - if (!early_pfn_in_nid(pfn, nid)) { - pfn++; - continue; - } if (overlap_memmap_init(zone, &pfn)) continue; if (defer_init(nid, pfn, end_pfn)) @@ -6130,9 +6105,23 @@ static void __meminit zone_init_free_lis } void __meminit __weak memmap_init(unsigned long size, int nid, - unsigned long zone, unsigned long start_pfn) + unsigned long zone, + unsigned long range_start_pfn) { - memmap_init_zone(size, nid, zone, start_pfn, MEMMAP_EARLY, NULL); + unsigned long start_pfn, end_pfn; + unsigned long range_end_pfn = range_start_pfn + size; + int i; + + for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) { + start_pfn = clamp(start_pfn, range_start_pfn, range_end_pfn); + end_pfn = clamp(end_pfn, range_start_pfn, range_end_pfn); + + if (end_pfn > start_pfn) { + size = end_pfn - start_pfn; + memmap_init_zone(size, nid, zone, start_pfn, + MEMMAP_EARLY, NULL); + } + } } static int zone_batchsize(struct zone *zone) _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 026/131] mm: remove early_pfn_in_nid() and CONFIG_NODES_SPAN_OTHER_NODES 2020-06-03 22:55 incoming Andrew Morton ` (24 preceding siblings ...) 2020-06-03 22:57 ` [patch 025/131] mm: memmap_init: iterate over memblock regions rather that check each PFN Andrew Morton @ 2020-06-03 22:57 ` Andrew Morton 2020-06-03 22:58 ` [patch 027/131] mm: free_area_init: allow defining max_zone_pfn in descending order Andrew Morton ` (105 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:57 UTC (permalink / raw) To: akpm, bcain, bhe, catalin.marinas, corbet, dalias, davem, deller, geert, gerg, green.hu, guoren, gxt, heiko.carstens, Hoan, James.Bottomley, jcmvbkbc, ley.foon.tan, linux-mm, linux, mattst88, mhocko, mm-commits, monstr, mpe, msalter, nickhu, paul.walmsley, richard, rppt, shorne, tony.luck, torvalds, tsbogend, vgupta, ysato From: Mike Rapoport <rppt@linux.ibm.com> Subject: mm: remove early_pfn_in_nid() and CONFIG_NODES_SPAN_OTHER_NODES The memmap_init() function was made to iterate over memblock regions and as a result the early_pfn_in_nid() function became obsolete. Since CONFIG_NODES_SPAN_OTHER_NODES is only used to pick a stub or a real implementation of early_pfn_in_nid(), it is also not needed anymore. Remove both early_pfn_in_nid() and CONFIG_NODES_SPAN_OTHER_NODES. Link: http://lkml.kernel.org/r/20200412194859.12663-17-rppt@kernel.org Signed-off-by: Hoan Tran <Hoan@os.amperecomputing.com> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Co-developed-by: Hoan Tran <Hoan@os.amperecomputing.com> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Cc: Baoquan He <bhe@redhat.com> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "David S. 
Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/powerpc/Kconfig | 9 --------- arch/sparc/Kconfig | 9 --------- arch/x86/Kconfig | 9 --------- mm/page_alloc.c | 20 -------------------- 4 files changed, 47 deletions(-) --- a/arch/powerpc/Kconfig~mm-remove-early_pfn_in_nid-and-config_nodes_span_other_nodes +++ a/arch/powerpc/Kconfig @@ -686,15 +686,6 @@ config ARCH_MEMORY_PROBE def_bool y depends on MEMORY_HOTPLUG -# Some NUMA nodes have memory ranges that span -# other nodes. Even though a pfn is valid and -# between a node's start and end pfns, it may not -# reside on that node. See memmap_init_zone() -# for details. 
-config NODES_SPAN_OTHER_NODES - def_bool y - depends on NEED_MULTIPLE_NODES - config STDBINUTILS bool "Using standard binutils settings" depends on 44x --- a/arch/sparc/Kconfig~mm-remove-early_pfn_in_nid-and-config_nodes_span_other_nodes +++ a/arch/sparc/Kconfig @@ -286,15 +286,6 @@ config NODES_SHIFT Specify the maximum number of NUMA Nodes available on the target system. Increases memory reserved to accommodate various tables. -# Some NUMA nodes have memory ranges that span -# other nodes. Even though a pfn is valid and -# between a node's start and end pfns, it may not -# reside on that node. See memmap_init_zone() -# for details. -config NODES_SPAN_OTHER_NODES - def_bool y - depends on NEED_MULTIPLE_NODES - config ARCH_SPARSEMEM_ENABLE def_bool y if SPARC64 select SPARSEMEM_VMEMMAP_ENABLE --- a/arch/x86/Kconfig~mm-remove-early_pfn_in_nid-and-config_nodes_span_other_nodes +++ a/arch/x86/Kconfig @@ -1583,15 +1583,6 @@ config X86_64_ACPI_NUMA ---help--- Enable ACPI SRAT based node topology detection. -# Some NUMA nodes have memory ranges that span -# other nodes. Even though a pfn is valid and -# between a node's start and end pfns, it may not -# reside on that node. See memmap_init_zone() -# for details. 
-config NODES_SPAN_OTHER_NODES - def_bool y - depends on X86_64_ACPI_NUMA - config NUMA_EMU bool "NUMA emulation" depends on NUMA --- a/mm/page_alloc.c~mm-remove-early_pfn_in_nid-and-config_nodes_span_other_nodes +++ a/mm/page_alloc.c @@ -1541,26 +1541,6 @@ int __meminit early_pfn_to_nid(unsigned } #endif /* CONFIG_NEED_MULTIPLE_NODES */ -#ifdef CONFIG_NODES_SPAN_OTHER_NODES -/* Only safe to use early in boot when initialisation is single-threaded */ -static inline bool __meminit early_pfn_in_nid(unsigned long pfn, int node) -{ - int nid; - - nid = __early_pfn_to_nid(pfn, &early_pfnnid_cache); - if (nid >= 0 && nid != node) - return false; - return true; -} - -#else -static inline bool __meminit early_pfn_in_nid(unsigned long pfn, int node) -{ - return true; -} -#endif - - void __init memblock_free_pages(struct page *page, unsigned long pfn, unsigned int order) { _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 027/131] mm: free_area_init: allow defining max_zone_pfn in descending order 2020-06-03 22:55 incoming Andrew Morton ` (25 preceding siblings ...) 2020-06-03 22:57 ` [patch 026/131] mm: remove early_pfn_in_nid() and CONFIG_NODES_SPAN_OTHER_NODES Andrew Morton @ 2020-06-03 22:58 ` Andrew Morton 2020-06-03 22:58 ` [patch 028/131] mm: rename free_area_init_node() to free_area_init_memoryless_node() Andrew Morton ` (104 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:58 UTC (permalink / raw) To: akpm, bcain, bhe, catalin.marinas, corbet, dalias, davem, deller, geert, gerg, green.hu, guoren, gxt, heiko.carstens, hoan, James.Bottomley, jcmvbkbc, ley.foon.tan, linux-mm, linux, linux, mattst88, mhocko, mm-commits, monstr, mpe, msalter, nickhu, paul.walmsley, richard, rppt, shorne, tony.luck, torvalds, tsbogend, vgupta, ysato From: Mike Rapoport <rppt@linux.ibm.com> Subject: mm: free_area_init: allow defining max_zone_pfn in descending order Some architectures (e.g. ARC) have the ZONE_HIGHMEM zone below ZONE_NORMAL. Allowing free_area_init() to parse the max_zone_pfn array even when it is sorted in descending order allows using free_area_init() on such architectures. Add top -> down traversal of max_zone_pfn array in free_area_init() and use the latter in ARC node/zone initialization. [rppt@kernel.org: ARC fix] Link: http://lkml.kernel.org/r/20200504153901.GM14260@kernel.org [rppt@linux.ibm.com: arc: free_area_init(): take into account PAE40 mode] Link: http://lkml.kernel.org/r/20200507205900.GH683243@linux.ibm.com [akpm@linux-foundation.org: declare arch_has_descending_max_zone_pfns()] Link: http://lkml.kernel.org/r/20200412194859.12663-18-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Reviewed-by: Baoquan He <bhe@redhat.com> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "David S. 
Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/arc/mm/init.c | 41 ++++++++++++----------------------------- include/linux/mm.h | 1 + mm/page_alloc.c | 26 +++++++++++++++++++++----- 3 files changed, 34 insertions(+), 34 deletions(-) --- a/arch/arc/mm/init.c~mm-free_area_init-allow-defining-max_zone_pfn-in-descending-order +++ a/arch/arc/mm/init.c @@ -63,11 +63,13 @@ void __init early_init_dt_add_memory_arc low_mem_sz = size; in_use = 1; + memblock_add_node(base, size, 0); } else { #ifdef CONFIG_HIGHMEM high_mem_start = base; high_mem_sz = size; in_use = 1; + memblock_add_node(base, size, 1); #endif } @@ -75,6 +77,11 @@ void __init early_init_dt_add_memory_arc base, TO_MB(size), !in_use ? 
"Not used":""); } +bool arch_has_descending_max_zone_pfns(void) +{ + return !IS_ENABLED(CONFIG_ARC_HAS_PAE40); +} + /* * First memory setup routine called from setup_arch() * 1. setup swapper's mm @init_mm @@ -83,8 +90,7 @@ void __init early_init_dt_add_memory_arc */ void __init setup_arch_memory(void) { - unsigned long zones_size[MAX_NR_ZONES]; - unsigned long zones_holes[MAX_NR_ZONES]; + unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 }; init_mm.start_code = (unsigned long)_text; init_mm.end_code = (unsigned long)_etext; @@ -115,7 +121,6 @@ void __init setup_arch_memory(void) * the crash */ - memblock_add_node(low_mem_start, low_mem_sz, 0); memblock_reserve(CONFIG_LINUX_LINK_BASE, __pa(_end) - CONFIG_LINUX_LINK_BASE); @@ -133,22 +138,7 @@ void __init setup_arch_memory(void) memblock_dump_all(); /*----------------- node/zones setup --------------------------*/ - memset(zones_size, 0, sizeof(zones_size)); - memset(zones_holes, 0, sizeof(zones_holes)); - - zones_size[ZONE_NORMAL] = max_low_pfn - min_low_pfn; - zones_holes[ZONE_NORMAL] = 0; - - /* - * We can't use the helper free_area_init(zones[]) because it uses - * PAGE_OFFSET to compute the @min_low_pfn which would be wrong - * when our kernel doesn't start at PAGE_OFFSET, i.e. 
- * PAGE_OFFSET != CONFIG_LINUX_RAM_BASE - */ - free_area_init_node(0, /* node-id */ - zones_size, /* num pages per zone */ - min_low_pfn, /* first pfn of node */ - zones_holes); /* holes */ + max_zone_pfn[ZONE_NORMAL] = max_low_pfn; #ifdef CONFIG_HIGHMEM /* @@ -168,20 +158,13 @@ void __init setup_arch_memory(void) min_high_pfn = PFN_DOWN(high_mem_start); max_high_pfn = PFN_DOWN(high_mem_start + high_mem_sz); - zones_size[ZONE_NORMAL] = 0; - zones_holes[ZONE_NORMAL] = 0; - - zones_size[ZONE_HIGHMEM] = max_high_pfn - min_high_pfn; - zones_holes[ZONE_HIGHMEM] = 0; - - free_area_init_node(1, /* node-id */ - zones_size, /* num pages per zone */ - min_high_pfn, /* first pfn of node */ - zones_holes); /* holes */ + max_zone_pfn[ZONE_HIGHMEM] = max_high_pfn; high_memory = (void *)(min_high_pfn << PAGE_SHIFT); kmap_init(); #endif + + free_area_init(max_zone_pfn); } /* --- a/include/linux/mm.h~mm-free_area_init-allow-defining-max_zone_pfn-in-descending-order +++ a/include/linux/mm.h @@ -2473,6 +2473,7 @@ extern void setup_per_cpu_pageset(void); extern int min_free_kbytes; extern int watermark_boost_factor; extern int watermark_scale_factor; +extern bool arch_has_descending_max_zone_pfns(void); /* nommu.c */ extern atomic_long_t mmap_pages_allocated; --- a/mm/page_alloc.c~mm-free_area_init-allow-defining-max_zone_pfn-in-descending-order +++ a/mm/page_alloc.c @@ -7408,6 +7408,15 @@ static void check_for_memory(pg_data_t * } } +/* + * Some architecturs, e.g. ARC may have ZONE_HIGHMEM below ZONE_NORMAL. 
For + * such cases we allow max_zone_pfn sorted in the descending order + */ +bool __weak arch_has_descending_max_zone_pfns(void) +{ + return false; +} + /** * free_area_init - Initialise all pg_data_t and zone data * @max_zone_pfn: an array of max PFNs for each zone @@ -7424,7 +7433,8 @@ static void check_for_memory(pg_data_t * void __init free_area_init(unsigned long *max_zone_pfn) { unsigned long start_pfn, end_pfn; - int i, nid; + int i, nid, zone; + bool descending; /* Record where the zone boundaries are */ memset(arch_zone_lowest_possible_pfn, 0, @@ -7433,14 +7443,20 @@ void __init free_area_init(unsigned long sizeof(arch_zone_highest_possible_pfn)); start_pfn = find_min_pfn_with_active_regions(); + descending = arch_has_descending_max_zone_pfns(); for (i = 0; i < MAX_NR_ZONES; i++) { - if (i == ZONE_MOVABLE) + if (descending) + zone = MAX_NR_ZONES - i - 1; + else + zone = i; + + if (zone == ZONE_MOVABLE) continue; - end_pfn = max(max_zone_pfn[i], start_pfn); - arch_zone_lowest_possible_pfn[i] = start_pfn; - arch_zone_highest_possible_pfn[i] = end_pfn; + end_pfn = max(max_zone_pfn[zone], start_pfn); + arch_zone_lowest_possible_pfn[zone] = start_pfn; + arch_zone_highest_possible_pfn[zone] = end_pfn; start_pfn = end_pfn; } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 028/131] mm: rename free_area_init_node() to free_area_init_memoryless_node() 2020-06-03 22:55 incoming Andrew Morton ` (26 preceding siblings ...) 2020-06-03 22:58 ` [patch 027/131] mm: free_area_init: allow defining max_zone_pfn in descending order Andrew Morton @ 2020-06-03 22:58 ` Andrew Morton 2020-06-03 22:58 ` [patch 029/131] mm: clean up free_area_init_node() and its helpers Andrew Morton ` (103 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:58 UTC (permalink / raw) To: akpm, bcain, bhe, catalin.marinas, corbet, dalias, davem, deller, geert, gerg, green.hu, guoren, gxt, heiko.carstens, hoan, James.Bottomley, jcmvbkbc, ley.foon.tan, linux-mm, linux, mattst88, mhocko, mm-commits, monstr, mpe, msalter, nickhu, paul.walmsley, richard, rppt, shorne, tony.luck, torvalds, tsbogend, vgupta, ysato From: Mike Rapoport <rppt@linux.ibm.com> Subject: mm: rename free_area_init_node() to free_area_init_memoryless_node() free_area_init_node() is only used by x86 to initialize memory-less nodes. Make its name reflect this and drop all the function parameters except the node ID, as they are all zero anyway. Link: http://lkml.kernel.org/r/20200412194859.12663-19-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Cc: Baoquan He <bhe@redhat.com> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. 
Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/x86/mm/numa.c | 5 +---- include/linux/mm.h | 9 +++------ mm/page_alloc.c | 7 ++----- 3 files changed, 6 insertions(+), 15 deletions(-) --- a/arch/x86/mm/numa.c~mm-rename-free_area_init_node-to-free_area_init_memoryless_node +++ a/arch/x86/mm/numa.c @@ -737,12 +737,9 @@ void __init x86_numa_init(void) static void __init init_memory_less_node(int nid) { - unsigned long zones_size[MAX_NR_ZONES] = {0}; - unsigned long zholes_size[MAX_NR_ZONES] = {0}; - /* Allocate and initialize node data. 
Memory-less node is now online.*/ alloc_node_data(nid); - free_area_init_node(nid, zones_size, 0, zholes_size); + free_area_init_memoryless_node(nid); /* * All zonelists will be built later in start_kernel() after per cpu --- a/include/linux/mm.h~mm-rename-free_area_init_node-to-free_area_init_memoryless_node +++ a/include/linux/mm.h @@ -2329,8 +2329,7 @@ static inline spinlock_t *pud_lock(struc } extern void __init pagecache_init(void); -extern void __init free_area_init_node(int nid, unsigned long * zones_size, - unsigned long zone_start_pfn, unsigned long *zholes_size); +extern void __init free_area_init_memoryless_node(int nid); extern void free_initmem(void); /* @@ -2402,10 +2401,8 @@ static inline unsigned long get_num_phys /* * Using memblock node mappings, an architecture may initialise its - * zones, allocate the backing mem_map and account for memory holes in a more - * architecture independent manner. This is a substitute for creating the - * zone_sizes[] and zholes_size[] arrays and passing them to - * free_area_init_node() + * zones, allocate the backing mem_map and account for memory holes in an + * architecture independent manner. * * An architecture is expected to register range of page frames backed by * physical memory with memblock_add[_node]() before calling --- a/mm/page_alloc.c~mm-rename-free_area_init_node-to-free_area_init_memoryless_node +++ a/mm/page_alloc.c @@ -6974,12 +6974,9 @@ static void __init __free_area_init_node free_area_init_core(pgdat); } -void __init free_area_init_node(int nid, unsigned long *zones_size, - unsigned long node_start_pfn, - unsigned long *zholes_size) +void __init free_area_init_memoryless_node(int nid) { - __free_area_init_node(nid, zones_size, node_start_pfn, zholes_size, - true); + __free_area_init_node(nid, NULL, 0, NULL, false); } #if !defined(CONFIG_FLAT_NODE_MEM_MAP) _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 029/131] mm: clean up free_area_init_node() and its helpers 2020-06-03 22:55 incoming Andrew Morton ` (27 preceding siblings ...) 2020-06-03 22:58 ` [patch 028/131] mm: rename free_area_init_node() to free_area_init_memoryless_node() Andrew Morton @ 2020-06-03 22:58 ` Andrew Morton 2020-06-03 22:58 ` [patch 030/131] mm: simplify find_min_pfn_with_active_regions() Andrew Morton ` (102 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:58 UTC (permalink / raw) To: akpm, bcain, bhe, catalin.marinas, corbet, dalias, davem, deller, geert, gerg, green.hu, guoren, gxt, heiko.carstens, hoan, James.Bottomley, jcmvbkbc, ley.foon.tan, linux-mm, linux, mattst88, mhocko, mm-commits, monstr, mpe, msalter, nickhu, paul.walmsley, richard, rppt, shorne, tony.luck, torvalds, tsbogend, vgupta, ysato From: Mike Rapoport <rppt@linux.ibm.com> Subject: mm: clean up free_area_init_node() and its helpers free_area_init_node() now always uses memblock info and the zone PFN limits, so it does not need the backwards-compatibility functions to calculate the zone spanned and absent pages. The removal of the compat_ versions of zone_{absent,spanned}_pages_in_node(), in turn, makes the zone_size and zhole_size parameters unused. The node_start_pfn is determined by get_pfn_range_for_nid(), so there is no need to pass it to free_area_init_node(). As a result, the only required parameter to free_area_init_node() is the node ID; all the rest are removed along with the no-longer-used compat_zone_{absent,spanned}_pages_in_node() helpers. Link: http://lkml.kernel.org/r/20200412194859.12663-20-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Cc: Baoquan He <bhe@redhat.com> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "David S. 
Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/page_alloc.c | 104 +++++++++------------------------------------- 1 file changed, 22 insertions(+), 82 deletions(-) --- a/mm/page_alloc.c~mm-clean-up-free_area_init_node-and-its-helpers +++ a/mm/page_alloc.c @@ -6436,8 +6436,7 @@ static unsigned long __init zone_spanned unsigned long node_start_pfn, unsigned long node_end_pfn, unsigned long *zone_start_pfn, - unsigned long *zone_end_pfn, - unsigned long *ignored) + unsigned long *zone_end_pfn) { unsigned long zone_low = arch_zone_lowest_possible_pfn[zone_type]; unsigned long zone_high = arch_zone_highest_possible_pfn[zone_type]; @@ -6501,8 +6500,7 @@ unsigned long __init absent_pages_in_ran static unsigned long __init zone_absent_pages_in_node(int nid, unsigned long zone_type, unsigned long node_start_pfn, - unsigned long node_end_pfn, - unsigned long *ignored) + unsigned long node_end_pfn) { 
unsigned long zone_low = arch_zone_lowest_possible_pfn[zone_type]; unsigned long zone_high = arch_zone_highest_possible_pfn[zone_type]; @@ -6549,43 +6547,9 @@ static unsigned long __init zone_absent_ return nr_absent; } -static inline unsigned long __init compat_zone_spanned_pages_in_node(int nid, - unsigned long zone_type, - unsigned long node_start_pfn, - unsigned long node_end_pfn, - unsigned long *zone_start_pfn, - unsigned long *zone_end_pfn, - unsigned long *zones_size) -{ - unsigned int zone; - - *zone_start_pfn = node_start_pfn; - for (zone = 0; zone < zone_type; zone++) - *zone_start_pfn += zones_size[zone]; - - *zone_end_pfn = *zone_start_pfn + zones_size[zone_type]; - - return zones_size[zone_type]; -} - -static inline unsigned long __init compat_zone_absent_pages_in_node(int nid, - unsigned long zone_type, - unsigned long node_start_pfn, - unsigned long node_end_pfn, - unsigned long *zholes_size) -{ - if (!zholes_size) - return 0; - - return zholes_size[zone_type]; -} - static void __init calculate_node_totalpages(struct pglist_data *pgdat, unsigned long node_start_pfn, - unsigned long node_end_pfn, - unsigned long *zones_size, - unsigned long *zholes_size, - bool compat) + unsigned long node_end_pfn) { unsigned long realtotalpages = 0, totalpages = 0; enum zone_type i; @@ -6596,31 +6560,14 @@ static void __init calculate_node_totalp unsigned long spanned, absent; unsigned long size, real_size; - if (compat) { - spanned = compat_zone_spanned_pages_in_node( - pgdat->node_id, i, - node_start_pfn, - node_end_pfn, - &zone_start_pfn, - &zone_end_pfn, - zones_size); - absent = compat_zone_absent_pages_in_node( - pgdat->node_id, i, - node_start_pfn, - node_end_pfn, - zholes_size); - } else { - spanned = zone_spanned_pages_in_node(pgdat->node_id, i, - node_start_pfn, - node_end_pfn, - &zone_start_pfn, - &zone_end_pfn, - zones_size); - absent = zone_absent_pages_in_node(pgdat->node_id, i, - node_start_pfn, - node_end_pfn, - zholes_size); - } + spanned = 
zone_spanned_pages_in_node(pgdat->node_id, i, + node_start_pfn, + node_end_pfn, + &zone_start_pfn, + &zone_end_pfn); + absent = zone_absent_pages_in_node(pgdat->node_id, i, + node_start_pfn, + node_end_pfn); size = spanned; real_size = size - absent; @@ -6942,10 +6889,7 @@ static inline void pgdat_set_deferred_ra static inline void pgdat_set_deferred_range(pg_data_t *pgdat) {} #endif -static void __init __free_area_init_node(int nid, unsigned long *zones_size, - unsigned long node_start_pfn, - unsigned long *zholes_size, - bool compat) +static void __init free_area_init_node(int nid) { pg_data_t *pgdat = NODE_DATA(nid); unsigned long start_pfn = 0; @@ -6954,19 +6898,16 @@ static void __init __free_area_init_node /* pg_data_t should be reset to zero when it's allocated */ WARN_ON(pgdat->nr_zones || pgdat->kswapd_classzone_idx); + get_pfn_range_for_nid(nid, &start_pfn, &end_pfn); + pgdat->node_id = nid; - pgdat->node_start_pfn = node_start_pfn; + pgdat->node_start_pfn = start_pfn; pgdat->per_cpu_nodestats = NULL; - if (!compat) { - get_pfn_range_for_nid(nid, &start_pfn, &end_pfn); - pr_info("Initmem setup node %d [mem %#018Lx-%#018Lx]\n", nid, - (u64)start_pfn << PAGE_SHIFT, - end_pfn ? ((u64)end_pfn << PAGE_SHIFT) - 1 : 0); - } else { - start_pfn = node_start_pfn; - } - calculate_node_totalpages(pgdat, start_pfn, end_pfn, - zones_size, zholes_size, compat); + + pr_info("Initmem setup node %d [mem %#018Lx-%#018Lx]\n", nid, + (u64)start_pfn << PAGE_SHIFT, + end_pfn ? 
((u64)end_pfn << PAGE_SHIFT) - 1 : 0); + calculate_node_totalpages(pgdat, start_pfn, end_pfn); alloc_node_mem_map(pgdat); pgdat_set_deferred_range(pgdat); @@ -6976,7 +6917,7 @@ static void __init __free_area_init_node void __init free_area_init_memoryless_node(int nid) { - __free_area_init_node(nid, NULL, 0, NULL, false); + free_area_init_node(nid); } #if !defined(CONFIG_FLAT_NODE_MEM_MAP) @@ -7506,8 +7447,7 @@ void __init free_area_init(unsigned long init_unavailable_mem(); for_each_online_node(nid) { pg_data_t *pgdat = NODE_DATA(nid); - __free_area_init_node(nid, NULL, - find_min_pfn_for_node(nid), NULL, false); + free_area_init_node(nid); /* Any memory on that node */ if (pgdat->node_present_pages) _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 030/131] mm: simplify find_min_pfn_with_active_regions() 2020-06-03 22:55 incoming Andrew Morton ` (28 preceding siblings ...) 2020-06-03 22:58 ` [patch 029/131] mm: clean up free_area_init_node() and its helpers Andrew Morton @ 2020-06-03 22:58 ` Andrew Morton 2020-06-03 22:58 ` [patch 031/131] docs/vm: update memory-models documentation Andrew Morton ` (101 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:58 UTC (permalink / raw) To: akpm, bcain, bhe, catalin.marinas, corbet, dalias, davem, deller, geert, gerg, green.hu, guoren, gxt, heiko.carstens, hoan, James.Bottomley, jcmvbkbc, ley.foon.tan, linux-mm, linux, mattst88, mhocko, mm-commits, monstr, mpe, msalter, nickhu, paul.walmsley, richard, rppt, shorne, tony.luck, torvalds, tsbogend, vgupta, ysato From: Mike Rapoport <rppt@linux.ibm.com> Subject: mm: simplify find_min_pfn_with_active_regions() find_min_pfn_with_active_regions() calls find_min_pfn_for_node() with nid parameter set to MAX_NUMNODES. This makes the find_min_pfn_for_node() traverse all memblock memory regions although the first PFN in the system can be easily found with memblock_start_of_DRAM(). Use memblock_start_of_DRAM() in find_min_pfn_with_active_regions() and drop now unused find_min_pfn_for_node(). Link: http://lkml.kernel.org/r/20200412194859.12663-21-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Cc: Baoquan He <bhe@redhat.com> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. 
Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/page_alloc.c | 20 +------------------- 1 file changed, 1 insertion(+), 19 deletions(-) --- a/mm/page_alloc.c~mm-simplify-find_min_pfn_with_active_regions +++ a/mm/page_alloc.c @@ -7066,24 +7066,6 @@ unsigned long __init node_map_pfn_alignm return ~accl_mask + 1; } -/* Find the lowest pfn for a node */ -static unsigned long __init find_min_pfn_for_node(int nid) -{ - unsigned long min_pfn = ULONG_MAX; - unsigned long start_pfn; - int i; - - for_each_mem_pfn_range(i, nid, &start_pfn, NULL, NULL) - min_pfn = min(min_pfn, start_pfn); - - if (min_pfn == ULONG_MAX) { - pr_warn("Could not find start_pfn for node %d\n", nid); - return 0; - } - - return min_pfn; -} - /** * find_min_pfn_with_active_regions - Find the minimum PFN registered * @@ -7092,7 +7074,7 @@ static unsigned long __init find_min_pfn */ unsigned long __init find_min_pfn_with_active_regions(void) { - return find_min_pfn_for_node(MAX_NUMNODES); + return PHYS_PFN(memblock_start_of_DRAM()); } /* _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 031/131] docs/vm: update memory-models documentation 2020-06-03 22:55 incoming Andrew Morton ` (29 preceding siblings ...) 2020-06-03 22:58 ` [patch 030/131] mm: simplify find_min_pfn_with_active_regions() Andrew Morton @ 2020-06-03 22:58 ` Andrew Morton 2020-06-03 22:58 ` [patch 032/131] mm/page_alloc.c: bad_[reason|flags] is not necessary when PageHWPoison Andrew Morton ` (100 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:58 UTC (permalink / raw) To: akpm, bcain, bhe, catalin.marinas, corbet, dalias, davem, deller, geert, gerg, green.hu, guoren, gxt, heiko.carstens, hoan, James.Bottomley, jcmvbkbc, ley.foon.tan, linux-mm, linux, mattst88, mhocko, mm-commits, monstr, mpe, msalter, nickhu, paul.walmsley, richard, rppt, shorne, tony.luck, torvalds, tsbogend, vgupta, ysato From: Mike Rapoport <rppt@linux.ibm.com> Subject: docs/vm: update memory-models documentation To reflect the updates to free_area_init() family of functions. Link: http://lkml.kernel.org/r/20200412194859.12663-22-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64] Cc: Baoquan He <bhe@redhat.com> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. 
Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- Documentation/vm/memory-model.rst | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) --- a/Documentation/vm/memory-model.rst~docs-vm-update-memory-models-documentation +++ a/Documentation/vm/memory-model.rst @@ -46,11 +46,10 @@ maps the entire physical memory. For mos have entries in the `mem_map` array. The `struct page` objects corresponding to the holes are never fully initialized. -To allocate the `mem_map` array, architecture specific setup code -should call :c:func:`free_area_init_node` function or its convenience -wrapper :c:func:`free_area_init`. Yet, the mappings array is not -usable until the call to :c:func:`memblock_free_all` that hands all -the memory to the page allocator. +To allocate the `mem_map` array, architecture specific setup code should +call :c:func:`free_area_init` function. Yet, the mappings array is not +usable until the call to :c:func:`memblock_free_all` that hands all the +memory to the page allocator. If an architecture enables `CONFIG_ARCH_HAS_HOLES_MEMORYMODEL` option, it may free parts of the `mem_map` array that do not cover the _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 032/131] mm/page_alloc.c: bad_[reason|flags] is not necessary when PageHWPoison 2020-06-03 22:55 incoming Andrew Morton ` (30 preceding siblings ...) 2020-06-03 22:58 ` [patch 031/131] docs/vm: update memory-models documentation Andrew Morton @ 2020-06-03 22:58 ` Andrew Morton 2020-06-03 22:58 ` [patch 033/131] mm/page_alloc.c: bad_flags is not necessary for bad_page() Andrew Morton ` (99 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:58 UTC (permalink / raw) To: akpm, anshuman.khandual, david, linux-mm, mhocko, mm-commits, richard.weiyang, rientjes, torvalds From: Wei Yang <richard.weiyang@gmail.com> Subject: mm/page_alloc.c: bad_[reason|flags] is not necessary when PageHWPoison Patch series "mm/page_alloc.c: cleanup on check page", v3. This patchset does some cleanup related to page checking. 1. Remove an unnecessary bad_reason assignment 2. Remove the bad_flags argument from bad_page() 3. Rename functions for naming consistency 4. Extract the common part of the checks Thanks for suggestions from David Rientjes and Anshuman Khandual. This patch (of 5): Since the function returns directly in the PageHWPoison case, bad_[reason|flags] is not used anywhere, so move this check to the top. 
This is a following cleanup for commit e570f56cccd21 ("mm: check_new_page_bad() directly returns in __PG_HWPOISON case") Link: http://lkml.kernel.org/r/20200411220357.9636-2-richard.weiyang@gmail.com Signed-off-by: Wei Yang <richard.weiyang@gmail.com> Acked-by: Michal Hocko <mhocko@suse.com> Reviewed-by: David Hildenbrand <david@redhat.com> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/page_alloc.c | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) --- a/mm/page_alloc.c~mm-page_allocc-bad_-is-not-necessary-when-pagehwpoison +++ a/mm/page_alloc.c @@ -2097,19 +2097,17 @@ static void check_new_page_bad(struct pa const char *bad_reason = NULL; unsigned long bad_flags = 0; + if (unlikely(page->flags & __PG_HWPOISON)) { + /* Don't complain about hwpoisoned pages */ + page_mapcount_reset(page); /* remove PageBuddy */ + return; + } if (unlikely(atomic_read(&page->_mapcount) != -1)) bad_reason = "nonzero mapcount"; if (unlikely(page->mapping != NULL)) bad_reason = "non-NULL mapping"; if (unlikely(page_ref_count(page) != 0)) bad_reason = "nonzero _refcount"; - if (unlikely(page->flags & __PG_HWPOISON)) { - bad_reason = "HWPoisoned (hardware-corrupted)"; - bad_flags = __PG_HWPOISON; - /* Don't complain about hwpoisoned pages */ - page_mapcount_reset(page); /* remove PageBuddy */ - return; - } if (unlikely(page->flags & PAGE_FLAGS_CHECK_AT_PREP)) { bad_reason = "PAGE_FLAGS_CHECK_AT_PREP flag set"; bad_flags = PAGE_FLAGS_CHECK_AT_PREP; _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 033/131] mm/page_alloc.c: bad_flags is not necessary for bad_page() 2020-06-03 22:55 incoming Andrew Morton ` (31 preceding siblings ...) 2020-06-03 22:58 ` [patch 032/131] mm/page_alloc.c: bad_[reason|flags] is not necessary when PageHWPoison Andrew Morton @ 2020-06-03 22:58 ` Andrew Morton 2020-06-03 22:58 ` [patch 034/131] mm/page_alloc.c: rename free_pages_check_bad() to check_free_page_bad() Andrew Morton ` (98 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:58 UTC (permalink / raw) To: akpm, anshuman.khandual, david, linux-mm, mhocko, mm-commits, richard.weiyang, rientjes, torvalds From: Wei Yang <richard.weiyang@gmail.com> Subject: mm/page_alloc.c: bad_flags is not necessary for bad_page() After commit 5b57b8f22709 ("mm/debug.c: always print flags in dump_page()"), page->flags is always printed for a bad page. It is not necessary to have bad_flags any more. Link: http://lkml.kernel.org/r/20200411220357.9636-3-richard.weiyang@gmail.com Signed-off-by: Wei Yang <richard.weiyang@gmail.com> Suggested-by: Anshuman Khandual <anshuman.khandual@arm.com> Cc: David Hildenbrand <david@redhat.com> Cc: David Rientjes <rientjes@google.com> Cc: Michal Hocko <mhocko@suse.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/page_alloc.c | 34 ++++++++++------------------------ 1 file changed, 10 insertions(+), 24 deletions(-) --- a/mm/page_alloc.c~mm-page_allocc-bad_flags-is-not-necessary-for-bad_page +++ a/mm/page_alloc.c @@ -607,8 +607,7 @@ static inline int __maybe_unused bad_ran } #endif -static void bad_page(struct page *page, const char *reason, - unsigned long bad_flags) +static void bad_page(struct page *page, const char *reason) { static unsigned long resume; static unsigned long nr_shown; @@ -637,10 +636,6 @@ static void bad_page(struct page *page, pr_alert("BUG: Bad page state in process %s pfn:%05lx\n", current->comm, page_to_pfn(page)); __dump_page(page, reason); - bad_flags &= 
page->flags; - if (bad_flags) - pr_alert("bad because of flags: %#lx(%pGp)\n", - bad_flags, &bad_flags); dump_page_owner(page); print_modules(); @@ -1077,11 +1072,7 @@ static inline bool page_expected_state(s static void free_pages_check_bad(struct page *page) { - const char *bad_reason; - unsigned long bad_flags; - - bad_reason = NULL; - bad_flags = 0; + const char *bad_reason = NULL; if (unlikely(atomic_read(&page->_mapcount) != -1)) bad_reason = "nonzero mapcount"; @@ -1089,15 +1080,13 @@ static void free_pages_check_bad(struct bad_reason = "non-NULL mapping"; if (unlikely(page_ref_count(page) != 0)) bad_reason = "nonzero _refcount"; - if (unlikely(page->flags & PAGE_FLAGS_CHECK_AT_FREE)) { + if (unlikely(page->flags & PAGE_FLAGS_CHECK_AT_FREE)) bad_reason = "PAGE_FLAGS_CHECK_AT_FREE flag(s) set"; - bad_flags = PAGE_FLAGS_CHECK_AT_FREE; - } #ifdef CONFIG_MEMCG if (unlikely(page->mem_cgroup)) bad_reason = "page still charged to cgroup"; #endif - bad_page(page, bad_reason, bad_flags); + bad_page(page, bad_reason); } static inline int free_pages_check(struct page *page) @@ -1128,7 +1117,7 @@ static int free_tail_pages_check(struct case 1: /* the first tail page: ->mapping may be compound_mapcount() */ if (unlikely(compound_mapcount(page))) { - bad_page(page, "nonzero compound_mapcount", 0); + bad_page(page, "nonzero compound_mapcount"); goto out; } break; @@ -1140,17 +1129,17 @@ static int free_tail_pages_check(struct break; default: if (page->mapping != TAIL_MAPPING) { - bad_page(page, "corrupted mapping in tail page", 0); + bad_page(page, "corrupted mapping in tail page"); goto out; } break; } if (unlikely(!PageTail(page))) { - bad_page(page, "PageTail not set", 0); + bad_page(page, "PageTail not set"); goto out; } if (unlikely(compound_head(page) != head_page)) { - bad_page(page, "compound_head not consistent", 0); + bad_page(page, "compound_head not consistent"); goto out; } ret = 0; @@ -2095,7 +2084,6 @@ static inline void expand(struct zone *z static void 
check_new_page_bad(struct page *page) { const char *bad_reason = NULL; - unsigned long bad_flags = 0; if (unlikely(page->flags & __PG_HWPOISON)) { /* Don't complain about hwpoisoned pages */ @@ -2108,15 +2096,13 @@ static void check_new_page_bad(struct pa bad_reason = "non-NULL mapping"; if (unlikely(page_ref_count(page) != 0)) bad_reason = "nonzero _refcount"; - if (unlikely(page->flags & PAGE_FLAGS_CHECK_AT_PREP)) { + if (unlikely(page->flags & PAGE_FLAGS_CHECK_AT_PREP)) bad_reason = "PAGE_FLAGS_CHECK_AT_PREP flag set"; - bad_flags = PAGE_FLAGS_CHECK_AT_PREP; - } #ifdef CONFIG_MEMCG if (unlikely(page->mem_cgroup)) bad_reason = "page still charged to cgroup"; #endif - bad_page(page, bad_reason, bad_flags); + bad_page(page, bad_reason); } /* _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 034/131] mm/page_alloc.c: rename free_pages_check_bad() to check_free_page_bad() 2020-06-03 22:55 incoming Andrew Morton ` (32 preceding siblings ...) 2020-06-03 22:58 ` [patch 033/131] mm/page_alloc.c: bad_flags is not necessary for bad_page() Andrew Morton @ 2020-06-03 22:58 ` Andrew Morton 2020-06-03 22:58 ` [patch 035/131] mm/page_alloc.c: rename free_pages_check() to check_free_page() Andrew Morton ` (97 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:58 UTC (permalink / raw) To: akpm, anshuman.khandual, david, linux-mm, mhocko, mm-commits, richard.weiyang, rientjes, torvalds From: Wei Yang <richard.weiyang@gmail.com> Subject: mm/page_alloc.c: rename free_pages_check_bad() to check_free_page_bad() free_pages_check_bad() is the counterpart of check_new_page_bad(). Rename it to use the same naming convention. Link: http://lkml.kernel.org/r/20200411220357.9636-4-richard.weiyang@gmail.com Signed-off-by: Wei Yang <richard.weiyang@gmail.com> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: David Hildenbrand <david@redhat.com> Cc: David Rientjes <rientjes@google.com> Cc: Michal Hocko <mhocko@suse.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/page_alloc.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) --- a/mm/page_alloc.c~mm-page_allocc-rename-free_pages_check_bad-to-check_free_page_bad +++ a/mm/page_alloc.c @@ -1070,7 +1070,7 @@ static inline bool page_expected_state(s return true; } -static void free_pages_check_bad(struct page *page) +static void check_free_page_bad(struct page *page) { const char *bad_reason = NULL; @@ -1095,7 +1095,7 @@ static inline int free_pages_check(struc return 0; /* Something has gone sideways, find it */ - free_pages_check_bad(page); + check_free_page_bad(page); return 1; } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 035/131] mm/page_alloc.c: rename free_pages_check() to check_free_page() 2020-06-03 22:55 incoming Andrew Morton ` (33 preceding siblings ...) 2020-06-03 22:58 ` [patch 034/131] mm/page_alloc.c: rename free_pages_check_bad() to check_free_page_bad() Andrew Morton @ 2020-06-03 22:58 ` Andrew Morton 2020-06-03 22:58 ` [patch 036/131] mm/page_alloc.c: extract check_[new|free]_page_bad() common part to page_bad_reason() Andrew Morton ` (96 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:58 UTC (permalink / raw) To: akpm, anshuman.khandual, david, linux-mm, mhocko, mm-commits, richard.weiyang, rientjes, torvalds From: Wei Yang <richard.weiyang@gmail.com> Subject: mm/page_alloc.c: rename free_pages_check() to check_free_page() free_pages_check() is the counterpart of check_new_page(). Rename it to use the same naming convention. Link: http://lkml.kernel.org/r/20200411220357.9636-5-richard.weiyang@gmail.com Signed-off-by: Wei Yang <richard.weiyang@gmail.com> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: David Hildenbrand <david@redhat.com> Cc: David Rientjes <rientjes@google.com> Cc: Michal Hocko <mhocko@suse.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/page_alloc.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) --- a/mm/page_alloc.c~mm-page_allocc-rename-free_pages_check-to-check_free_page +++ a/mm/page_alloc.c @@ -1089,7 +1089,7 @@ static void check_free_page_bad(struct p bad_page(page, bad_reason); } -static inline int free_pages_check(struct page *page) +static inline int check_free_page(struct page *page) { if (likely(page_expected_state(page, PAGE_FLAGS_CHECK_AT_FREE))) return 0; @@ -1181,7 +1181,7 @@ static __always_inline bool free_pages_p for (i = 1; i < (1 << order); i++) { if (compound) bad += free_tail_pages_check(page, page + i); - if (unlikely(free_pages_check(page + i))) { + if (unlikely(check_free_page(page + i))) { bad++; continue; } @@ 
-1193,7 +1193,7 @@ static __always_inline bool free_pages_p if (memcg_kmem_enabled() && PageKmemcg(page)) __memcg_kmem_uncharge_page(page, order); if (check_free) - bad += free_pages_check(page); + bad += check_free_page(page); if (bad) return false; @@ -1240,7 +1240,7 @@ static bool free_pcp_prepare(struct page static bool bulkfree_pcp_prepare(struct page *page) { if (debug_pagealloc_enabled_static()) - return free_pages_check(page); + return check_free_page(page); else return false; } @@ -1261,7 +1261,7 @@ static bool free_pcp_prepare(struct page static bool bulkfree_pcp_prepare(struct page *page) { - return free_pages_check(page); + return check_free_page(page); } #endif /* CONFIG_DEBUG_VM */ _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 036/131] mm/page_alloc.c: extract check_[new|free]_page_bad() common part to page_bad_reason() 2020-06-03 22:55 incoming Andrew Morton ` (34 preceding siblings ...) 2020-06-03 22:58 ` [patch 035/131] mm/page_alloc.c: rename free_pages_check() to check_free_page() Andrew Morton @ 2020-06-03 22:58 ` Andrew Morton 2020-06-03 22:58 ` [patch 037/131] mm,page_alloc,cma: conditionally prefer cma pageblocks for movable allocations Andrew Morton ` (95 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:58 UTC (permalink / raw) To: akpm, anshuman.khandual, david, linux-mm, mhocko, mm-commits, richard.weiyang, rientjes, torvalds From: Wei Yang <richard.weiyang@gmail.com> Subject: mm/page_alloc.c: extract check_[new|free]_page_bad() common part to page_bad_reason() We share similar code in check_[new|free]_page_bad() to get the page's bad reason. Let's extract it and reduce code duplication. Link: http://lkml.kernel.org/r/20200411220357.9636-6-richard.weiyang@gmail.com Signed-off-by: Wei Yang <richard.weiyang@gmail.com> Cc: David Rientjes <rientjes@google.com> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: David Hildenbrand <david@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/page_alloc.c | 36 +++++++++++++++++------------------- 1 file changed, 17 insertions(+), 19 deletions(-) --- a/mm/page_alloc.c~mm-page_allocc-extract-check__page_bad-common-part-to-page_bad_reason +++ a/mm/page_alloc.c @@ -1070,7 +1070,7 @@ static inline bool page_expected_state(s return true; } -static void check_free_page_bad(struct page *page) +static const char *page_bad_reason(struct page *page, unsigned long flags) { const char *bad_reason = NULL; @@ -1080,13 +1080,23 @@ static void check_free_page_bad(struct p bad_reason = "non-NULL mapping"; if (unlikely(page_ref_count(page) != 0)) bad_reason = "nonzero _refcount"; - if (unlikely(page->flags & 
PAGE_FLAGS_CHECK_AT_FREE)) - bad_reason = "PAGE_FLAGS_CHECK_AT_FREE flag(s) set"; + if (unlikely(page->flags & flags)) { + if (flags == PAGE_FLAGS_CHECK_AT_PREP) + bad_reason = "PAGE_FLAGS_CHECK_AT_PREP flag(s) set"; + else + bad_reason = "PAGE_FLAGS_CHECK_AT_FREE flag(s) set"; + } #ifdef CONFIG_MEMCG if (unlikely(page->mem_cgroup)) bad_reason = "page still charged to cgroup"; #endif - bad_page(page, bad_reason); + return bad_reason; +} + +static void check_free_page_bad(struct page *page) +{ + bad_page(page, + page_bad_reason(page, PAGE_FLAGS_CHECK_AT_FREE)); } static inline int check_free_page(struct page *page) @@ -2083,26 +2093,14 @@ static inline void expand(struct zone *z static void check_new_page_bad(struct page *page) { - const char *bad_reason = NULL; - if (unlikely(page->flags & __PG_HWPOISON)) { /* Don't complain about hwpoisoned pages */ page_mapcount_reset(page); /* remove PageBuddy */ return; } - if (unlikely(atomic_read(&page->_mapcount) != -1)) - bad_reason = "nonzero mapcount"; - if (unlikely(page->mapping != NULL)) - bad_reason = "non-NULL mapping"; - if (unlikely(page_ref_count(page) != 0)) - bad_reason = "nonzero _refcount"; - if (unlikely(page->flags & PAGE_FLAGS_CHECK_AT_PREP)) - bad_reason = "PAGE_FLAGS_CHECK_AT_PREP flag set"; -#ifdef CONFIG_MEMCG - if (unlikely(page->mem_cgroup)) - bad_reason = "page still charged to cgroup"; -#endif - bad_page(page, bad_reason); + + bad_page(page, + page_bad_reason(page, PAGE_FLAGS_CHECK_AT_PREP)); } /* _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 037/131] mm,page_alloc,cma: conditionally prefer cma pageblocks for movable allocations 2020-06-03 22:55 incoming Andrew Morton ` (35 preceding siblings ...) 2020-06-03 22:58 ` [patch 036/131] mm/page_alloc.c: extract check_[new|free]_page_bad() common part to page_bad_reason() Andrew Morton @ 2020-06-03 22:58 ` Andrew Morton 2020-06-03 22:58 ` [patch 038/131] mm/page_alloc.c: remove unused free_bootmem_with_active_regions Andrew Morton ` (94 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:58 UTC (permalink / raw) To: akpm, anshuman.khandual, cai, guro, js1304, linux-mm, mgorman, minchan, mm-commits, riel, torvalds, vbabka From: Roman Gushchin <guro@fb.com> Subject: mm,page_alloc,cma: conditionally prefer cma pageblocks for movable allocations Currently a cma area is barely used by the page allocator because it's used only as a fallback from movable; however, kswapd tries hard to make sure that the fallback path isn't used. This results in a system evicting memory and pushing data into swap, while lots of CMA memory is still available. This happens despite the fact that alloc_contig_range is perfectly capable of moving any movable allocations out of the way of an allocation. To effectively use the cma area, let's alter the rules: if the zone has more free cma pages than half of the total free pages in the zone, use cma pageblocks first and fall back to movable blocks in the case of failure.
[guro@fb.com: ifdef the cma-specific code] Link: http://lkml.kernel.org/r/20200311225832.GA178154@carbon.DHCP.thefacebook.com Link: http://lkml.kernel.org/r/20200306150102.3e77354b@imladris.surriel.com Signed-off-by: Roman Gushchin <guro@fb.com> Signed-off-by: Rik van Riel <riel@surriel.com> Co-developed-by: Rik van Riel <riel@surriel.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Minchan Kim <minchan@kernel.org> Cc: Qian Cai <cai@lca.pw> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Joonsoo Kim <js1304@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/page_alloc.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+) --- a/mm/page_alloc.c~mmpage_alloccma-conditionally-prefer-cma-pageblocks-for-movable-allocations +++ a/mm/page_alloc.c @@ -2752,6 +2752,20 @@ __rmqueue(struct zone *zone, unsigned in { struct page *page; +#ifdef CONFIG_CMA + /* + * Balance movable allocations between regular and CMA areas by + * allocating from CMA when over half of the zone's free memory + * is in the CMA area. + */ + if (migratetype == MIGRATE_MOVABLE && + zone_page_state(zone, NR_FREE_CMA_PAGES) > + zone_page_state(zone, NR_FREE_PAGES) / 2) { + page = __rmqueue_cma_fallback(zone, order); + if (page) + return page; + } +#endif retry: page = __rmqueue_smallest(zone, order, migratetype); if (unlikely(!page)) { _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 038/131] mm/page_alloc.c: remove unused free_bootmem_with_active_regions 2020-06-03 22:55 incoming Andrew Morton ` (36 preceding siblings ...) 2020-06-03 22:58 ` [patch 037/131] mm,page_alloc,cma: conditionally prefer cma pageblocks for movable allocations Andrew Morton @ 2020-06-03 22:58 ` Andrew Morton 2020-06-03 22:58 ` [patch 039/131] mm/page_alloc.c: only tune sysctl_lowmem_reserve_ratio value once when changing it Andrew Morton ` (93 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:58 UTC (permalink / raw) To: akpm, bhe, david, linux-mm, mhocko, mm-commits, torvalds From: Baoquan He <bhe@redhat.com> Subject: mm/page_alloc.c: remove unused free_bootmem_with_active_regions Since commit 397dc00e249ec64e10 ("mips: sgi-ip27: switch from DISCONTIGMEM to SPARSEMEM"), the last caller of free_bootmem_with_active_regions() was gone. Now no user calls it any more. Let's remove it. Link: http://lkml.kernel.org/r/20200402143455.5145-1-bhe@redhat.com Signed-off-by: Baoquan He <bhe@redhat.com> Acked-by: Michal Hocko <mhocko@suse.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/mm.h | 4 ---- mm/page_alloc.c | 25 ------------------------- 2 files changed, 29 deletions(-) --- a/include/linux/mm.h~mm-remove-unused-free_bootmem_with_active_regions +++ a/include/linux/mm.h @@ -2415,8 +2415,6 @@ static inline unsigned long get_num_phys * memblock_add_node(base, size, nid) * free_area_init(max_zone_pfns); * - * free_bootmem_with_active_regions() calls free_bootmem_node() for each - * registered physical page range. Similarly * sparse_memory_present_with_active_regions() calls memory_present() for * each range when SPARSEMEM is enabled. 
*/ @@ -2429,8 +2427,6 @@ extern unsigned long absent_pages_in_ran extern void get_pfn_range_for_nid(unsigned int nid, unsigned long *start_pfn, unsigned long *end_pfn); extern unsigned long find_min_pfn_with_active_regions(void); -extern void free_bootmem_with_active_regions(int nid, - unsigned long max_low_pfn); extern void sparse_memory_present_with_active_regions(int nid); #ifndef CONFIG_NEED_MULTIPLE_NODES --- a/mm/page_alloc.c~mm-remove-unused-free_bootmem_with_active_regions +++ a/mm/page_alloc.c @@ -6296,31 +6296,6 @@ void __meminit init_currently_empty_zone } /** - * free_bootmem_with_active_regions - Call memblock_free_early_nid for each active range - * @nid: The node to free memory on. If MAX_NUMNODES, all nodes are freed. - * @max_low_pfn: The highest PFN that will be passed to memblock_free_early_nid - * - * If an architecture guarantees that all ranges registered contain no holes - * and may be freed, this this function may be used instead of calling - * memblock_free_early_nid() manually. - */ -void __init free_bootmem_with_active_regions(int nid, unsigned long max_low_pfn) -{ - unsigned long start_pfn, end_pfn; - int i, this_nid; - - for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, &this_nid) { - start_pfn = min(start_pfn, max_low_pfn); - end_pfn = min(end_pfn, max_low_pfn); - - if (start_pfn < end_pfn) - memblock_free_early_nid(PFN_PHYS(start_pfn), - (end_pfn - start_pfn) << PAGE_SHIFT, - this_nid); - } -} - -/** * sparse_memory_present_with_active_regions - Call memory_present for each active range * @nid: The node to call memory_present for. If MAX_NUMNODES, all nodes will be used. * _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 039/131] mm/page_alloc.c: only tune sysctl_lowmem_reserve_ratio value once when changing it 2020-06-03 22:55 incoming Andrew Morton ` (37 preceding siblings ...) 2020-06-03 22:58 ` [patch 038/131] mm/page_alloc.c: remove unused free_bootmem_with_active_regions Andrew Morton @ 2020-06-03 22:58 ` Andrew Morton 2020-06-03 22:58 ` [patch 040/131] mm/page_alloc.c: clear out zone->lowmem_reserve[] if the zone is empty Andrew Morton ` (92 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:58 UTC (permalink / raw) To: akpm, bhe, iamjoonsoo.kim, linux-mm, mgorman, mhocko, mm-commits, rientjes, torvalds From: Baoquan He <bhe@redhat.com> Subject: mm/page_alloc.c: only tune sysctl_lowmem_reserve_ratio value once when changing it Patch series "improvements about lowmem_reserve and /proc/zoneinfo", v2. This patch (of 3): When people write to /proc/sys/vm/lowmem_reserve_ratio to change sysctl_lowmem_reserve_ratio[], setup_per_zone_lowmem_reserve() is called to recalculate all ->lowmem_reserve[] for each zone of all nodes as below: static void setup_per_zone_lowmem_reserve(void) { ... for_each_online_pgdat(pgdat) { for (j = 0; j < MAX_NR_ZONES; j++) { ... while (idx) { ... if (sysctl_lowmem_reserve_ratio[idx] < 1) { sysctl_lowmem_reserve_ratio[idx] = 0; lower_zone->lowmem_reserve[j] = 0; } else { ... } } } } Meanwhile, here, sysctl_lowmem_reserve_ratio[idx] will be tuned if its value is smaller than '1'. As we know, sysctl_lowmem_reserve_ratio[] is set per zone, without regard to which node a zone belongs to. That means the tuning will be done on all nodes, even though it has already been done for the first node. The tuning is also done when init_per_zone_wmark_min() calls setup_per_zone_lowmem_reserve(), where nobody is actually trying to change sysctl_lowmem_reserve_ratio[]. So move the tuning into lowmem_reserve_ratio_sysctl_handler() to make the code logic more reasonable.
Link: http://lkml.kernel.org/r/20200402140113.3696-1-bhe@redhat.com Link: http://lkml.kernel.org/r/20200402140113.3696-2-bhe@redhat.com Signed-off-by: Baoquan He <bhe@redhat.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Baoquan He <bhe@redhat.com> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/page_alloc.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) --- a/mm/page_alloc.c~mm-page_allocc-only-tune-sysctl_lowmem_reserve_ratio-value-once-when-changing-it +++ a/mm/page_alloc.c @@ -7704,8 +7704,7 @@ static void setup_per_zone_lowmem_reserv idx--; lower_zone = pgdat->node_zones + idx; - if (sysctl_lowmem_reserve_ratio[idx] < 1) { - sysctl_lowmem_reserve_ratio[idx] = 0; + if (!sysctl_lowmem_reserve_ratio[idx]) { lower_zone->lowmem_reserve[j] = 0; } else { lower_zone->lowmem_reserve[j] = @@ -7970,7 +7969,15 @@ int sysctl_min_slab_ratio_sysctl_handler int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write, void __user *buffer, size_t *length, loff_t *ppos) { + int i; + proc_dointvec_minmax(table, write, buffer, length, ppos); + + for (i = 0; i < MAX_NR_ZONES; i++) { + if (sysctl_lowmem_reserve_ratio[i] < 1) + sysctl_lowmem_reserve_ratio[i] = 0; + } + setup_per_zone_lowmem_reserve(); return 0; } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 040/131] mm/page_alloc.c: clear out zone->lowmem_reserve[] if the zone is empty 2020-06-03 22:55 incoming Andrew Morton ` (38 preceding siblings ...) 2020-06-03 22:58 ` [patch 039/131] mm/page_alloc.c: only tune sysctl_lowmem_reserve_ratio value once when changing it Andrew Morton @ 2020-06-03 22:58 ` Andrew Morton 2020-06-03 22:58 ` [patch 041/131] mm/vmstat.c: do not show lowmem reserve protection information of empty zone Andrew Morton ` (91 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:58 UTC (permalink / raw) To: akpm, bhe, linux-mm, mm-commits, torvalds From: Baoquan He <bhe@redhat.com> Subject: mm/page_alloc.c: clear out zone->lowmem_reserve[] if the zone is empty When a memory allocation request for a specific zone cannot be satisfied, it falls back to a lower zone to try allocating memory. In this case, the lower zone's ->lowmem_reserve[] helps protect its own memory resource. The higher the relevant ->lowmem_reserve[] is, the harder it is for the upper zone to get memory from this lower zone. However, this protection mechanism should only be applied to populated zones, not empty ones. So filling ->lowmem_reserve[] for an empty zone is unnecessary, and may mislead people into thinking it is valid data for that zone. Node 2, zone DMA pages free 0 min 0 low 0 high 0 spanned 0 present 0 managed 0 protection: (0, 0, 1024, 1024) Node 2, zone DMA32 pages free 0 min 0 low 0 high 0 spanned 0 present 0 managed 0 protection: (0, 0, 1024, 1024) Node 2, zone Normal per-node stats nr_inactive_anon 0 nr_active_anon 143 nr_inactive_file 0 nr_active_file 0 nr_unevictable 0 nr_slab_reclaimable 45 nr_slab_unreclaimable 254 Clear out zone->lowmem_reserve[] if the zone is empty.
Link: http://lkml.kernel.org/r/20200402140113.3696-3-bhe@redhat.com Signed-off-by: Baoquan He <bhe@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/page_alloc.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) --- a/mm/page_alloc.c~mm-page_allocc-clear-out-zone-lowmem_reserve-if-the-zone-is-empty +++ a/mm/page_alloc.c @@ -7704,8 +7704,10 @@ static void setup_per_zone_lowmem_reserv idx--; lower_zone = pgdat->node_zones + idx; - if (!sysctl_lowmem_reserve_ratio[idx]) { + if (!sysctl_lowmem_reserve_ratio[idx] || + !zone_managed_pages(lower_zone)) { lower_zone->lowmem_reserve[j] = 0; + continue; } else { lower_zone->lowmem_reserve[j] = managed_pages / sysctl_lowmem_reserve_ratio[idx]; _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 041/131] mm/vmstat.c: do not show lowmem reserve protection information of empty zone 2020-06-03 22:55 incoming Andrew Morton ` (39 preceding siblings ...) 2020-06-03 22:58 ` [patch 040/131] mm/page_alloc.c: clear out zone->lowmem_reserve[] if the zone is empty Andrew Morton @ 2020-06-03 22:58 ` Andrew Morton 2020-06-03 22:58 ` [patch 042/131] mm/page_alloc: use ac->high_zoneidx for classzone_idx Andrew Morton ` (90 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:58 UTC (permalink / raw) To: akpm, bhe, linux-mm, mm-commits, torvalds From: Baoquan He <bhe@redhat.com> Subject: mm/vmstat.c: do not show lowmem reserve protection information of empty zone The lowmem reserve protection of a zone can't tell anything if the zone is empty; it only adds one more line to /proc/zoneinfo. Let's remove it from the output for empty zones. Link: http://lkml.kernel.org/r/20200402140113.3696-4-bhe@redhat.com Signed-off-by: Baoquan He <bhe@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/vmstat.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) --- a/mm/vmstat.c~mm-vmstatc-do-not-show-lowmem-reserve-protection-information-of-empty-zone +++ a/mm/vmstat.c @@ -1592,6 +1592,12 @@ static void zoneinfo_show_print(struct s zone->present_pages, zone_managed_pages(zone)); + /* If unpopulated, no other information is useful */ + if (!populated_zone(zone)) { + seq_putc(m, '\n'); + return; + } + seq_printf(m, "\n protection: (%ld", zone->lowmem_reserve[0]); @@ -1599,12 +1605,6 @@ static void zoneinfo_show_print(struct s seq_printf(m, ", %ld", zone->lowmem_reserve[i]); seq_putc(m, ')'); - /* If unpopulated, no other information is useful */ - if (!populated_zone(zone)) { - seq_putc(m, '\n'); - return; - } - for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) seq_printf(m, "\n %-12s %lu", zone_stat_name(i), zone_page_state(zone, i)); _ ^ permalink raw reply [flat|nested] 349+ messages in
thread
* [patch 042/131] mm/page_alloc: use ac->high_zoneidx for classzone_idx 2020-06-03 22:55 incoming Andrew Morton ` (40 preceding siblings ...) 2020-06-03 22:58 ` [patch 041/131] mm/vmstat.c: do not show lowmem reserve protection information of empty zone Andrew Morton @ 2020-06-03 22:58 ` Andrew Morton 2020-06-03 22:59 ` [patch 043/131] mm/page_alloc: integrate classzone_idx and high_zoneidx Andrew Morton ` (89 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:58 UTC (permalink / raw) To: akpm, bhe, hannes, iamjoonsoo.kim, linux-mm, mgorman, mhocko, minchan, mm-commits, rientjes, torvalds, vbabka, xiaolong.ye From: Joonsoo Kim <iamjoonsoo.kim@lge.com> Subject: mm/page_alloc: use ac->high_zoneidx for classzone_idx Patch series "integrate classzone_idx and high_zoneidx", v5. This patchset is a followup to a problem reported and discussed two years ago [1, 2]. The problem this patchset solves is related to the classzone_idx on NUMA systems. It causes a problem when the lowmem reserve protection exists for some zones on a node that do not exist on other nodes. This problem was reported two years ago, and, at that time, the solution got general agreement [2], but it was not upstreamed. [1]: http://lkml.kernel.org/r/20180102063528.GG30397@yexl-desktop [2]: http://lkml.kernel.org/r/1525408246-14768-1-git-send-email-iamjoonsoo.kim@lge.com This patch (of 2): Currently, we use classzone_idx to calculate the lowmem reserve protection for an allocation request. This classzone_idx causes a problem on NUMA systems when the lowmem reserve protection exists for some zones on a node that do not exist on other nodes. Before further explanation, I should first clarify how the classzone_idx and the high_zoneidx are computed.
- ac->high_zoneidx is computed via the arcane gfp_zone(gfp_mask) and represents the index of the highest zone the allocation can use - classzone_idx was supposed to be the index of the highest zone on the local node that the allocation can use, that is actually available in the system Think about the following example. Node 0 has 4 populated zones, DMA/DMA32/NORMAL/MOVABLE. Node 1 has 1 populated zone, NORMAL. Some zones, such as MOVABLE, don't exist on node 1, and this makes the following difference. Assume that there is an allocation request whose gfp_zone(gfp_mask) is the zone MOVABLE. Then its high_zoneidx is 3. If this allocation is initiated on node 0, its classzone_idx is 3, since the highest actually available/usable zone on the local node (node 0) is MOVABLE. If this allocation is initiated on node 1, its classzone_idx is 2, since the highest actually available/usable zone on the local node (node 1) is NORMAL. You can see that the classzone_idx of the allocation requests differs according to their starting node, even if their high_zoneidx is the same. Think more about these two allocation requests. If they are processed locally, there is no problem. However, if an allocation initiated on node 1 is processed remotely, in this example at the NORMAL zone on node 0 due to memory shortage, a problem occurs. Their different classzone_idx leads to different lowmem reserves and thus different min watermarks. See the following example. root@ubuntu:/sys/devices/system/memory# cat /proc/zoneinfo Node 0, zone DMA per-node stats ... pages free 3965 min 5 low 8 high 11 spanned 4095 present 3998 managed 3977 protection: (0, 2961, 4928, 5440) ... Node 0, zone DMA32 pages free 757955 min 1129 low 1887 high 2645 spanned 1044480 present 782303 managed 758116 protection: (0, 0, 1967, 2479) ... Node 0, zone Normal pages free 459806 min 750 low 1253 high 1756 spanned 524288 present 524288 managed 503620 protection: (0, 0, 0, 4096) ...
Node 0, zone Movable pages free 130759 min 195 low 326 high 457 spanned 1966079 present 131072 managed 131072 protection: (0, 0, 0, 0) ... Node 1, zone DMA pages free 0 min 0 low 0 high 0 spanned 0 present 0 managed 0 protection: (0, 0, 1006, 1006) Node 1, zone DMA32 pages free 0 min 0 low 0 high 0 spanned 0 present 0 managed 0 protection: (0, 0, 1006, 1006) Node 1, zone Normal per-node stats ... pages free 233277 min 383 low 640 high 897 spanned 262144 present 262144 managed 257744 protection: (0, 0, 0, 0) ... Node 1, zone Movable pages free 0 min 0 low 0 high 0 spanned 262144 present 0 managed 0 protection: (0, 0, 0, 0) - static min watermark for the NORMAL zone on node 0 is 750. - lowmem reserve for the request with classzone idx 3 at the NORMAL on node 0 is 4096. - lowmem reserve for the request with classzone idx 2 at the NORMAL on node 0 is 0. So, the overall min watermark is: allocation initiated on node 0 (classzone_idx 3): 750 + 4096 = 4846 allocation initiated on node 1 (classzone_idx 2): 750 + 0 = 750 An allocation initiated on node 1 takes some precedence over an allocation initiated on node 0, because the former's min watermark is lower. So an allocation initiated on node 1 could succeed on node 0 when one initiated on node 0 could not, and this could cause too many numa_miss allocations. Performance could then be degraded. Recently, there was a regression report about this problem against the CMA patches, since CMA memory is placed in ZONE_MOVABLE by those patches. I checked that the problem disappears with this fix, which uses high_zoneidx for classzone_idx. http://lkml.kernel.org/r/20180102063528.GG30397@yexl-desktop Using high_zoneidx for classzone_idx is a more consistent approach than the previous one because the system's memory layout doesn't affect it. With this patch, both classzone_idx values in the above example will be 3, so both will have the same min watermark.
allocation initiated on node 0: 750 + 4096 = 4846 allocation initiated on node 1: 750 + 4096 = 4846 One could wonder whether there is a side effect that an allocation initiated on node 1 would use a higher bar when the allocation is handled locally, since the classzone_idx could be higher than before. That will not happen, because a zone without managed pages doesn't contribute to lowmem_reserve at all. Link: http://lkml.kernel.org/r/1587095923-7515-1-git-send-email-iamjoonsoo.kim@lge.com Link: http://lkml.kernel.org/r/1587095923-7515-2-git-send-email-iamjoonsoo.kim@lge.com Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Reported-by: Ye Xiaolong <xiaolong.ye@intel.com> Tested-by: Ye Xiaolong <xiaolong.ye@intel.com> Reviewed-by: Baoquan He <bhe@redhat.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: David Rientjes <rientjes@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@kernel.org> Cc: Minchan Kim <minchan@kernel.org> Cc: Mel Gorman <mgorman@techsingularity.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/internal.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) --- a/mm/internal.h~mm-page_alloc-use-ac-high_zoneidx-for-classzone_idx +++ a/mm/internal.h @@ -144,7 +144,7 @@ struct alloc_context { bool spread_dirty_pages; }; -#define ac_classzone_idx(ac) zonelist_zone_idx(ac->preferred_zoneref) +#define ac_classzone_idx(ac) (ac->high_zoneidx) /* * Locate the struct page for both the matching buddy in our _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 043/131] mm/page_alloc: integrate classzone_idx and high_zoneidx 2020-06-03 22:55 incoming Andrew Morton ` (41 preceding siblings ...) 2020-06-03 22:58 ` [patch 042/131] mm/page_alloc: use ac->high_zoneidx for classzone_idx Andrew Morton @ 2020-06-03 22:59 ` Andrew Morton 2020-06-03 22:59 ` [patch 044/131] mm/page_alloc.c: use NODE_MASK_NONE in build_zonelists() Andrew Morton ` (88 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:59 UTC (permalink / raw) To: akpm, bhe, hannes, iamjoonsoo.kim, linux-mm, mgorman, mhocko, minchan, mm-commits, rientjes, torvalds, vbabka, xiaolong.ye From: Joonsoo Kim <iamjoonsoo.kim@lge.com> Subject: mm/page_alloc: integrate classzone_idx and high_zoneidx classzone_idx is just a different name for high_zoneidx now. So, integrate them and add a comment to struct alloc_context in order to reduce future confusion about the meaning of this variable. The accessor ac_classzone_idx() is also removed, since it isn't needed after the integration. In addition, this patch renames high_zoneidx to highest_zoneidx, since that name expresses the meaning more precisely.
Link: http://lkml.kernel.org/r/1587095923-7515-3-git-send-email-iamjoonsoo.kim@lge.com Reviewed-by: Baoquan He <bhe@redhat.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Acked-by: David Rientjes <rientjes@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: Michal Hocko <mhocko@kernel.org> Cc: Minchan Kim <minchan@kernel.org> Cc: Ye Xiaolong <xiaolong.ye@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/compaction.h | 9 +- include/linux/mmzone.h | 12 +-- include/trace/events/compaction.h | 22 +++-- include/trace/events/vmscan.h | 14 ++- mm/compaction.c | 64 ++++++++--------- mm/internal.h | 21 ++++- mm/memory_hotplug.c | 6 - mm/oom_kill.c | 4 - mm/page_alloc.c | 60 ++++++++-------- mm/slab.c | 4 - mm/slub.c | 4 - mm/vmscan.c | 105 ++++++++++++++-------------- 12 files changed, 175 insertions(+), 150 deletions(-) --- a/include/linux/compaction.h~mm-page_alloc-integrate-classzone_idx-and-high_zoneidx +++ a/include/linux/compaction.h @@ -97,7 +97,7 @@ extern enum compact_result try_to_compac struct page **page); extern void reset_isolation_suitable(pg_data_t *pgdat); extern enum compact_result compaction_suitable(struct zone *zone, int order, - unsigned int alloc_flags, int classzone_idx); + unsigned int alloc_flags, int highest_zoneidx); extern void defer_compaction(struct zone *zone, int order); extern bool compaction_deferred(struct zone *zone, int order); @@ -182,7 +182,7 @@ bool compaction_zonelist_suitable(struct extern int kcompactd_run(int nid); extern void kcompactd_stop(int nid); -extern void wakeup_kcompactd(pg_data_t *pgdat, int order, int classzone_idx); +extern void wakeup_kcompactd(pg_data_t *pgdat, int order, int highest_zoneidx); #else static inline void reset_isolation_suitable(pg_data_t *pgdat) @@ -190,7 +190,7 @@ static inline void reset_isolation_suita } static inline enum compact_result 
compaction_suitable(struct zone *zone, int order, - int alloc_flags, int classzone_idx) + int alloc_flags, int highest_zoneidx) { return COMPACT_SKIPPED; } @@ -232,7 +232,8 @@ static inline void kcompactd_stop(int ni { } -static inline void wakeup_kcompactd(pg_data_t *pgdat, int order, int classzone_idx) +static inline void wakeup_kcompactd(pg_data_t *pgdat, + int order, int highest_zoneidx) { } --- a/include/linux/mmzone.h~mm-page_alloc-integrate-classzone_idx-and-high_zoneidx +++ a/include/linux/mmzone.h @@ -699,13 +699,13 @@ typedef struct pglist_data { struct task_struct *kswapd; /* Protected by mem_hotplug_begin/end() */ int kswapd_order; - enum zone_type kswapd_classzone_idx; + enum zone_type kswapd_highest_zoneidx; int kswapd_failures; /* Number of 'reclaimed == 0' runs */ #ifdef CONFIG_COMPACTION int kcompactd_max_order; - enum zone_type kcompactd_classzone_idx; + enum zone_type kcompactd_highest_zoneidx; wait_queue_head_t kcompactd_wait; struct task_struct *kcompactd; #endif @@ -783,15 +783,15 @@ static inline bool pgdat_is_empty(pg_dat void build_all_zonelists(pg_data_t *pgdat); void wakeup_kswapd(struct zone *zone, gfp_t gfp_mask, int order, - enum zone_type classzone_idx); + enum zone_type highest_zoneidx); bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark, - int classzone_idx, unsigned int alloc_flags, + int highest_zoneidx, unsigned int alloc_flags, long free_pages); bool zone_watermark_ok(struct zone *z, unsigned int order, - unsigned long mark, int classzone_idx, + unsigned long mark, int highest_zoneidx, unsigned int alloc_flags); bool zone_watermark_ok_safe(struct zone *z, unsigned int order, - unsigned long mark, int classzone_idx); + unsigned long mark, int highest_zoneidx); enum memmap_context { MEMMAP_EARLY, MEMMAP_HOTPLUG, --- a/include/trace/events/compaction.h~mm-page_alloc-integrate-classzone_idx-and-high_zoneidx +++ a/include/trace/events/compaction.h @@ -314,40 +314,44 @@ 
TRACE_EVENT(mm_compaction_kcompactd_slee DECLARE_EVENT_CLASS(kcompactd_wake_template, - TP_PROTO(int nid, int order, enum zone_type classzone_idx), + TP_PROTO(int nid, int order, enum zone_type highest_zoneidx), - TP_ARGS(nid, order, classzone_idx), + TP_ARGS(nid, order, highest_zoneidx), TP_STRUCT__entry( __field(int, nid) __field(int, order) - __field(enum zone_type, classzone_idx) + __field(enum zone_type, highest_zoneidx) ), TP_fast_assign( __entry->nid = nid; __entry->order = order; - __entry->classzone_idx = classzone_idx; + __entry->highest_zoneidx = highest_zoneidx; ), + /* + * classzone_idx is previous name of the highest_zoneidx. + * Reason not to change it is the ABI requirement of the tracepoint. + */ TP_printk("nid=%d order=%d classzone_idx=%-8s", __entry->nid, __entry->order, - __print_symbolic(__entry->classzone_idx, ZONE_TYPE)) + __print_symbolic(__entry->highest_zoneidx, ZONE_TYPE)) ); DEFINE_EVENT(kcompactd_wake_template, mm_compaction_wakeup_kcompactd, - TP_PROTO(int nid, int order, enum zone_type classzone_idx), + TP_PROTO(int nid, int order, enum zone_type highest_zoneidx), - TP_ARGS(nid, order, classzone_idx) + TP_ARGS(nid, order, highest_zoneidx) ); DEFINE_EVENT(kcompactd_wake_template, mm_compaction_kcompactd_wake, - TP_PROTO(int nid, int order, enum zone_type classzone_idx), + TP_PROTO(int nid, int order, enum zone_type highest_zoneidx), - TP_ARGS(nid, order, classzone_idx) + TP_ARGS(nid, order, highest_zoneidx) ); #endif --- a/include/trace/events/vmscan.h~mm-page_alloc-integrate-classzone_idx-and-high_zoneidx +++ a/include/trace/events/vmscan.h @@ -265,7 +265,7 @@ TRACE_EVENT(mm_shrink_slab_end, ); TRACE_EVENT(mm_vmscan_lru_isolate, - TP_PROTO(int classzone_idx, + TP_PROTO(int highest_zoneidx, int order, unsigned long nr_requested, unsigned long nr_scanned, @@ -274,10 +274,10 @@ TRACE_EVENT(mm_vmscan_lru_isolate, isolate_mode_t isolate_mode, int lru), - TP_ARGS(classzone_idx, order, nr_requested, nr_scanned, nr_skipped, nr_taken, 
isolate_mode, lru), + TP_ARGS(highest_zoneidx, order, nr_requested, nr_scanned, nr_skipped, nr_taken, isolate_mode, lru), TP_STRUCT__entry( - __field(int, classzone_idx) + __field(int, highest_zoneidx) __field(int, order) __field(unsigned long, nr_requested) __field(unsigned long, nr_scanned) @@ -288,7 +288,7 @@ TRACE_EVENT(mm_vmscan_lru_isolate, ), TP_fast_assign( - __entry->classzone_idx = classzone_idx; + __entry->highest_zoneidx = highest_zoneidx; __entry->order = order; __entry->nr_requested = nr_requested; __entry->nr_scanned = nr_scanned; @@ -298,9 +298,13 @@ TRACE_EVENT(mm_vmscan_lru_isolate, __entry->lru = lru; ), + /* + * classzone is previous name of the highest_zoneidx. + * Reason not to change it is the ABI requirement of the tracepoint. + */ TP_printk("isolate_mode=%d classzone=%d order=%d nr_requested=%lu nr_scanned=%lu nr_skipped=%lu nr_taken=%lu lru=%s", __entry->isolate_mode, - __entry->classzone_idx, + __entry->highest_zoneidx, __entry->order, __entry->nr_requested, __entry->nr_scanned, --- a/mm/compaction.c~mm-page_alloc-integrate-classzone_idx-and-high_zoneidx +++ a/mm/compaction.c @@ -1968,7 +1968,7 @@ static enum compact_result compact_finis */ static enum compact_result __compaction_suitable(struct zone *zone, int order, unsigned int alloc_flags, - int classzone_idx, + int highest_zoneidx, unsigned long wmark_target) { unsigned long watermark; @@ -1981,7 +1981,7 @@ static enum compact_result __compaction_ * If watermarks for high-order allocation are already met, there * should be no need for compaction at all. */ - if (zone_watermark_ok(zone, order, watermark, classzone_idx, + if (zone_watermark_ok(zone, order, watermark, highest_zoneidx, alloc_flags)) return COMPACT_SUCCESS; @@ -1991,9 +1991,9 @@ static enum compact_result __compaction_ * watermark and alloc_flags have to match, or be more pessimistic than * the check in __isolate_free_page(). 
We don't use the direct * compactor's alloc_flags, as they are not relevant for freepage - * isolation. We however do use the direct compactor's classzone_idx to - * skip over zones where lowmem reserves would prevent allocation even - * if compaction succeeds. + * isolation. We however do use the direct compactor's highest_zoneidx + * to skip over zones where lowmem reserves would prevent allocation + * even if compaction succeeds. * For costly orders, we require low watermark instead of min for * compaction to proceed to increase its chances. * ALLOC_CMA is used, as pages in CMA pageblocks are considered @@ -2002,7 +2002,7 @@ static enum compact_result __compaction_ watermark = (order > PAGE_ALLOC_COSTLY_ORDER) ? low_wmark_pages(zone) : min_wmark_pages(zone); watermark += compact_gap(order); - if (!__zone_watermark_ok(zone, 0, watermark, classzone_idx, + if (!__zone_watermark_ok(zone, 0, watermark, highest_zoneidx, ALLOC_CMA, wmark_target)) return COMPACT_SKIPPED; @@ -2011,12 +2011,12 @@ static enum compact_result __compaction_ enum compact_result compaction_suitable(struct zone *zone, int order, unsigned int alloc_flags, - int classzone_idx) + int highest_zoneidx) { enum compact_result ret; int fragindex; - ret = __compaction_suitable(zone, order, alloc_flags, classzone_idx, + ret = __compaction_suitable(zone, order, alloc_flags, highest_zoneidx, zone_page_state(zone, NR_FREE_PAGES)); /* * fragmentation index determines if allocation failures are due to @@ -2057,8 +2057,8 @@ bool compaction_zonelist_suitable(struct * Make sure at least one zone would pass __compaction_suitable if we continue * retrying the reclaim. 
*/ - for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, - ac->nodemask) { + for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, + ac->highest_zoneidx, ac->nodemask) { unsigned long available; enum compact_result compact_result; @@ -2071,7 +2071,7 @@ bool compaction_zonelist_suitable(struct available = zone_reclaimable_pages(zone) / order; available += zone_page_state_snapshot(zone, NR_FREE_PAGES); compact_result = __compaction_suitable(zone, order, alloc_flags, - ac_classzone_idx(ac), available); + ac->highest_zoneidx, available); if (compact_result != COMPACT_SKIPPED) return true; } @@ -2102,7 +2102,7 @@ compact_zone(struct compact_control *cc, cc->migratetype = gfpflags_to_migratetype(cc->gfp_mask); ret = compaction_suitable(cc->zone, cc->order, cc->alloc_flags, - cc->classzone_idx); + cc->highest_zoneidx); /* Compaction is likely to fail */ if (ret == COMPACT_SUCCESS || ret == COMPACT_SKIPPED) return ret; @@ -2293,7 +2293,7 @@ out: static enum compact_result compact_zone_order(struct zone *zone, int order, gfp_t gfp_mask, enum compact_priority prio, - unsigned int alloc_flags, int classzone_idx, + unsigned int alloc_flags, int highest_zoneidx, struct page **capture) { enum compact_result ret; @@ -2305,7 +2305,7 @@ static enum compact_result compact_zone_ .mode = (prio == COMPACT_PRIO_ASYNC) ? 
MIGRATE_ASYNC : MIGRATE_SYNC_LIGHT, .alloc_flags = alloc_flags, - .classzone_idx = classzone_idx, + .highest_zoneidx = highest_zoneidx, .direct_compaction = true, .whole_zone = (prio == MIN_COMPACT_PRIORITY), .ignore_skip_hint = (prio == MIN_COMPACT_PRIORITY), @@ -2361,8 +2361,8 @@ enum compact_result try_to_compact_pages trace_mm_compaction_try_to_compact_pages(order, gfp_mask, prio); /* Compact each zone in the list */ - for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, - ac->nodemask) { + for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, + ac->highest_zoneidx, ac->nodemask) { enum compact_result status; if (prio > MIN_COMPACT_PRIORITY @@ -2372,7 +2372,7 @@ enum compact_result try_to_compact_pages } status = compact_zone_order(zone, order, gfp_mask, prio, - alloc_flags, ac_classzone_idx(ac), capture); + alloc_flags, ac->highest_zoneidx, capture); rc = max(status, rc); /* The allocation should succeed, stop compacting */ @@ -2507,16 +2507,16 @@ static bool kcompactd_node_suitable(pg_d { int zoneid; struct zone *zone; - enum zone_type classzone_idx = pgdat->kcompactd_classzone_idx; + enum zone_type highest_zoneidx = pgdat->kcompactd_highest_zoneidx; - for (zoneid = 0; zoneid <= classzone_idx; zoneid++) { + for (zoneid = 0; zoneid <= highest_zoneidx; zoneid++) { zone = &pgdat->node_zones[zoneid]; if (!populated_zone(zone)) continue; if (compaction_suitable(zone, pgdat->kcompactd_max_order, 0, - classzone_idx) == COMPACT_CONTINUE) + highest_zoneidx) == COMPACT_CONTINUE) return true; } @@ -2534,16 +2534,16 @@ static void kcompactd_do_work(pg_data_t struct compact_control cc = { .order = pgdat->kcompactd_max_order, .search_order = pgdat->kcompactd_max_order, - .classzone_idx = pgdat->kcompactd_classzone_idx, + .highest_zoneidx = pgdat->kcompactd_highest_zoneidx, .mode = MIGRATE_SYNC_LIGHT, .ignore_skip_hint = false, .gfp_mask = GFP_KERNEL, }; trace_mm_compaction_kcompactd_wake(pgdat->node_id, cc.order, - cc.classzone_idx); + 
cc.highest_zoneidx); count_compact_event(KCOMPACTD_WAKE); - for (zoneid = 0; zoneid <= cc.classzone_idx; zoneid++) { + for (zoneid = 0; zoneid <= cc.highest_zoneidx; zoneid++) { int status; zone = &pgdat->node_zones[zoneid]; @@ -2592,16 +2592,16 @@ static void kcompactd_do_work(pg_data_t /* * Regardless of success, we are done until woken up next. But remember - * the requested order/classzone_idx in case it was higher/tighter than - * our current ones + * the requested order/highest_zoneidx in case it was higher/tighter + * than our current ones */ if (pgdat->kcompactd_max_order <= cc.order) pgdat->kcompactd_max_order = 0; - if (pgdat->kcompactd_classzone_idx >= cc.classzone_idx) - pgdat->kcompactd_classzone_idx = pgdat->nr_zones - 1; + if (pgdat->kcompactd_highest_zoneidx >= cc.highest_zoneidx) + pgdat->kcompactd_highest_zoneidx = pgdat->nr_zones - 1; } -void wakeup_kcompactd(pg_data_t *pgdat, int order, int classzone_idx) +void wakeup_kcompactd(pg_data_t *pgdat, int order, int highest_zoneidx) { if (!order) return; @@ -2609,8 +2609,8 @@ void wakeup_kcompactd(pg_data_t *pgdat, if (pgdat->kcompactd_max_order < order) pgdat->kcompactd_max_order = order; - if (pgdat->kcompactd_classzone_idx > classzone_idx) - pgdat->kcompactd_classzone_idx = classzone_idx; + if (pgdat->kcompactd_highest_zoneidx > highest_zoneidx) + pgdat->kcompactd_highest_zoneidx = highest_zoneidx; /* * Pairs with implicit barrier in wait_event_freezable() @@ -2623,7 +2623,7 @@ void wakeup_kcompactd(pg_data_t *pgdat, return; trace_mm_compaction_wakeup_kcompactd(pgdat->node_id, order, - classzone_idx); + highest_zoneidx); wake_up_interruptible(&pgdat->kcompactd_wait); } @@ -2644,7 +2644,7 @@ static int kcompactd(void *p) set_freezable(); pgdat->kcompactd_max_order = 0; - pgdat->kcompactd_classzone_idx = pgdat->nr_zones - 1; + pgdat->kcompactd_highest_zoneidx = pgdat->nr_zones - 1; while (!kthread_should_stop()) { unsigned long pflags; --- 
a/mm/internal.h~mm-page_alloc-integrate-classzone_idx-and-high_zoneidx +++ a/mm/internal.h @@ -127,10 +127,10 @@ extern pmd_t *mm_find_pmd(struct mm_stru * between functions involved in allocations, including the alloc_pages* * family of functions. * - * nodemask, migratetype and high_zoneidx are initialized only once in + * nodemask, migratetype and highest_zoneidx are initialized only once in * __alloc_pages_nodemask() and then never change. * - * zonelist, preferred_zone and classzone_idx are set first in + * zonelist, preferred_zone and highest_zoneidx are set first in * __alloc_pages_nodemask() for the fast path, and might be later changed * in __alloc_pages_slowpath(). All other functions pass the whole strucure * by a const pointer. @@ -140,12 +140,21 @@ struct alloc_context { nodemask_t *nodemask; struct zoneref *preferred_zoneref; int migratetype; - enum zone_type high_zoneidx; + + /* + * highest_zoneidx represents highest usable zone index of + * the allocation request. Due to the nature of the zone, + * memory on lower zone than the highest_zoneidx will be + * protected by lowmem_reserve[highest_zoneidx]. + * + * highest_zoneidx is also used by reclaim/compaction to limit + * the target zone since higher zone than this index cannot be + * usable for this allocation request. + */ + enum zone_type highest_zoneidx; bool spread_dirty_pages; }; -#define ac_classzone_idx(ac) (ac->high_zoneidx) - /* * Locate the struct page for both the matching buddy in our * pair (buddy1) and the combined O(n+1) page they form (page). 
@@ -224,7 +233,7 @@ struct compact_control { int order; /* order a direct compactor needs */ int migratetype; /* migratetype of direct compactor */ const unsigned int alloc_flags; /* alloc flags of a direct compactor */ - const int classzone_idx; /* zone index of a direct compactor */ + const int highest_zoneidx; /* zone index of a direct compactor */ enum migrate_mode mode; /* Async or sync migration mode */ bool ignore_skip_hint; /* Scan blocks even if marked skip */ bool no_set_skip_hint; /* Don't mark blocks for skipping */ --- a/mm/memory_hotplug.c~mm-page_alloc-integrate-classzone_idx-and-high_zoneidx +++ a/mm/memory_hotplug.c @@ -879,13 +879,13 @@ static pg_data_t __ref *hotadd_new_pgdat } else { int cpu; /* - * Reset the nr_zones, order and classzone_idx before reuse. - * Note that kswapd will init kswapd_classzone_idx properly + * Reset the nr_zones, order and highest_zoneidx before reuse. + * Note that kswapd will init kswapd_highest_zoneidx properly * when it starts in the near future. 
*/ pgdat->nr_zones = 0; pgdat->kswapd_order = 0; - pgdat->kswapd_classzone_idx = 0; + pgdat->kswapd_highest_zoneidx = 0; for_each_online_cpu(cpu) { struct per_cpu_nodestat *p; --- a/mm/oom_kill.c~mm-page_alloc-integrate-classzone_idx-and-high_zoneidx +++ a/mm/oom_kill.c @@ -254,7 +254,7 @@ static enum oom_constraint constrained_a { struct zone *zone; struct zoneref *z; - enum zone_type high_zoneidx = gfp_zone(oc->gfp_mask); + enum zone_type highest_zoneidx = gfp_zone(oc->gfp_mask); bool cpuset_limited = false; int nid; @@ -294,7 +294,7 @@ static enum oom_constraint constrained_a /* Check this allocation failure is caused by cpuset's wall function */ for_each_zone_zonelist_nodemask(zone, z, oc->zonelist, - high_zoneidx, oc->nodemask) + highest_zoneidx, oc->nodemask) if (!cpuset_zone_allowed(zone, oc->gfp_mask)) cpuset_limited = true; --- a/mm/page_alloc.c~mm-page_alloc-integrate-classzone_idx-and-high_zoneidx +++ a/mm/page_alloc.c @@ -2593,7 +2593,7 @@ static bool unreserve_highatomic_pageblo int order; bool ret; - for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx, + for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->highest_zoneidx, ac->nodemask) { /* * Preserve at least one pageblock unless memory pressure @@ -3462,7 +3462,7 @@ ALLOW_ERROR_INJECTION(should_fail_alloc_ * to check in the allocation paths if no pages are free. */ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark, - int classzone_idx, unsigned int alloc_flags, + int highest_zoneidx, unsigned int alloc_flags, long free_pages) { long min = mark; @@ -3507,7 +3507,7 @@ bool __zone_watermark_ok(struct zone *z, * are not met, then a high-order request also cannot go ahead * even if a suitable page happened to be free. 
*/ - if (free_pages <= min + z->lowmem_reserve[classzone_idx]) + if (free_pages <= min + z->lowmem_reserve[highest_zoneidx]) return false; /* If this is an order-0 request then the watermark is fine */ @@ -3540,14 +3540,15 @@ bool __zone_watermark_ok(struct zone *z, } bool zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark, - int classzone_idx, unsigned int alloc_flags) + int highest_zoneidx, unsigned int alloc_flags) { - return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags, + return __zone_watermark_ok(z, order, mark, highest_zoneidx, alloc_flags, zone_page_state(z, NR_FREE_PAGES)); } static inline bool zone_watermark_fast(struct zone *z, unsigned int order, - unsigned long mark, int classzone_idx, unsigned int alloc_flags) + unsigned long mark, int highest_zoneidx, + unsigned int alloc_flags) { long free_pages = zone_page_state(z, NR_FREE_PAGES); long cma_pages = 0; @@ -3565,22 +3566,23 @@ static inline bool zone_watermark_fast(s * the caller is !atomic then it'll uselessly search the free * list. That corner case is then slower but it is harmless. 
*/ - if (!order && (free_pages - cma_pages) > mark + z->lowmem_reserve[classzone_idx]) + if (!order && (free_pages - cma_pages) > + mark + z->lowmem_reserve[highest_zoneidx]) return true; - return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags, + return __zone_watermark_ok(z, order, mark, highest_zoneidx, alloc_flags, free_pages); } bool zone_watermark_ok_safe(struct zone *z, unsigned int order, - unsigned long mark, int classzone_idx) + unsigned long mark, int highest_zoneidx) { long free_pages = zone_page_state(z, NR_FREE_PAGES); if (z->percpu_drift_mark && free_pages < z->percpu_drift_mark) free_pages = zone_page_state_snapshot(z, NR_FREE_PAGES); - return __zone_watermark_ok(z, order, mark, classzone_idx, 0, + return __zone_watermark_ok(z, order, mark, highest_zoneidx, 0, free_pages); } @@ -3657,8 +3659,8 @@ retry: */ no_fallback = alloc_flags & ALLOC_NOFRAGMENT; z = ac->preferred_zoneref; - for_next_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, - ac->nodemask) { + for_next_zone_zonelist_nodemask(zone, z, ac->zonelist, + ac->highest_zoneidx, ac->nodemask) { struct page *page; unsigned long mark; @@ -3713,7 +3715,7 @@ retry: mark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK); if (!zone_watermark_fast(zone, order, mark, - ac_classzone_idx(ac), alloc_flags)) { + ac->highest_zoneidx, alloc_flags)) { int ret; #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT @@ -3746,7 +3748,7 @@ retry: default: /* did we reclaim enough */ if (zone_watermark_ok(zone, order, mark, - ac_classzone_idx(ac), alloc_flags)) + ac->highest_zoneidx, alloc_flags)) goto try_this_zone; continue; @@ -3905,7 +3907,7 @@ __alloc_pages_may_oom(gfp_t gfp_mask, un if (gfp_mask & __GFP_RETRY_MAYFAIL) goto out; /* The OOM killer does not needlessly kill tasks for lowmem */ - if (ac->high_zoneidx < ZONE_NORMAL) + if (ac->highest_zoneidx < ZONE_NORMAL) goto out; if (pm_suspended_storage()) goto out; @@ -4108,10 +4110,10 @@ should_compact_retry(struct alloc_contex * Let's give 
them a good hope and keep retrying while the order-0 * watermarks are OK. */ - for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, - ac->nodemask) { + for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, + ac->highest_zoneidx, ac->nodemask) { if (zone_watermark_ok(zone, 0, min_wmark_pages(zone), - ac_classzone_idx(ac), alloc_flags)) + ac->highest_zoneidx, alloc_flags)) return true; } return false; @@ -4235,12 +4237,12 @@ static void wake_all_kswapds(unsigned in struct zoneref *z; struct zone *zone; pg_data_t *last_pgdat = NULL; - enum zone_type high_zoneidx = ac->high_zoneidx; + enum zone_type highest_zoneidx = ac->highest_zoneidx; - for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, high_zoneidx, + for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, highest_zoneidx, ac->nodemask) { if (last_pgdat != zone->zone_pgdat) - wakeup_kswapd(zone, gfp_mask, order, high_zoneidx); + wakeup_kswapd(zone, gfp_mask, order, highest_zoneidx); last_pgdat = zone->zone_pgdat; } } @@ -4375,8 +4377,8 @@ should_reclaim_retry(gfp_t gfp_mask, uns * request even if all reclaimable pages are considered then we are * screwed and have to go OOM. */ - for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, - ac->nodemask) { + for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, + ac->highest_zoneidx, ac->nodemask) { unsigned long available; unsigned long reclaimable; unsigned long min_wmark = min_wmark_pages(zone); @@ -4390,7 +4392,7 @@ should_reclaim_retry(gfp_t gfp_mask, uns * reclaimable pages? */ wmark = __zone_watermark_ok(zone, order, min_wmark, - ac_classzone_idx(ac), alloc_flags, available); + ac->highest_zoneidx, alloc_flags, available); trace_reclaim_retry_zone(z, order, reclaimable, available, min_wmark, *no_progress_loops, wmark); if (wmark) { @@ -4509,7 +4511,7 @@ retry_cpuset: * could end up iterating over non-eligible zones endlessly. 
*/ ac->preferred_zoneref = first_zones_zonelist(ac->zonelist, - ac->high_zoneidx, ac->nodemask); + ac->highest_zoneidx, ac->nodemask); if (!ac->preferred_zoneref->zone) goto nopage; @@ -4596,7 +4598,7 @@ retry: if (!(alloc_flags & ALLOC_CPUSET) || reserve_flags) { ac->nodemask = NULL; ac->preferred_zoneref = first_zones_zonelist(ac->zonelist, - ac->high_zoneidx, ac->nodemask); + ac->highest_zoneidx, ac->nodemask); } /* Attempt with potentially adjusted zonelist and alloc_flags */ @@ -4730,7 +4732,7 @@ static inline bool prepare_alloc_pages(g struct alloc_context *ac, gfp_t *alloc_mask, unsigned int *alloc_flags) { - ac->high_zoneidx = gfp_zone(gfp_mask); + ac->highest_zoneidx = gfp_zone(gfp_mask); ac->zonelist = node_zonelist(preferred_nid, gfp_mask); ac->nodemask = nodemask; ac->migratetype = gfpflags_to_migratetype(gfp_mask); @@ -4769,7 +4771,7 @@ static inline void finalise_ac(gfp_t gfp * may get reset for allocations that ignore memory policies. */ ac->preferred_zoneref = first_zones_zonelist(ac->zonelist, - ac->high_zoneidx, ac->nodemask); + ac->highest_zoneidx, ac->nodemask); } /* @@ -6867,7 +6869,7 @@ static void __init free_area_init_node(i unsigned long end_pfn = 0; /* pg_data_t should be reset to zero when it's allocated */ - WARN_ON(pgdat->nr_zones || pgdat->kswapd_classzone_idx); + WARN_ON(pgdat->nr_zones || pgdat->kswapd_highest_zoneidx); get_pfn_range_for_nid(nid, &start_pfn, &end_pfn); --- a/mm/slab.c~mm-page_alloc-integrate-classzone_idx-and-high_zoneidx +++ a/mm/slab.c @@ -3106,7 +3106,7 @@ static void *fallback_alloc(struct kmem_ struct zonelist *zonelist; struct zoneref *z; struct zone *zone; - enum zone_type high_zoneidx = gfp_zone(flags); + enum zone_type highest_zoneidx = gfp_zone(flags); void *obj = NULL; struct page *page; int nid; @@ -3124,7 +3124,7 @@ retry: * Look through allowed nodes for objects available * from existing per node queues. 
*/ - for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) { + for_each_zone_zonelist(zone, z, zonelist, highest_zoneidx) { nid = zone_to_nid(zone); if (cpuset_zone_allowed(zone, flags) && --- a/mm/slub.c~mm-page_alloc-integrate-classzone_idx-and-high_zoneidx +++ a/mm/slub.c @@ -1938,7 +1938,7 @@ static void *get_any_partial(struct kmem struct zonelist *zonelist; struct zoneref *z; struct zone *zone; - enum zone_type high_zoneidx = gfp_zone(flags); + enum zone_type highest_zoneidx = gfp_zone(flags); void *object; unsigned int cpuset_mems_cookie; @@ -1967,7 +1967,7 @@ static void *get_any_partial(struct kmem do { cpuset_mems_cookie = read_mems_allowed_begin(); zonelist = node_zonelist(mempolicy_slab_node(), flags); - for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) { + for_each_zone_zonelist(zone, z, zonelist, highest_zoneidx) { struct kmem_cache_node *n; n = get_node(s, zone_to_nid(zone)); --- a/mm/vmscan.c~mm-page_alloc-integrate-classzone_idx-and-high_zoneidx +++ a/mm/vmscan.c @@ -3131,8 +3131,8 @@ static bool allow_direct_reclaim(pg_data /* kswapd must be awake if processes are being throttled */ if (!wmark_ok && waitqueue_active(&pgdat->kswapd_wait)) { - if (READ_ONCE(pgdat->kswapd_classzone_idx) > ZONE_NORMAL) - WRITE_ONCE(pgdat->kswapd_classzone_idx, ZONE_NORMAL); + if (READ_ONCE(pgdat->kswapd_highest_zoneidx) > ZONE_NORMAL) + WRITE_ONCE(pgdat->kswapd_highest_zoneidx, ZONE_NORMAL); wake_up_interruptible(&pgdat->kswapd_wait); } @@ -3385,7 +3385,7 @@ static void age_active_anon(struct pglis } while (memcg); } -static bool pgdat_watermark_boosted(pg_data_t *pgdat, int classzone_idx) +static bool pgdat_watermark_boosted(pg_data_t *pgdat, int highest_zoneidx) { int i; struct zone *zone; @@ -3397,7 +3397,7 @@ static bool pgdat_watermark_boosted(pg_d * start prematurely when there is no boosting and a lower * zone is balanced. 
*/ - for (i = classzone_idx; i >= 0; i--) { + for (i = highest_zoneidx; i >= 0; i--) { zone = pgdat->node_zones + i; if (!managed_zone(zone)) continue; @@ -3411,9 +3411,9 @@ static bool pgdat_watermark_boosted(pg_d /* * Returns true if there is an eligible zone balanced for the request order - * and classzone_idx + * and highest_zoneidx */ -static bool pgdat_balanced(pg_data_t *pgdat, int order, int classzone_idx) +static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx) { int i; unsigned long mark = -1; @@ -3423,19 +3423,19 @@ static bool pgdat_balanced(pg_data_t *pg * Check watermarks bottom-up as lower zones are more likely to * meet watermarks. */ - for (i = 0; i <= classzone_idx; i++) { + for (i = 0; i <= highest_zoneidx; i++) { zone = pgdat->node_zones + i; if (!managed_zone(zone)) continue; mark = high_wmark_pages(zone); - if (zone_watermark_ok_safe(zone, order, mark, classzone_idx)) + if (zone_watermark_ok_safe(zone, order, mark, highest_zoneidx)) return true; } /* - * If a node has no populated zone within classzone_idx, it does not + * If a node has no populated zone within highest_zoneidx, it does not * need balancing by definition. This can happen if a zone-restricted * allocation tries to wake a remote kswapd. 
*/ @@ -3461,7 +3461,8 @@ static void clear_pgdat_congested(pg_dat * * Returns true if kswapd is ready to sleep */ -static bool prepare_kswapd_sleep(pg_data_t *pgdat, int order, int classzone_idx) +static bool prepare_kswapd_sleep(pg_data_t *pgdat, int order, + int highest_zoneidx) { /* * The throttled processes are normally woken up in balance_pgdat() as @@ -3483,7 +3484,7 @@ static bool prepare_kswapd_sleep(pg_data if (pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES) return true; - if (pgdat_balanced(pgdat, order, classzone_idx)) { + if (pgdat_balanced(pgdat, order, highest_zoneidx)) { clear_pgdat_congested(pgdat); return true; } @@ -3547,7 +3548,7 @@ static bool kswapd_shrink_node(pg_data_t * or lower is eligible for reclaim until at least one usable zone is * balanced. */ -static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx) +static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx) { int i; unsigned long nr_soft_reclaimed; @@ -3575,7 +3576,7 @@ static int balance_pgdat(pg_data_t *pgda * stall or direct reclaim until kswapd is finished. */ nr_boost_reclaim = 0; - for (i = 0; i <= classzone_idx; i++) { + for (i = 0; i <= highest_zoneidx; i++) { zone = pgdat->node_zones + i; if (!managed_zone(zone)) continue; @@ -3593,7 +3594,7 @@ restart: bool balanced; bool ret; - sc.reclaim_idx = classzone_idx; + sc.reclaim_idx = highest_zoneidx; /* * If the number of buffer_heads exceeds the maximum allowed @@ -3623,7 +3624,7 @@ restart: * on the grounds that the normal reclaim should be enough to * re-evaluate if boosting is required when kswapd next wakes. 
*/ - balanced = pgdat_balanced(pgdat, sc.order, classzone_idx); + balanced = pgdat_balanced(pgdat, sc.order, highest_zoneidx); if (!balanced && nr_boost_reclaim) { nr_boost_reclaim = 0; goto restart; @@ -3723,7 +3724,7 @@ out: if (boosted) { unsigned long flags; - for (i = 0; i <= classzone_idx; i++) { + for (i = 0; i <= highest_zoneidx; i++) { if (!zone_boosts[i]) continue; @@ -3738,7 +3739,7 @@ out: * As there is now likely space, wakeup kcompact to defragment * pageblocks. */ - wakeup_kcompactd(pgdat, pageblock_order, classzone_idx); + wakeup_kcompactd(pgdat, pageblock_order, highest_zoneidx); } snapshot_refaults(NULL, pgdat); @@ -3756,22 +3757,22 @@ out: } /* - * The pgdat->kswapd_classzone_idx is used to pass the highest zone index to be - * reclaimed by kswapd from the waker. If the value is MAX_NR_ZONES which is not - * a valid index then either kswapd runs for first time or kswapd couldn't sleep - * after previous reclaim attempt (node is still unbalanced). In that case - * return the zone index of the previous kswapd reclaim cycle. + * The pgdat->kswapd_highest_zoneidx is used to pass the highest zone index to + * be reclaimed by kswapd from the waker. If the value is MAX_NR_ZONES which is + * not a valid index then either kswapd runs for first time or kswapd couldn't + * sleep after previous reclaim attempt (node is still unbalanced). In that + * case return the zone index of the previous kswapd reclaim cycle. */ -static enum zone_type kswapd_classzone_idx(pg_data_t *pgdat, - enum zone_type prev_classzone_idx) +static enum zone_type kswapd_highest_zoneidx(pg_data_t *pgdat, + enum zone_type prev_highest_zoneidx) { - enum zone_type curr_idx = READ_ONCE(pgdat->kswapd_classzone_idx); + enum zone_type curr_idx = READ_ONCE(pgdat->kswapd_highest_zoneidx); - return curr_idx == MAX_NR_ZONES ? prev_classzone_idx : curr_idx; + return curr_idx == MAX_NR_ZONES ? 
prev_highest_zoneidx : curr_idx; } static void kswapd_try_to_sleep(pg_data_t *pgdat, int alloc_order, int reclaim_order, - unsigned int classzone_idx) + unsigned int highest_zoneidx) { long remaining = 0; DEFINE_WAIT(wait); @@ -3788,7 +3789,7 @@ static void kswapd_try_to_sleep(pg_data_ * eligible zone balanced that it's also unlikely that compaction will * succeed. */ - if (prepare_kswapd_sleep(pgdat, reclaim_order, classzone_idx)) { + if (prepare_kswapd_sleep(pgdat, reclaim_order, highest_zoneidx)) { /* * Compaction records what page blocks it recently failed to * isolate pages from and skips them in the future scanning. @@ -3801,18 +3802,19 @@ static void kswapd_try_to_sleep(pg_data_ * We have freed the memory, now we should compact it to make * allocation of the requested order possible. */ - wakeup_kcompactd(pgdat, alloc_order, classzone_idx); + wakeup_kcompactd(pgdat, alloc_order, highest_zoneidx); remaining = schedule_timeout(HZ/10); /* - * If woken prematurely then reset kswapd_classzone_idx and + * If woken prematurely then reset kswapd_highest_zoneidx and * order. The values will either be from a wakeup request or * the previous request that slept prematurely. */ if (remaining) { - WRITE_ONCE(pgdat->kswapd_classzone_idx, - kswapd_classzone_idx(pgdat, classzone_idx)); + WRITE_ONCE(pgdat->kswapd_highest_zoneidx, + kswapd_highest_zoneidx(pgdat, + highest_zoneidx)); if (READ_ONCE(pgdat->kswapd_order) < reclaim_order) WRITE_ONCE(pgdat->kswapd_order, reclaim_order); @@ -3827,7 +3829,7 @@ static void kswapd_try_to_sleep(pg_data_ * go fully to sleep until explicitly woken up. 
*/ if (!remaining && - prepare_kswapd_sleep(pgdat, reclaim_order, classzone_idx)) { + prepare_kswapd_sleep(pgdat, reclaim_order, highest_zoneidx)) { trace_mm_vmscan_kswapd_sleep(pgdat->node_id); /* @@ -3869,7 +3871,7 @@ static void kswapd_try_to_sleep(pg_data_ static int kswapd(void *p) { unsigned int alloc_order, reclaim_order; - unsigned int classzone_idx = MAX_NR_ZONES - 1; + unsigned int highest_zoneidx = MAX_NR_ZONES - 1; pg_data_t *pgdat = (pg_data_t*)p; struct task_struct *tsk = current; const struct cpumask *cpumask = cpumask_of_node(pgdat->node_id); @@ -3893,22 +3895,24 @@ static int kswapd(void *p) set_freezable(); WRITE_ONCE(pgdat->kswapd_order, 0); - WRITE_ONCE(pgdat->kswapd_classzone_idx, MAX_NR_ZONES); + WRITE_ONCE(pgdat->kswapd_highest_zoneidx, MAX_NR_ZONES); for ( ; ; ) { bool ret; alloc_order = reclaim_order = READ_ONCE(pgdat->kswapd_order); - classzone_idx = kswapd_classzone_idx(pgdat, classzone_idx); + highest_zoneidx = kswapd_highest_zoneidx(pgdat, + highest_zoneidx); kswapd_try_sleep: kswapd_try_to_sleep(pgdat, alloc_order, reclaim_order, - classzone_idx); + highest_zoneidx); - /* Read the new order and classzone_idx */ + /* Read the new order and highest_zoneidx */ alloc_order = reclaim_order = READ_ONCE(pgdat->kswapd_order); - classzone_idx = kswapd_classzone_idx(pgdat, classzone_idx); + highest_zoneidx = kswapd_highest_zoneidx(pgdat, + highest_zoneidx); WRITE_ONCE(pgdat->kswapd_order, 0); - WRITE_ONCE(pgdat->kswapd_classzone_idx, MAX_NR_ZONES); + WRITE_ONCE(pgdat->kswapd_highest_zoneidx, MAX_NR_ZONES); ret = try_to_freeze(); if (kthread_should_stop()) @@ -3929,9 +3933,10 @@ kswapd_try_sleep: * but kcompactd is woken to compact for the original * request (alloc_order). 
*/ - trace_mm_vmscan_kswapd_wake(pgdat->node_id, classzone_idx, + trace_mm_vmscan_kswapd_wake(pgdat->node_id, highest_zoneidx, alloc_order); - reclaim_order = balance_pgdat(pgdat, alloc_order, classzone_idx); + reclaim_order = balance_pgdat(pgdat, alloc_order, + highest_zoneidx); if (reclaim_order < alloc_order) goto kswapd_try_sleep; } @@ -3949,7 +3954,7 @@ kswapd_try_sleep: * needed. */ void wakeup_kswapd(struct zone *zone, gfp_t gfp_flags, int order, - enum zone_type classzone_idx) + enum zone_type highest_zoneidx) { pg_data_t *pgdat; enum zone_type curr_idx; @@ -3961,10 +3966,10 @@ void wakeup_kswapd(struct zone *zone, gf return; pgdat = zone->zone_pgdat; - curr_idx = READ_ONCE(pgdat->kswapd_classzone_idx); + curr_idx = READ_ONCE(pgdat->kswapd_highest_zoneidx); - if (curr_idx == MAX_NR_ZONES || curr_idx < classzone_idx) - WRITE_ONCE(pgdat->kswapd_classzone_idx, classzone_idx); + if (curr_idx == MAX_NR_ZONES || curr_idx < highest_zoneidx) + WRITE_ONCE(pgdat->kswapd_highest_zoneidx, highest_zoneidx); if (READ_ONCE(pgdat->kswapd_order) < order) WRITE_ONCE(pgdat->kswapd_order, order); @@ -3974,8 +3979,8 @@ void wakeup_kswapd(struct zone *zone, gf /* Hopeless node, leave it to direct reclaim if possible */ if (pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES || - (pgdat_balanced(pgdat, order, classzone_idx) && - !pgdat_watermark_boosted(pgdat, classzone_idx))) { + (pgdat_balanced(pgdat, order, highest_zoneidx) && + !pgdat_watermark_boosted(pgdat, highest_zoneidx))) { /* * There may be plenty of free memory available, but it's too * fragmented for high-order allocations. Wake up kcompactd @@ -3984,11 +3989,11 @@ void wakeup_kswapd(struct zone *zone, gf * ratelimit its work. 
*/ if (!(gfp_flags & __GFP_DIRECT_RECLAIM)) - wakeup_kcompactd(pgdat, order, classzone_idx); + wakeup_kcompactd(pgdat, order, highest_zoneidx); return; } - trace_mm_vmscan_wakeup_kswapd(pgdat->node_id, classzone_idx, order, + trace_mm_vmscan_wakeup_kswapd(pgdat->node_id, highest_zoneidx, order, gfp_flags); wake_up_interruptible(&pgdat->kswapd_wait); } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 044/131] mm/page_alloc.c: use NODE_MASK_NONE in build_zonelists() 2020-06-03 22:55 incoming Andrew Morton ` (42 preceding siblings ...) 2020-06-03 22:59 ` [patch 043/131] mm/page_alloc: integrate classzone_idx and high_zoneidx Andrew Morton @ 2020-06-03 22:59 ` Andrew Morton 2020-06-03 22:59 ` [patch 045/131] mm: rename gfpflags_to_migratetype to gfp_migratetype for same convention Andrew Morton ` (87 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:59 UTC (permalink / raw) To: akpm, david, jhubbard, linux-mm, mm-commits, pankaj.gupta.linux, richard.weiyang, torvalds From: Wei Yang <richard.weiyang@gmail.com> Subject: mm/page_alloc.c: use NODE_MASK_NONE in build_zonelists() Slightly simplify the code by initializing used_mask with NODE_MASK_NONE, instead of later calling nodes_clear(). This saves a line of code. Link: http://lkml.kernel.org/r/20200330220840.21228-1-richard.weiyang@gmail.com Signed-off-by: Wei Yang <richard.weiyang@gmail.com> Reviewed-by: John Hubbard <jhubbard@nvidia.com> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/page_alloc.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) --- a/mm/page_alloc.c~mm-page_allocc-use-node_mask_none-in-build_zonelists +++ a/mm/page_alloc.c @@ -5692,14 +5692,13 @@ static void build_zonelists(pg_data_t *p { static int node_order[MAX_NUMNODES]; int node, load, nr_nodes = 0; - nodemask_t used_mask; + nodemask_t used_mask = NODE_MASK_NONE; int local_node, prev_node; /* NUMA-aware ordering of nodes */ local_node = pgdat->node_id; load = nr_online_nodes; prev_node = local_node; - nodes_clear(used_mask); memset(node_order, 0, sizeof(node_order)); while ((node = find_next_best_node(local_node, &used_mask)) >= 0) { _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 045/131] mm: rename gfpflags_to_migratetype to gfp_migratetype for same convention 2020-06-03 22:55 incoming Andrew Morton ` (43 preceding siblings ...) 2020-06-03 22:59 ` [patch 044/131] mm/page_alloc.c: use NODE_MASK_NONE in build_zonelists() Andrew Morton @ 2020-06-03 22:59 ` Andrew Morton 2020-06-03 22:59 ` [patch 046/131] mm/page_alloc.c: reset numa stats for boot pagesets Andrew Morton ` (86 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:59 UTC (permalink / raw) To: akpm, linux-mm, mm-commits, pankaj.gupta.linux, richard.weiyang, torvalds From: Wei Yang <richard.weiyang@gmail.com> Subject: mm: rename gfpflags_to_migratetype to gfp_migratetype for same convention Pageblock migrate type is encoded in GFP flags, just as zone_type and zonelist. Currently we use gfp_zone() and gfp_zonelist() to extract the related information, so it is proper to use the same naming convention for the migrate type. Link: http://lkml.kernel.org/r/20200329080823.7735-1-richard.weiyang@gmail.com Signed-off-by: Wei Yang <richard.weiyang@gmail.com> Reviewed-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/gfp.h | 2 +- mm/compaction.c | 2 +- mm/page_alloc.c | 4 ++-- mm/page_owner.c | 7 +++---- 4 files changed, 7 insertions(+), 8 deletions(-) --- a/include/linux/gfp.h~mm-rename-gfpflags_to_migratetype-to-gfp_migratetype-for-same-convention +++ a/include/linux/gfp.h @@ -312,7 +312,7 @@ struct vm_area_struct; #define GFP_MOVABLE_MASK (__GFP_RECLAIMABLE|__GFP_MOVABLE) #define GFP_MOVABLE_SHIFT 3 -static inline int gfpflags_to_migratetype(const gfp_t gfp_flags) +static inline int gfp_migratetype(const gfp_t gfp_flags) { VM_WARN_ON((gfp_flags & GFP_MOVABLE_MASK) == GFP_MOVABLE_MASK); BUILD_BUG_ON((1UL << GFP_MOVABLE_SHIFT) != ___GFP_MOVABLE); --- a/mm/compaction.c~mm-rename-gfpflags_to_migratetype-to-gfp_migratetype-for-same-convention +++ a/mm/compaction.c
@@ -2100,7 +2100,7 @@ compact_zone(struct compact_control *cc, INIT_LIST_HEAD(&cc->freepages); INIT_LIST_HEAD(&cc->migratepages); - cc->migratetype = gfpflags_to_migratetype(cc->gfp_mask); + cc->migratetype = gfp_migratetype(cc->gfp_mask); ret = compaction_suitable(cc->zone, cc->order, cc->alloc_flags, cc->highest_zoneidx); /* Compaction is likely to fail */ --- a/mm/page_alloc.c~mm-rename-gfpflags_to_migratetype-to-gfp_migratetype-for-same-convention +++ a/mm/page_alloc.c @@ -4285,7 +4285,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask) alloc_flags |= ALLOC_HARDER; #ifdef CONFIG_CMA - if (gfpflags_to_migratetype(gfp_mask) == MIGRATE_MOVABLE) + if (gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE) alloc_flags |= ALLOC_CMA; #endif return alloc_flags; @@ -4735,7 +4735,7 @@ static inline bool prepare_alloc_pages(g ac->highest_zoneidx = gfp_zone(gfp_mask); ac->zonelist = node_zonelist(preferred_nid, gfp_mask); ac->nodemask = nodemask; - ac->migratetype = gfpflags_to_migratetype(gfp_mask); + ac->migratetype = gfp_migratetype(gfp_mask); if (cpusets_enabled()) { *alloc_mask |= __GFP_HARDWALL; --- a/mm/page_owner.c~mm-rename-gfpflags_to_migratetype-to-gfp_migratetype-for-same-convention +++ a/mm/page_owner.c @@ -312,8 +312,7 @@ void pagetypeinfo_showmixedcount_print(s continue; page_owner = get_page_owner(page_ext); - page_mt = gfpflags_to_migratetype( - page_owner->gfp_mask); + page_mt = gfp_migratetype(page_owner->gfp_mask); if (pageblock_mt != page_mt) { if (is_migrate_cma(pageblock_mt)) count[MIGRATE_MOVABLE]++; @@ -359,7 +358,7 @@ print_page_owner(char __user *buf, size_ /* Print information relevant to grouping pages by mobility */ pageblock_mt = get_pageblock_migratetype(page); - page_mt = gfpflags_to_migratetype(page_owner->gfp_mask); + page_mt = gfp_migratetype(page_owner->gfp_mask); ret += snprintf(kbuf + ret, count - ret, "PFN %lu type %s Block %lu type %s Flags %#lx(%pGp)\n", pfn, @@ -416,7 +415,7 @@ void __dump_page_owner(struct page *page page_owner = 
get_page_owner(page_ext); gfp_mask = page_owner->gfp_mask; - mt = gfpflags_to_migratetype(gfp_mask); + mt = gfp_migratetype(gfp_mask); if (!test_bit(PAGE_EXT_OWNER, &page_ext->flags)) { pr_alert("page_owner info is not present (never set?)\n"); _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 046/131] mm/page_alloc.c: reset numa stats for boot pagesets 2020-06-03 22:55 incoming Andrew Morton ` (44 preceding siblings ...) 2020-06-03 22:59 ` [patch 045/131] mm: rename gfpflags_to_migratetype to gfp_migratetype for same convention Andrew Morton @ 2020-06-03 22:59 ` Andrew Morton 2020-06-03 22:59 ` [patch 047/131] mm, page_alloc: reset the zone->watermark_boost early Andrew Morton ` (85 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:59 UTC (permalink / raw) To: akpm, aneesh.kumar, khlebnikov, kirill, linux-mm, mhocko, mm-commits, sandipan, torvalds, vbabka From: Sandipan Das <sandipan@linux.ibm.com> Subject: mm/page_alloc.c: reset numa stats for boot pagesets Initially, the per-cpu pagesets of each zone are set to the boot pagesets. The real pagesets are allocated later but before that happens, page allocations do occur and the numa stats for the boot pagesets get incremented since they are common to all zones at that point. The real pagesets, however, are allocated for the populated zones only. Unpopulated zones, like those associated with memory-less nodes, continue using the boot pageset and end up skewing the numa stats of the corresponding node. E.g. $ numactl -H available: 2 nodes (0-1) node 0 cpus: 0 1 2 3 node 0 size: 0 MB node 0 free: 0 MB node 1 cpus: 4 5 6 7 node 1 size: 8131 MB node 1 free: 6980 MB node distances: node 0 1 0: 10 40 1: 40 10 $ numastat node0 node1 numa_hit 108 56495 numa_miss 0 0 numa_foreign 0 0 interleave_hit 0 4537 local_node 108 31547 other_node 0 24948 Hence, the boot pageset stats need to be cleared after the real pagesets are allocated. After this point, the stats of the boot pagesets do not change as page allocations requested for a memory-less node will either fail (if __GFP_THISNODE is used) or get fulfilled by a preferred zone of a different node based on the fallback zonelist. 
[sandipan@linux.ibm.com: v3] Link: http://lkml.kernel.org/r/20200511170356.162531-1-sandipan@linux.ibm.com Link: http://lkml.kernel.org/r/9c9c2d1b15e37f6e6bf32f99e3100035e90c4ac9.1588868430.git.sandipan@linux.ibm.com Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru> Cc: Michal Hocko <mhocko@suse.com> Cc: "Kirill A . Shutemov" <kirill@shutemov.name> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/page_alloc.c | 15 +++++++++++++++ 1 file changed, 15 insertions(+) --- a/mm/page_alloc.c~mm-reset-numa-stats-for-boot-pagesets +++ a/mm/page_alloc.c @@ -6250,10 +6250,25 @@ void __init setup_per_cpu_pageset(void) { struct pglist_data *pgdat; struct zone *zone; + int __maybe_unused cpu; for_each_populated_zone(zone) setup_zone_pageset(zone); +#ifdef CONFIG_NUMA + /* + * Unpopulated zones continue using the boot pagesets. + * The numa stats for these pagesets need to be reset. + * Otherwise, they will end up skewing the stats of + * the nodes these zones are associated with. + */ + for_each_possible_cpu(cpu) { + struct per_cpu_pageset *pcp = &per_cpu(boot_pageset, cpu); + memset(pcp->vm_numa_stat_diff, 0, + sizeof(pcp->vm_numa_stat_diff)); + } +#endif + for_each_online_pgdat(pgdat) pgdat->per_cpu_nodestats = alloc_percpu(struct per_cpu_nodestat); _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 047/131] mm, page_alloc: reset the zone->watermark_boost early 2020-06-03 22:55 incoming Andrew Morton ` (45 preceding siblings ...) 2020-06-03 22:59 ` [patch 046/131] mm/page_alloc.c: reset numa stats for boot pagesets Andrew Morton @ 2020-06-03 22:59 ` Andrew Morton 2020-06-03 22:59 ` [patch 048/131] mm/page_alloc: restrict and formalize compound_page_dtors[] Andrew Morton ` (84 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:59 UTC (permalink / raw) To: akpm, bhe, charante, linux-mm, mm-commits, torvalds, vinmenon From: Charan Teja Reddy <charante@codeaurora.org> Subject: mm, page_alloc: reset the zone->watermark_boost early Updating the zone watermarks by any means, like min_free_kbytes, watermark_scale_factor etc, when ->watermark_boost is set will result in higher low and high watermarks than the user asked for. Below are the steps to reproduce the problem on a system running an Android kernel on Snapdragon hardware. 1) Default settings of the system are as below: #cat /proc/sys/vm/min_free_kbytes = 5162 #cat /proc/zoneinfo | grep -e boost -e low -e "high " -e min -e Node Node 0, zone Normal min 797 low 8340 high 8539 2) Monitor the zone->watermark_boost (by adding a debug print in the kernel) and whenever it is greater than zero, write the same value of min_free_kbytes obtained from step 1. #echo 5162 > /proc/sys/vm/min_free_kbytes 3) Then read the zone watermarks in the system while the ->watermark_boost is zero. This should show the same values of watermarks as step 1, but instead it shows higher values than were asked for. #cat /proc/zoneinfo | grep -e boost -e low -e "high " -e min -e Node Node 0, zone Normal min 797 low 21148 high 21347 These higher values are because of updating the zone watermarks using the macro min_wmark_pages(zone), which also adds the zone->watermark_boost.
#define min_wmark_pages(z) (z->_watermark[WMARK_MIN] + z->watermark_boost) So the steps that lead to the issue are: 1) On the extfrag event, watermarks are boosted by storing the required value in ->watermark_boost. 2) The user tries to update the zone watermark levels in the system through min_free_kbytes or watermark_scale_factor. 3) Later, when kswapd wakes up, it resets the zone->watermark_boost to zero. In step 2), we use the min_wmark_pages() macro to store the watermarks in the zone structure, thus the values are always offset by the ->watermark_boost value. This can be avoided by resetting the ->watermark_boost to zero before it is used. Link: http://lkml.kernel.org/r/1589457511-4255-1-git-send-email-charante@codeaurora.org Signed-off-by: Charan Teja Reddy <charante@codeaurora.org> Reviewed-by: Baoquan He <bhe@redhat.com> Cc: Vinayak Menon <vinmenon@codeaurora.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/page_alloc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) --- a/mm/page_alloc.c~mm-page_alloc-reset-the-zone-watermark_boost-early +++ a/mm/page_alloc.c @@ -7788,9 +7788,9 @@ static void __setup_per_zone_wmarks(void mult_frac(zone_managed_pages(zone), watermark_scale_factor, 10000)); + zone->watermark_boost = 0; zone->_watermark[WMARK_LOW] = min_wmark_pages(zone) + tmp; zone->_watermark[WMARK_HIGH] = min_wmark_pages(zone) + tmp * 2; - zone->watermark_boost = 0; spin_unlock_irqrestore(&zone->lock, flags); } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 048/131] mm/page_alloc: restrict and formalize compound_page_dtors[] 2020-06-03 22:55 incoming Andrew Morton ` (46 preceding siblings ...) 2020-06-03 22:59 ` [patch 047/131] mm, page_alloc: reset the zone->watermark_boost early Andrew Morton @ 2020-06-03 22:59 ` Andrew Morton 2020-06-03 22:59 ` [patch 049/131] mm/pagealloc.c: call touch_nmi_watchdog() on max order boundaries in deferred init Andrew Morton ` (83 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:59 UTC (permalink / raw) To: akpm, anshuman.khandual, david, linux-mm, mm-commits, torvalds From: Anshuman Khandual <anshuman.khandual@arm.com> Subject: mm/page_alloc: restrict and formalize compound_page_dtors[] Restrict elements in compound_page_dtors[] array per NR_COMPOUND_DTORS and explicitly position them according to enum compound_dtor_id. This improves protection against possible misalignment between compound_page_dtors[] and enum compound_dtor_id later on. Link: http://lkml.kernel.org/r/1589795958-19317-1-git-send-email-anshuman.khandual@arm.com Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/mm.h | 2 +- mm/page_alloc.c | 10 +++++----- 2 files changed, 6 insertions(+), 6 deletions(-) --- a/include/linux/mm.h~mm-page_alloc-restrict-and-formalize-compound_page_dtors +++ a/include/linux/mm.h @@ -867,7 +867,7 @@ enum compound_dtor_id { #endif NR_COMPOUND_DTORS, }; -extern compound_page_dtor * const compound_page_dtors[]; +extern compound_page_dtor * const compound_page_dtors[NR_COMPOUND_DTORS]; static inline void set_compound_page_dtor(struct page *page, enum compound_dtor_id compound_dtor) --- a/mm/page_alloc.c~mm-page_alloc-restrict-and-formalize-compound_page_dtors +++ a/mm/page_alloc.c @@ -302,14 +302,14 @@ const char * const migratetype_names[MIG #endif }; -compound_page_dtor * const 
compound_page_dtors[] = { - NULL, - free_compound_page, +compound_page_dtor * const compound_page_dtors[NR_COMPOUND_DTORS] = { + [NULL_COMPOUND_DTOR] = NULL, + [COMPOUND_PAGE_DTOR] = free_compound_page, #ifdef CONFIG_HUGETLB_PAGE - free_huge_page, + [HUGETLB_PAGE_DTOR] = free_huge_page, #endif #ifdef CONFIG_TRANSPARENT_HUGEPAGE - free_transhuge_page, + [TRANSHUGE_PAGE_DTOR] = free_transhuge_page, #endif }; _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 049/131] mm/pagealloc.c: call touch_nmi_watchdog() on max order boundaries in deferred init 2020-06-03 22:55 incoming Andrew Morton ` (47 preceding siblings ...) 2020-06-03 22:59 ` [patch 048/131] mm/page_alloc: restrict and formalize compound_page_dtors[] Andrew Morton @ 2020-06-03 22:59 ` Andrew Morton 2020-06-03 22:59 ` [patch 050/131] mm: initialize deferred pages with interrupts enabled Andrew Morton ` (82 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:59 UTC (permalink / raw) To: akpm, dan.j.williams, daniel.m.jordan, david, jmorris, ktkhai, linux-mm, mhocko, mm-commits, pasha.tatashin, sashal, shile.zhang, stable, torvalds, vbabka, yiwei From: Daniel Jordan <daniel.m.jordan@oracle.com> Subject: mm/pagealloc.c: call touch_nmi_watchdog() on max order boundaries in deferred init Patch series "initialize deferred pages with interrupts enabled", v4. Keep interrupts enabled during deferred page initialization in order to make code more modular and allow jiffies to update. Original approach, and discussion can be found here: http://lkml.kernel.org/r/20200311123848.118638-1-shile.zhang@linux.alibaba.com This patch (of 3): deferred_init_memmap() disables interrupts the entire time, so it calls touch_nmi_watchdog() periodically to avoid soft lockup splats. Soon it will run with interrupts enabled, at which point cond_resched() should be used instead. deferred_grow_zone() makes the same watchdog calls through code shared with deferred init but will continue to run with interrupts disabled, so it can't call cond_resched(). Pull the watchdog calls up to these two places to allow the first to be changed later, independently of the second. The frequency reduces from twice per pageblock (init and free) to once per max order block. 
Link: http://lkml.kernel.org/r/20200403140952.17177-2-pasha.tatashin@soleen.com Fixes: 3a2d7fa8a3d5 ("mm: disable interrupts while initializing deferred pages") Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com> Reviewed-by: David Hildenbrand <david@redhat.com> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Shile Zhang <shile.zhang@linux.alibaba.com> Cc: Kirill Tkhai <ktkhai@virtuozzo.com> Cc: James Morris <jmorris@namei.org> Cc: Sasha Levin <sashal@kernel.org> Cc: Yiqian Wei <yiwei@redhat.com> Cc: <stable@vger.kernel.org> [4.17+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/page_alloc.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) --- a/mm/page_alloc.c~mm-call-touch_nmi_watchdog-on-max-order-boundaries-in-deferred-init +++ a/mm/page_alloc.c @@ -1693,7 +1693,6 @@ static void __init deferred_free_pages(u } else if (!(pfn & nr_pgmask)) { deferred_free_range(pfn - nr_free, nr_free); nr_free = 1; - touch_nmi_watchdog(); } else { nr_free++; } @@ -1723,7 +1722,6 @@ static unsigned long __init deferred_in continue; } else if (!page || !(pfn & nr_pgmask)) { page = pfn_to_page(pfn); - touch_nmi_watchdog(); } else { page++; } @@ -1863,8 +1861,10 @@ static int __init deferred_init_memmap(v * that we can avoid introducing any issues with the buddy * allocator. */ - while (spfn < epfn) + while (spfn < epfn) { nr_pages += deferred_init_maxorder(&i, zone, &spfn, &epfn); + touch_nmi_watchdog(); + } zone_empty: pgdat_resize_unlock(pgdat, &flags); @@ -1948,6 +1948,7 @@ deferred_grow_zone(struct zone *zone, un first_deferred_pfn = spfn; nr_pages += deferred_init_maxorder(&i, zone, &spfn, &epfn); + touch_nmi_watchdog(); /* We should only stop along section boundaries */ if ((first_deferred_pfn ^ spfn) < PAGES_PER_SECTION) _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 050/131] mm: initialize deferred pages with interrupts enabled 2020-06-03 22:55 incoming Andrew Morton ` (48 preceding siblings ...) 2020-06-03 22:59 ` [patch 049/131] mm/pagealloc.c: call touch_nmi_watchdog() on max order boundaries in deferred init Andrew Morton @ 2020-06-03 22:59 ` Andrew Morton 2020-06-03 22:59 ` [patch 051/131] mm: call cond_resched() from deferred_init_memmap() Andrew Morton ` (81 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:59 UTC (permalink / raw) To: akpm, dan.j.williams, daniel.m.jordan, david, jmorris, ktkhai, linux-mm, mhocko, mm-commits, pasha.tatashin, sashal, shile.zhang, stable, torvalds, vbabka, yiwei From: Pavel Tatashin <pasha.tatashin@soleen.com> Subject: mm: initialize deferred pages with interrupts enabled Initializing struct pages is a long task and keeping interrupts disabled for the duration of this operation introduces a number of problems. 1. jiffies are not updated for a long period of time, and thus an incorrect time is reported. See proposed solution and discussion here: lkml/20200311123848.118638-1-shile.zhang@linux.alibaba.com 2. It prevents further improving deferred page initialization by allowing intra-node multi-threading. We are keeping interrupts disabled to solve a rather theoretical problem that was never observed in the real world (See 3a2d7fa8a3d5). Let's keep interrupts enabled. In case we ever encounter a scenario where an interrupt thread wants to allocate a large amount of memory this early in boot, we can deal with that by growing the zone (see deferred_grow_zone()) by the needed amount before starting deferred_init_memmap() threads.
Before: [ 1.232459] node 0 initialised, 12058412 pages in 1ms After: [ 1.632580] node 0 initialised, 12051227 pages in 436ms Link: http://lkml.kernel.org/r/20200403140952.17177-3-pasha.tatashin@soleen.com Fixes: 3a2d7fa8a3d5 ("mm: disable interrupts while initializing deferred pages") Reported-by: Shile Zhang <shile.zhang@linux.alibaba.com> Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Reviewed-by: David Hildenbrand <david@redhat.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: James Morris <jmorris@namei.org> Cc: Kirill Tkhai <ktkhai@virtuozzo.com> Cc: Sasha Levin <sashal@kernel.org> Cc: Yiqian Wei <yiwei@redhat.com> Cc: <stable@vger.kernel.org> [4.17+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/mmzone.h | 2 ++ mm/page_alloc.c | 20 +++++++------------- 2 files changed, 9 insertions(+), 13 deletions(-) --- a/include/linux/mmzone.h~mm-initialize-deferred-pages-with-interrupts-enabled +++ a/include/linux/mmzone.h @@ -680,6 +680,8 @@ typedef struct pglist_data { /* * Must be held any time you expect node_start_pfn, * node_present_pages, node_spanned_pages or nr_zones to stay constant. + * Also synchronizes pgdat->first_deferred_pfn during deferred page + * init. * * pgdat_resize_lock() and pgdat_resize_unlock() are provided to * manipulate node_size_lock without checking for CONFIG_MEMORY_HOTPLUG --- a/mm/page_alloc.c~mm-initialize-deferred-pages-with-interrupts-enabled +++ a/mm/page_alloc.c @@ -1844,6 +1844,13 @@ static int __init deferred_init_memmap(v BUG_ON(pgdat->first_deferred_pfn > pgdat_end_pfn(pgdat)); pgdat->first_deferred_pfn = ULONG_MAX; + /* + * Once we unlock here, the zone cannot be grown anymore, thus if an + * interrupt thread must allocate this early in boot, zone must be + * pre-grown prior to start of deferred page initialization. 
+ */ + pgdat_resize_unlock(pgdat, &flags); + /* Only the highest zone is deferred so find it */ for (zid = 0; zid < MAX_NR_ZONES; zid++) { zone = pgdat->node_zones + zid; @@ -1866,8 +1873,6 @@ static int __init deferred_init_memmap(v touch_nmi_watchdog(); } zone_empty: - pgdat_resize_unlock(pgdat, &flags); - /* Sanity check that the next zone really is unpopulated */ WARN_ON(++zid < MAX_NR_ZONES && populated_zone(++zone)); @@ -1910,17 +1915,6 @@ deferred_grow_zone(struct zone *zone, un pgdat_resize_lock(pgdat, &flags); /* - * If deferred pages have been initialized while we were waiting for - * the lock, return true, as the zone was grown. The caller will retry - * this zone. We won't return to this function since the caller also - * has this static branch. - */ - if (!static_branch_unlikely(&deferred_pages)) { - pgdat_resize_unlock(pgdat, &flags); - return true; - } - - /* * If someone grew this zone while we were waiting for spinlock, return * true, as there might be enough pages already. */ _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 051/131] mm: call cond_resched() from deferred_init_memmap() 2020-06-03 22:55 incoming Andrew Morton ` (49 preceding siblings ...) 2020-06-03 22:59 ` [patch 050/131] mm: initialize deferred pages with interrupts enabled Andrew Morton @ 2020-06-03 22:59 ` Andrew Morton 2020-06-03 22:59 ` [patch 052/131] padata: remove exit routine Andrew Morton ` (80 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:59 UTC (permalink / raw) To: akpm, dan.j.williams, daniel.m.jordan, david, jmorris, ktkhai, linux-mm, mhocko, mm-commits, pankaj.gupta.linux, pasha.tatashin, sashal, shile.zhang, stable, torvalds, vbabka, yiwei From: Pavel Tatashin <pasha.tatashin@soleen.com> Subject: mm: call cond_resched() from deferred_init_memmap() Now that deferred pages are initialized with interrupts enabled, we can replace touch_nmi_watchdog() with cond_resched(), as it was before 3a2d7fa8a3d5. For now, we cannot do the same in deferred_grow_zone() as it still initializes pages with interrupts disabled.
This change fixes RCU problem described in https://lkml.kernel.org/r/20200401104156.11564-2-david@redhat.com [ 60.474005] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 60.475000] rcu: 1-...0: (0 ticks this GP) idle=02a/1/0x4000000000000000 softirq=1/1 fqs=15000 [ 60.475000] rcu: (detected by 0, t=60002 jiffies, g=-1199, q=1) [ 60.475000] Sending NMI from CPU 0 to CPUs 1: [ 1.760091] NMI backtrace for cpu 1 [ 1.760091] CPU: 1 PID: 20 Comm: pgdatinit0 Not tainted 4.18.0-147.9.1.el8_1.x86_64 #1 [ 1.760091] Hardware name: Red Hat KVM, BIOS 1.13.0-1.module+el8.2.0+5520+4e5817f3 04/01/2014 [ 1.760091] RIP: 0010:__init_single_page.isra.65+0x10/0x4f [ 1.760091] Code: 48 83 cf 63 48 89 f8 0f 1f 40 00 48 89 c6 48 89 d7 e8 6b 18 80 ff 66 90 5b c3 31 c0 b9 10 00 00 00 49 89 f8 48 c1 e6 33 f3 ab <b8> 07 00 00 00 48 c1 e2 36 41 c7 40 34 01 00 00 00 48 c1 e0 33 41 [ 1.760091] RSP: 0000:ffffba783123be40 EFLAGS: 00000006 [ 1.760091] RAX: 0000000000000000 RBX: fffffad34405e300 RCX: 0000000000000000 [ 1.760091] RDX: 0000000000000000 RSI: 0010000000000000 RDI: fffffad34405e340 [ 1.760091] RBP: 0000000033f3177e R08: fffffad34405e300 R09: 0000000000000002 [ 1.760091] R10: 000000000000002b R11: ffff98afb691a500 R12: 0000000000000002 [ 1.760091] R13: 0000000000000000 R14: 000000003f03ea00 R15: 000000003e10178c [ 1.760091] FS: 0000000000000000(0000) GS:ffff9c9ebeb00000(0000) knlGS:0000000000000000 [ 1.760091] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 1.760091] CR2: 00000000ffffffff CR3: 000000a1cf20a001 CR4: 00000000003606e0 [ 1.760091] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 1.760091] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 1.760091] Call Trace: [ 1.760091] deferred_init_pages+0x8f/0xbf [ 1.760091] deferred_init_memmap+0x184/0x29d [ 1.760091] ? deferred_free_pages.isra.97+0xba/0xba [ 1.760091] kthread+0x112/0x130 [ 1.760091] ? 
kthread_flush_work_fn+0x10/0x10 [ 1.760091] ret_from_fork+0x35/0x40 [ 89.123011] node 0 initialised, 1055935372 pages in 88650ms Link: http://lkml.kernel.org/r/20200403140952.17177-4-pasha.tatashin@soleen.com Fixes: 3a2d7fa8a3d5 ("mm: disable interrupts while initializing deferred pages") Reported-by: Yiqian Wei <yiwei@redhat.com> Tested-by: David Hildenbrand <david@redhat.com> Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com> Acked-by: Michal Hocko <mhocko@suse.com> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: James Morris <jmorris@namei.org> Cc: Kirill Tkhai <ktkhai@virtuozzo.com> Cc: Sasha Levin <sashal@kernel.org> Cc: Shile Zhang <shile.zhang@linux.alibaba.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: <stable@vger.kernel.org> [4.17+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/page_alloc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) --- a/mm/page_alloc.c~mm-call-cond_resched-from-deferred_init_memmap +++ a/mm/page_alloc.c @@ -1870,7 +1870,7 @@ static int __init deferred_init_memmap(v */ while (spfn < epfn) { nr_pages += deferred_init_maxorder(&i, zone, &spfn, &epfn); - touch_nmi_watchdog(); + cond_resched(); } zone_empty: /* Sanity check that the next zone really is unpopulated */ _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 052/131] padata: remove exit routine 2020-06-03 22:55 incoming Andrew Morton ` (50 preceding siblings ...) 2020-06-03 22:59 ` [patch 051/131] mm: call cond_resched() from deferred_init_memmap() Andrew Morton @ 2020-06-03 22:59 ` Andrew Morton 2020-06-03 22:59 ` [patch 053/131] padata: initialize earlier Andrew Morton ` (79 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:59 UTC (permalink / raw) To: akpm, alex.williamson, alexander.h.duyck, corbet, dan.j.williams, daniel.m.jordan, dave.hansen, david, elliott, herbert, jgg, josh, ktkhai, linux-mm, mhocko, mm-commits, pasha.tatashin, pavel, peterz, rdunlap, shile.zhang, steffen.klassert, steven.sistare, tj, torvalds, ziy From: Daniel Jordan <daniel.m.jordan@oracle.com> Subject: padata: remove exit routine Patch series "padata: parallelize deferred page init", v3. Deferred struct page init is a bottleneck in kernel boot--the biggest for us and probably others. Optimizing it maximizes availability for large-memory systems and allows spinning up short-lived VMs as needed without having to leave them running. It also benefits bare metal machines hosting VMs that are sensitive to downtime. In projects such as VMM Fast Restart[1], where guest state is preserved across kexec reboot, it helps prevent application and network timeouts in the guests. So, multithread deferred init to take full advantage of system memory bandwidth. Extend padata, a framework that handles many parallel singlethreaded jobs, to handle multithreaded jobs as well by adding support for splitting up the work evenly, specifying a minimum amount of work that's appropriate for one helper thread to do, load balancing between helpers, and coordinating them. More documentation in patches 4 and 8. This series is the first step in a project to address other memory proportional bottlenecks in the kernel such as pmem struct page init, vfio page pinning, hugetlb fallocate, and munmap. 
Deferred page init doesn't require concurrency limits, resource control, or priority adjustments like these other users will because it happens during boot when the system is otherwise idle and waiting for page init to finish. This has been run on a variety of x86 systems and speeds up kernel boot by 4% to 49%, saving up to 1.6 out of 4 seconds. Patch 6 has more numbers. This patch (of 8): padata_driver_exit() is unnecessary because padata isn't built as a module and doesn't exit. padata's init routine will soon allocate memory, so getting rid of the exit function now avoids pointless code to free it. Link: http://lkml.kernel.org/r/20200527173608.2885243-1-daniel.m.jordan@oracle.com Link: http://lkml.kernel.org/r/20200527173608.2885243-2-daniel.m.jordan@oracle.com Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Tested-by: Josh Triplett <josh@joshtriplett.org> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com> Cc: Alex Williamson <alex.williamson@redhat.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: David Hildenbrand <david@redhat.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Kirill Tkhai <ktkhai@virtuozzo.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Pavel Machek <pavel@ucw.cz> Cc: Pavel Tatashin <pasha.tatashin@soleen.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Robert Elliott <elliott@hpe.com> Cc: Shile Zhang <shile.zhang@linux.alibaba.com> Cc: Steffen Klassert <steffen.klassert@secunet.com> Cc: Steven Sistare <steven.sistare@oracle.com> Cc: Tejun Heo <tj@kernel.org> Cc: Zi Yan <ziy@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- kernel/padata.c | 6 ------ 1 file changed, 6 deletions(-) --- a/kernel/padata.c~padata-remove-exit-routine +++ a/kernel/padata.c @@ -1074,10 +1074,4 @@ static __init int padata_driver_init(voi } 
module_init(padata_driver_init); -static __exit void padata_driver_exit(void) -{ - cpuhp_remove_multi_state(CPUHP_PADATA_DEAD); - cpuhp_remove_multi_state(hp_online); -} -module_exit(padata_driver_exit); #endif _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 053/131] padata: initialize earlier 2020-06-03 22:55 incoming Andrew Morton ` (51 preceding siblings ...) 2020-06-03 22:59 ` [patch 052/131] padata: remove exit routine Andrew Morton @ 2020-06-03 22:59 ` Andrew Morton 2020-06-03 22:59 ` [patch 054/131] padata: allocate work structures for parallel jobs from a pool Andrew Morton ` (78 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:59 UTC (permalink / raw) To: akpm, alex.williamson, alexander.h.duyck, corbet, dan.j.williams, daniel.m.jordan, dave.hansen, david, elliott, herbert, jgg, josh, ktkhai, linux-mm, mhocko, mm-commits, pasha.tatashin, pavel, peterz, rdunlap, shile.zhang, steffen.klassert, steven.sistare, tj, torvalds, ziy From: Daniel Jordan <daniel.m.jordan@oracle.com> Subject: padata: initialize earlier padata will soon initialize the system's struct pages in parallel, so it needs to be ready by page_alloc_init_late(). The error return from padata_driver_init() triggers an initcall warning, so add a warning to padata_init() to avoid silent failure. 
Link: http://lkml.kernel.org/r/20200527173608.2885243-3-daniel.m.jordan@oracle.com Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Tested-by: Josh Triplett <josh@joshtriplett.org> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com> Cc: Alex Williamson <alex.williamson@redhat.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: David Hildenbrand <david@redhat.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Kirill Tkhai <ktkhai@virtuozzo.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Pavel Machek <pavel@ucw.cz> Cc: Pavel Tatashin <pasha.tatashin@soleen.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Robert Elliott <elliott@hpe.com> Cc: Shile Zhang <shile.zhang@linux.alibaba.com> Cc: Steffen Klassert <steffen.klassert@secunet.com> Cc: Steven Sistare <steven.sistare@oracle.com> Cc: Tejun Heo <tj@kernel.org> Cc: Zi Yan <ziy@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/padata.h | 6 ++++++ init/main.c | 2 ++ kernel/padata.c | 17 ++++++++--------- 3 files changed, 16 insertions(+), 9 deletions(-) --- a/include/linux/padata.h~padata-initialize-earlier +++ a/include/linux/padata.h @@ -166,6 +166,12 @@ struct padata_instance { #define PADATA_INVALID 4 }; +#ifdef CONFIG_PADATA +extern void __init padata_init(void); +#else +static inline void __init padata_init(void) {} +#endif + extern struct padata_instance *padata_alloc_possible(const char *name); extern void padata_free(struct padata_instance *pinst); extern struct padata_shell *padata_alloc_shell(struct padata_instance *pinst); --- a/init/main.c~padata-initialize-earlier +++ a/init/main.c @@ -63,6 +63,7 @@ #include <linux/debugobjects.h> #include <linux/lockdep.h> #include <linux/kmemleak.h> +#include <linux/padata.h> #include <linux/pid_namespace.h> #include <linux/device/driver.h> #include 
<linux/kthread.h> @@ -1482,6 +1483,7 @@ static noinline void __init kernel_init_ smp_init(); sched_init_smp(); + padata_init(); page_alloc_init_late(); /* Initialize page ext after all struct pages are initialized. */ page_ext_init(); --- a/kernel/padata.c~padata-initialize-earlier +++ a/kernel/padata.c @@ -31,7 +31,6 @@ #include <linux/slab.h> #include <linux/sysfs.h> #include <linux/rcupdate.h> -#include <linux/module.h> #define MAX_OBJ_NUM 1000 @@ -1052,26 +1051,26 @@ void padata_free_shell(struct padata_she } EXPORT_SYMBOL(padata_free_shell); -#ifdef CONFIG_HOTPLUG_CPU - -static __init int padata_driver_init(void) +void __init padata_init(void) { +#ifdef CONFIG_HOTPLUG_CPU int ret; ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "padata:online", padata_cpu_online, NULL); if (ret < 0) - return ret; + goto err; hp_online = ret; ret = cpuhp_setup_state_multi(CPUHP_PADATA_DEAD, "padata:dead", NULL, padata_cpu_dead); if (ret < 0) { cpuhp_remove_multi_state(hp_online); - return ret; + goto err; } - return 0; -} -module_init(padata_driver_init); + return; +err: + pr_warn("padata: initialization failed\n"); #endif +} _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 054/131] padata: allocate work structures for parallel jobs from a pool 2020-06-03 22:55 incoming Andrew Morton ` (52 preceding siblings ...) 2020-06-03 22:59 ` [patch 053/131] padata: initialize earlier Andrew Morton @ 2020-06-03 22:59 ` Andrew Morton 2020-06-03 22:59 ` [patch 055/131] padata: add basic support for multithreaded jobs Andrew Morton ` (77 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:59 UTC (permalink / raw) To: akpm, alex.williamson, alexander.h.duyck, corbet, dan.j.williams, daniel.m.jordan, dave.hansen, david, elliott, herbert, jgg, josh, ktkhai, linux-mm, mhocko, mm-commits, pasha.tatashin, pavel, peterz, rdunlap, shile.zhang, steffen.klassert, steven.sistare, tj, torvalds, ziy From: Daniel Jordan <daniel.m.jordan@oracle.com> Subject: padata: allocate work structures for parallel jobs from a pool padata allocates per-CPU, per-instance work structs for parallel jobs. A do_parallel call assigns a job to a sequence number and hashes the number to a CPU, where the job will eventually run using the corresponding work. This approach fit with how padata used to bind a job to each CPU round-robin, but it makes less sense after commit bfde23ce200e6 ("padata: unbind parallel jobs from specific CPUs") because a work isn't bound to a particular CPU anymore, and isn't needed at all for multithreaded jobs because they don't have sequence numbers. Replace the per-CPU works with a preallocated pool, which allows sharing them between existing padata users and the upcoming multithreaded user. The pool will also facilitate setting NUMA-aware concurrency limits with later users. The pool is sized according to the number of possible CPUs. With this limit, MAX_OBJ_NUM no longer makes sense, so remove it. If the global pool is exhausted, a parallel job is run in the current task instead to throttle a system trying to do too much in parallel. 
Link: http://lkml.kernel.org/r/20200527173608.2885243-4-daniel.m.jordan@oracle.com Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Tested-by: Josh Triplett <josh@joshtriplett.org> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com> Cc: Alex Williamson <alex.williamson@redhat.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: David Hildenbrand <david@redhat.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Kirill Tkhai <ktkhai@virtuozzo.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Pavel Machek <pavel@ucw.cz> Cc: Pavel Tatashin <pasha.tatashin@soleen.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Robert Elliott <elliott@hpe.com> Cc: Shile Zhang <shile.zhang@linux.alibaba.com> Cc: Steffen Klassert <steffen.klassert@secunet.com> Cc: Steven Sistare <steven.sistare@oracle.com> Cc: Tejun Heo <tj@kernel.org> Cc: Zi Yan <ziy@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/padata.h | 8 -- kernel/padata.c | 118 +++++++++++++++++++++++++-------------- 2 files changed, 78 insertions(+), 48 deletions(-) --- a/include/linux/padata.h~padata-allocate-work-structures-for-parallel-jobs-from-a-pool +++ a/include/linux/padata.h @@ -24,7 +24,6 @@ * @list: List entry, to attach to the padata lists. * @pd: Pointer to the internal control structure. * @cb_cpu: Callback cpu for serializatioon. - * @cpu: Cpu for parallelization. * @seq_nr: Sequence number of the parallelized data object. * @info: Used to pass information from the parallel to the serial function. * @parallel: Parallel execution function. 
@@ -34,7 +33,6 @@ struct padata_priv { struct list_head list; struct parallel_data *pd; int cb_cpu; - int cpu; unsigned int seq_nr; int info; void (*parallel)(struct padata_priv *padata); @@ -68,15 +66,11 @@ struct padata_serial_queue { /** * struct padata_parallel_queue - The percpu padata parallel queue * - * @parallel: List to wait for parallelization. * @reorder: List to wait for reordering after parallel processing. - * @work: work struct for parallelization. * @num_obj: Number of objects that are processed by this cpu. */ struct padata_parallel_queue { - struct padata_list parallel; struct padata_list reorder; - struct work_struct work; atomic_t num_obj; }; @@ -111,7 +105,7 @@ struct parallel_data { struct padata_parallel_queue __percpu *pqueue; struct padata_serial_queue __percpu *squeue; atomic_t refcnt; - atomic_t seq_nr; + unsigned int seq_nr; unsigned int processed; int cpu; struct padata_cpumask cpumask; --- a/kernel/padata.c~padata-allocate-work-structures-for-parallel-jobs-from-a-pool +++ a/kernel/padata.c @@ -32,7 +32,15 @@ #include <linux/sysfs.h> #include <linux/rcupdate.h> -#define MAX_OBJ_NUM 1000 +struct padata_work { + struct work_struct pw_work; + struct list_head pw_list; /* padata_free_works linkage */ + void *pw_data; +}; + +static DEFINE_SPINLOCK(padata_works_lock); +static struct padata_work *padata_works; +static LIST_HEAD(padata_free_works); static void padata_free_pd(struct parallel_data *pd); @@ -58,30 +66,44 @@ static int padata_cpu_hash(struct parall return padata_index_to_cpu(pd, cpu_index); } -static void padata_parallel_worker(struct work_struct *parallel_work) +static struct padata_work *padata_work_alloc(void) { - struct padata_parallel_queue *pqueue; - LIST_HEAD(local_list); + struct padata_work *pw; - local_bh_disable(); - pqueue = container_of(parallel_work, - struct padata_parallel_queue, work); + lockdep_assert_held(&padata_works_lock); - spin_lock(&pqueue->parallel.lock); - list_replace_init(&pqueue->parallel.list, 
&local_list); - spin_unlock(&pqueue->parallel.lock); + if (list_empty(&padata_free_works)) + return NULL; /* No more work items allowed to be queued. */ - while (!list_empty(&local_list)) { - struct padata_priv *padata; + pw = list_first_entry(&padata_free_works, struct padata_work, pw_list); + list_del(&pw->pw_list); + return pw; +} - padata = list_entry(local_list.next, - struct padata_priv, list); +static void padata_work_init(struct padata_work *pw, work_func_t work_fn, + void *data) +{ + INIT_WORK(&pw->pw_work, work_fn); + pw->pw_data = data; +} - list_del_init(&padata->list); +static void padata_work_free(struct padata_work *pw) +{ + lockdep_assert_held(&padata_works_lock); + list_add(&pw->pw_list, &padata_free_works); +} - padata->parallel(padata); - } +static void padata_parallel_worker(struct work_struct *parallel_work) +{ + struct padata_work *pw = container_of(parallel_work, struct padata_work, + pw_work); + struct padata_priv *padata = pw->pw_data; + local_bh_disable(); + padata->parallel(padata); + spin_lock(&padata_works_lock); + padata_work_free(pw); + spin_unlock(&padata_works_lock); local_bh_enable(); } @@ -105,9 +127,9 @@ int padata_do_parallel(struct padata_she struct padata_priv *padata, int *cb_cpu) { struct padata_instance *pinst = ps->pinst; - int i, cpu, cpu_index, target_cpu, err; - struct padata_parallel_queue *queue; + int i, cpu, cpu_index, err; struct parallel_data *pd; + struct padata_work *pw; rcu_read_lock_bh(); @@ -135,25 +157,25 @@ int padata_do_parallel(struct padata_she if ((pinst->flags & PADATA_RESET)) goto out; - if (atomic_read(&pd->refcnt) >= MAX_OBJ_NUM) - goto out; - - err = 0; atomic_inc(&pd->refcnt); padata->pd = pd; padata->cb_cpu = *cb_cpu; - padata->seq_nr = atomic_inc_return(&pd->seq_nr); - target_cpu = padata_cpu_hash(pd, padata->seq_nr); - padata->cpu = target_cpu; - queue = per_cpu_ptr(pd->pqueue, target_cpu); - - spin_lock(&queue->parallel.lock); - list_add_tail(&padata->list, &queue->parallel.list); - 
spin_unlock(&queue->parallel.lock); + rcu_read_unlock_bh(); - queue_work(pinst->parallel_wq, &queue->work); + spin_lock(&padata_works_lock); + padata->seq_nr = ++pd->seq_nr; + pw = padata_work_alloc(); + spin_unlock(&padata_works_lock); + if (pw) { + padata_work_init(pw, padata_parallel_worker, padata); + queue_work(pinst->parallel_wq, &pw->pw_work); + } else { + /* Maximum works limit exceeded, run in the current task. */ + padata->parallel(padata); + } + return 0; out: rcu_read_unlock_bh(); @@ -324,8 +346,9 @@ static void padata_serial_worker(struct void padata_do_serial(struct padata_priv *padata) { struct parallel_data *pd = padata->pd; + int hashed_cpu = padata_cpu_hash(pd, padata->seq_nr); struct padata_parallel_queue *pqueue = per_cpu_ptr(pd->pqueue, - padata->cpu); + hashed_cpu); struct padata_priv *cur; spin_lock(&pqueue->reorder.lock); @@ -416,8 +439,6 @@ static void padata_init_pqueues(struct p pqueue = per_cpu_ptr(pd->pqueue, cpu); __padata_list_init(&pqueue->reorder); - __padata_list_init(&pqueue->parallel); - INIT_WORK(&pqueue->work, padata_parallel_worker); atomic_set(&pqueue->num_obj, 0); } } @@ -451,7 +472,7 @@ static struct parallel_data *padata_allo padata_init_pqueues(pd); padata_init_squeues(pd); - atomic_set(&pd->seq_nr, -1); + pd->seq_nr = -1; atomic_set(&pd->refcnt, 1); spin_lock_init(&pd->lock); pd->cpu = cpumask_first(pd->cpumask.pcpu); @@ -1053,6 +1074,7 @@ EXPORT_SYMBOL(padata_free_shell); void __init padata_init(void) { + unsigned int i, possible_cpus; #ifdef CONFIG_HOTPLUG_CPU int ret; @@ -1064,13 +1086,27 @@ void __init padata_init(void) ret = cpuhp_setup_state_multi(CPUHP_PADATA_DEAD, "padata:dead", NULL, padata_cpu_dead); - if (ret < 0) { - cpuhp_remove_multi_state(hp_online); - goto err; - } + if (ret < 0) + goto remove_online_state; +#endif + + possible_cpus = num_possible_cpus(); + padata_works = kmalloc_array(possible_cpus, sizeof(struct padata_work), + GFP_KERNEL); + if (!padata_works) + goto remove_dead_state; + + for (i = 0; 
i < possible_cpus; ++i) + list_add(&padata_works[i].pw_list, &padata_free_works); return; + +remove_dead_state: +#ifdef CONFIG_HOTPLUG_CPU + cpuhp_remove_multi_state(CPUHP_PADATA_DEAD); +remove_online_state: + cpuhp_remove_multi_state(hp_online); err: - pr_warn("padata: initialization failed\n"); #endif + pr_warn("padata: initialization failed\n"); } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 055/131] padata: add basic support for multithreaded jobs 2020-06-03 22:55 incoming Andrew Morton ` (53 preceding siblings ...) 2020-06-03 22:59 ` [patch 054/131] padata: allocate work structures for parallel jobs from a pool Andrew Morton @ 2020-06-03 22:59 ` Andrew Morton 2020-06-03 22:59 ` [patch 056/131] mm: don't track number of pages during deferred initialization Andrew Morton ` (76 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:59 UTC (permalink / raw) To: akpm, alex.williamson, alexander.h.duyck, corbet, dan.j.williams, daniel.m.jordan, dave.hansen, david, elliott, herbert, jgg, josh, ktkhai, linux-mm, mhocko, mm-commits, pasha.tatashin, pavel, peterz, rdunlap, shile.zhang, steffen.klassert, steven.sistare, tj, torvalds, ziy From: Daniel Jordan <daniel.m.jordan@oracle.com> Subject: padata: add basic support for multithreaded jobs Sometimes the kernel doesn't take full advantage of system memory bandwidth, leading to a single CPU spending excessive time in initialization paths where the data scales with memory size. Multithreading naturally addresses this problem. Extend padata, a framework that handles many parallel yet singlethreaded jobs, to also handle multithreaded jobs by adding support for splitting up the work evenly, specifying a minimum amount of work that's appropriate for one helper thread to do, load balancing between helpers, and coordinating them. This is inspired by work from Pavel Tatashin and Steve Sistare. 
Link: http://lkml.kernel.org/r/20200527173608.2885243-5-daniel.m.jordan@oracle.com Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Tested-by: Josh Triplett <josh@joshtriplett.org> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com> Cc: Alex Williamson <alex.williamson@redhat.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: David Hildenbrand <david@redhat.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Kirill Tkhai <ktkhai@virtuozzo.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Pavel Machek <pavel@ucw.cz> Cc: Pavel Tatashin <pasha.tatashin@soleen.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Robert Elliott <elliott@hpe.com> Cc: Shile Zhang <shile.zhang@linux.alibaba.com> Cc: Steffen Klassert <steffen.klassert@secunet.com> Cc: Steven Sistare <steven.sistare@oracle.com> Cc: Tejun Heo <tj@kernel.org> Cc: Zi Yan <ziy@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/padata.h | 29 +++++++ kernel/padata.c | 152 ++++++++++++++++++++++++++++++++++++++- 2 files changed, 178 insertions(+), 3 deletions(-) --- a/include/linux/padata.h~padata-add-basic-support-for-multithreaded-jobs +++ a/include/linux/padata.h @@ -4,6 +4,9 @@ * * Copyright (C) 2008, 2009 secunet Security Networks AG * Copyright (C) 2008, 2009 Steffen Klassert <steffen.klassert@secunet.com> + * + * Copyright (c) 2020 Oracle and/or its affiliates. + * Author: Daniel Jordan <daniel.m.jordan@oracle.com> */ #ifndef PADATA_H @@ -131,6 +134,31 @@ struct padata_shell { }; /** + * struct padata_mt_job - represents one multithreaded job + * + * @thread_fn: Called for each chunk of work that a padata thread does. + * @fn_arg: The thread function argument. + * @start: The start of the job (units are job-specific). + * @size: size of this node's work (units are job-specific). 
+ * @align: Ranges passed to the thread function fall on this boundary, with the + * possible exceptions of the beginning and end of the job. + * @min_chunk: The minimum chunk size in job-specific units. This allows + * the client to communicate the minimum amount of work that's + * appropriate for one worker thread to do at once. + * @max_threads: Max threads to use for the job, actual number may be less + * depending on task size and minimum chunk size. + */ +struct padata_mt_job { + void (*thread_fn)(unsigned long start, unsigned long end, void *arg); + void *fn_arg; + unsigned long start; + unsigned long size; + unsigned long align; + unsigned long min_chunk; + int max_threads; +}; + +/** * struct padata_instance - The overall control structure. * * @cpu_online_node: Linkage for CPU online callback. @@ -173,6 +201,7 @@ extern void padata_free_shell(struct pad extern int padata_do_parallel(struct padata_shell *ps, struct padata_priv *padata, int *cb_cpu); extern void padata_do_serial(struct padata_priv *padata); +extern void __init padata_do_multithreaded(struct padata_mt_job *job); extern int padata_set_cpumask(struct padata_instance *pinst, int cpumask_type, cpumask_var_t cpumask); extern int padata_start(struct padata_instance *pinst); --- a/kernel/padata.c~padata-add-basic-support-for-multithreaded-jobs +++ a/kernel/padata.c @@ -7,6 +7,9 @@ * Copyright (C) 2008, 2009 secunet Security Networks AG * Copyright (C) 2008, 2009 Steffen Klassert <steffen.klassert@secunet.com> * + * Copyright (c) 2020 Oracle and/or its affiliates. + * Author: Daniel Jordan <daniel.m.jordan@oracle.com> + * * This program is free software; you can redistribute it and/or modify it * under the terms and conditions of the GNU General Public License, * version 2, as published by the Free Software Foundation. @@ -21,6 +24,7 @@ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 
*/ +#include <linux/completion.h> #include <linux/export.h> #include <linux/cpumask.h> #include <linux/err.h> @@ -32,6 +36,8 @@ #include <linux/sysfs.h> #include <linux/rcupdate.h> +#define PADATA_WORK_ONSTACK 1 /* Work's memory is on stack */ + struct padata_work { struct work_struct pw_work; struct list_head pw_list; /* padata_free_works linkage */ @@ -42,7 +48,17 @@ static DEFINE_SPINLOCK(padata_works_lock static struct padata_work *padata_works; static LIST_HEAD(padata_free_works); +struct padata_mt_job_state { + spinlock_t lock; + struct completion completion; + struct padata_mt_job *job; + int nworks; + int nworks_fini; + unsigned long chunk_size; +}; + static void padata_free_pd(struct parallel_data *pd); +static void __init padata_mt_helper(struct work_struct *work); static int padata_index_to_cpu(struct parallel_data *pd, int cpu_index) { @@ -81,18 +97,56 @@ static struct padata_work *padata_work_a } static void padata_work_init(struct padata_work *pw, work_func_t work_fn, - void *data) + void *data, int flags) { - INIT_WORK(&pw->pw_work, work_fn); + if (flags & PADATA_WORK_ONSTACK) + INIT_WORK_ONSTACK(&pw->pw_work, work_fn); + else + INIT_WORK(&pw->pw_work, work_fn); pw->pw_data = data; } +static int __init padata_work_alloc_mt(int nworks, void *data, + struct list_head *head) +{ + int i; + + spin_lock(&padata_works_lock); + /* Start at 1 because the current task participates in the job. 
*/ + for (i = 1; i < nworks; ++i) { + struct padata_work *pw = padata_work_alloc(); + + if (!pw) + break; + padata_work_init(pw, padata_mt_helper, data, 0); + list_add(&pw->pw_list, head); + } + spin_unlock(&padata_works_lock); + + return i; +} + static void padata_work_free(struct padata_work *pw) { lockdep_assert_held(&padata_works_lock); list_add(&pw->pw_list, &padata_free_works); } +static void __init padata_works_free(struct list_head *works) +{ + struct padata_work *cur, *next; + + if (list_empty(works)) + return; + + spin_lock(&padata_works_lock); + list_for_each_entry_safe(cur, next, works, pw_list) { + list_del(&cur->pw_list); + padata_work_free(cur); + } + spin_unlock(&padata_works_lock); +} + static void padata_parallel_worker(struct work_struct *parallel_work) { struct padata_work *pw = container_of(parallel_work, struct padata_work, @@ -168,7 +222,7 @@ int padata_do_parallel(struct padata_she pw = padata_work_alloc(); spin_unlock(&padata_works_lock); if (pw) { - padata_work_init(pw, padata_parallel_worker, padata); + padata_work_init(pw, padata_parallel_worker, padata, 0); queue_work(pinst->parallel_wq, &pw->pw_work); } else { /* Maximum works limit exceeded, run in the current task. */ @@ -409,6 +463,98 @@ out: return err; } +static void __init padata_mt_helper(struct work_struct *w) +{ + struct padata_work *pw = container_of(w, struct padata_work, pw_work); + struct padata_mt_job_state *ps = pw->pw_data; + struct padata_mt_job *job = ps->job; + bool done; + + spin_lock(&ps->lock); + + while (job->size > 0) { + unsigned long start, size, end; + + start = job->start; + /* So end is chunk size aligned if enough work remains. 
*/ + size = roundup(start + 1, ps->chunk_size) - start; + size = min(size, job->size); + end = start + size; + + job->start = end; + job->size -= size; + + spin_unlock(&ps->lock); + job->thread_fn(start, end, job->fn_arg); + spin_lock(&ps->lock); + } + + ++ps->nworks_fini; + done = (ps->nworks_fini == ps->nworks); + spin_unlock(&ps->lock); + + if (done) + complete(&ps->completion); +} + +/** + * padata_do_multithreaded - run a multithreaded job + * @job: Description of the job. + * + * See the definition of struct padata_mt_job for more details. + */ +void __init padata_do_multithreaded(struct padata_mt_job *job) +{ + /* In case threads finish at different times. */ + static const unsigned long load_balance_factor = 4; + struct padata_work my_work, *pw; + struct padata_mt_job_state ps; + LIST_HEAD(works); + int nworks; + + if (job->size == 0) + return; + + /* Ensure at least one thread when size < min_chunk. */ + nworks = max(job->size / job->min_chunk, 1ul); + nworks = min(nworks, job->max_threads); + + if (nworks == 1) { + /* Single thread, no coordination needed, cut to the chase. */ + job->thread_fn(job->start, job->start + job->size, job->fn_arg); + return; + } + + spin_lock_init(&ps.lock); + init_completion(&ps.completion); + ps.job = job; + ps.nworks = padata_work_alloc_mt(nworks, &ps, &works); + ps.nworks_fini = 0; + + /* + * Chunk size is the amount of work a helper does per call to the + * thread function. Load balance large jobs between threads by + * increasing the number of chunks, guarantee at least the minimum + * chunk size from the caller, and honor the caller's alignment. + */ + ps.chunk_size = job->size / (ps.nworks * load_balance_factor); + ps.chunk_size = max(ps.chunk_size, job->min_chunk); + ps.chunk_size = roundup(ps.chunk_size, job->align); + + list_for_each_entry(pw, &works, pw_list) + queue_work(system_unbound_wq, &pw->pw_work); + + /* Use the current thread, which saves starting a workqueue worker. 
*/ + padata_work_init(&my_work, padata_mt_helper, &ps, PADATA_WORK_ONSTACK); + padata_mt_helper(&my_work.pw_work); + + /* Wait for all the helpers to finish. */ + wait_for_completion(&ps.completion); + + destroy_work_on_stack(&my_work.pw_work); + padata_works_free(&works); +} + static void __padata_list_init(struct padata_list *pd_list) { INIT_LIST_HEAD(&pd_list->list); _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 056/131] mm: don't track number of pages during deferred initialization 2020-06-03 22:55 incoming Andrew Morton ` (54 preceding siblings ...) 2020-06-03 22:59 ` [patch 055/131] padata: add basic support for multithreaded jobs Andrew Morton @ 2020-06-03 22:59 ` Andrew Morton 2020-06-03 22:59 ` [patch 057/131] mm: parallelize deferred_init_memmap() Andrew Morton ` (75 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:59 UTC (permalink / raw) To: akpm, alex.williamson, alexander.h.duyck, corbet, dan.j.williams, daniel.m.jordan, dave.hansen, david, elliott, herbert, jgg, josh, ktkhai, linux-mm, mhocko, mm-commits, pasha.tatashin, pavel, peterz, rdunlap, shile.zhang, steffen.klassert, steven.sistare, tj, torvalds, ziy From: Daniel Jordan <daniel.m.jordan@oracle.com> Subject: mm: don't track number of pages during deferred initialization Deferred page init used to report the number of pages initialized: node 0 initialised, 32439114 pages in 97ms Tracking this makes the code more complicated when using multiple threads. Given that the statistic probably has limited value, especially since a zone grows on demand so that the page count can vary, just remove it. 
The boot message now looks like node 0 deferred pages initialised in 97ms Link: http://lkml.kernel.org/r/20200527173608.2885243-6-daniel.m.jordan@oracle.com Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Suggested-by: Alexander Duyck <alexander.h.duyck@linux.intel.com> Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com> Cc: Alex Williamson <alex.williamson@redhat.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: David Hildenbrand <david@redhat.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Josh Triplett <josh@joshtriplett.org> Cc: Kirill Tkhai <ktkhai@virtuozzo.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Pavel Machek <pavel@ucw.cz> Cc: Pavel Tatashin <pasha.tatashin@soleen.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Robert Elliott <elliott@hpe.com> Cc: Shile Zhang <shile.zhang@linux.alibaba.com> Cc: Steffen Klassert <steffen.klassert@secunet.com> Cc: Steven Sistare <steven.sistare@oracle.com> Cc: Tejun Heo <tj@kernel.org> Cc: Zi Yan <ziy@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/page_alloc.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) --- a/mm/page_alloc.c~mm-dont-track-number-of-pages-during-deferred-initialization +++ a/mm/page_alloc.c @@ -1820,7 +1820,7 @@ static int __init deferred_init_memmap(v { pg_data_t *pgdat = data; const struct cpumask *cpumask = cpumask_of_node(pgdat->node_id); - unsigned long spfn = 0, epfn = 0, nr_pages = 0; + unsigned long spfn = 0, epfn = 0; unsigned long first_init_pfn, flags; unsigned long start = jiffies; struct zone *zone; @@ -1869,15 +1869,15 @@ static int __init deferred_init_memmap(v * allocator. 
*/ while (spfn < epfn) { - nr_pages += deferred_init_maxorder(&i, zone, &spfn, &epfn); + deferred_init_maxorder(&i, zone, &spfn, &epfn); cond_resched(); } zone_empty: /* Sanity check that the next zone really is unpopulated */ WARN_ON(++zid < MAX_NR_ZONES && populated_zone(++zone)); - pr_info("node %d initialised, %lu pages in %ums\n", - pgdat->node_id, nr_pages, jiffies_to_msecs(jiffies - start)); + pr_info("node %d deferred pages initialised in %ums\n", + pgdat->node_id, jiffies_to_msecs(jiffies - start)); pgdat_init_report_one_done(); return 0; _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 057/131] mm: parallelize deferred_init_memmap() 2020-06-03 22:55 incoming Andrew Morton ` (55 preceding siblings ...) 2020-06-03 22:59 ` [patch 056/131] mm: don't track number of pages during deferred initialization Andrew Morton @ 2020-06-03 22:59 ` Andrew Morton 2020-06-03 22:59 ` [patch 058/131] mm: make deferred init's max threads arch-specific Andrew Morton ` (74 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:59 UTC (permalink / raw) To: akpm, alex.williamson, alexander.h.duyck, corbet, dan.j.williams, daniel.m.jordan, dave.hansen, david, elliott, herbert, jgg, josh, ktkhai, linux-mm, mhocko, mm-commits, pasha.tatashin, pavel, peterz, rdunlap, shile.zhang, steffen.klassert, steven.sistare, tj, torvalds, ziy From: Daniel Jordan <daniel.m.jordan@oracle.com> Subject: mm: parallelize deferred_init_memmap() Deferred struct page init is a significant bottleneck in kernel boot. Optimizing it maximizes availability for large-memory systems and allows spinning up short-lived VMs as needed without having to leave them running. It also benefits bare metal machines hosting VMs that are sensitive to downtime. In projects such as VMM Fast Restart[1], where guest state is preserved across kexec reboot, it helps prevent application and network timeouts in the guests. Multithread to take full advantage of system memory bandwidth. The maximum number of threads is capped at the number of CPUs on the node because speedups always improve with additional threads on every system tested, and at this phase of boot, the system is otherwise idle and waiting on page init to finish. Helper threads operate on section-aligned ranges to both avoid false sharing when setting the pageblock's migrate type and to avoid accessing uninitialized buddy pages, though max order alignment is enough for the latter. The minimum chunk size is also a section. 
There was benefit to using multiple threads even on relatively small memory (1G) systems, and this is the smallest size that the alignment allows. The time (milliseconds) is the slowest node to initialize since boot blocks until all nodes finish. intel_pstate is loaded in active mode without hwp and with turbo enabled, and intel_idle is active as well.

    Intel(R) Xeon(R) Platinum 8167M CPU @ 2.00GHz (Skylake, bare metal)
      2 nodes * 26 cores * 2 threads = 104 CPUs
      384G/node = 768G memory

                   kernel boot                 deferred init
                   ------------------------    ------------------------
      node% (thr)  speedup  time_ms (stdev)    speedup  time_ms (stdev)
           (  0)       --   4089.7 (  8.1)         --   1785.7 (  7.6)
        2% (  1)     1.7%   4019.3 (  1.5)       3.8%   1717.7 ( 11.8)
       12% (  6)    34.9%   2662.7 (  2.9)      79.9%    359.3 (  0.6)
       25% ( 13)    39.9%   2459.0 (  3.6)      91.2%    157.0 (  0.0)
       37% ( 19)    39.2%   2485.0 ( 29.7)      90.4%    172.0 ( 28.6)
       50% ( 26)    39.3%   2482.7 ( 25.7)      90.3%    173.7 ( 30.0)
       75% ( 39)    39.0%   2495.7 (  5.5)      89.4%    190.0 (  1.0)
      100% ( 52)    40.2%   2443.7 (  3.8)      92.3%    138.0 (  1.0)

    Intel(R) Xeon(R) CPU E5-2699C v4 @ 2.20GHz (Broadwell, kvm guest)
      1 node * 16 cores * 2 threads = 32 CPUs
      192G/node = 192G memory

                   kernel boot                 deferred init
                   ------------------------    ------------------------
      node% (thr)  speedup  time_ms (stdev)    speedup  time_ms (stdev)
           (  0)       --   1988.7 (  9.6)         --   1096.0 ( 11.5)
        3% (  1)     1.1%   1967.0 ( 17.6)       0.3%   1092.7 ( 11.0)
       12% (  4)    41.1%   1170.3 ( 14.2)      73.8%    287.0 (  3.6)
       25% (  8)    47.1%   1052.7 ( 21.9)      83.9%    177.0 ( 13.5)
       38% ( 12)    48.9%   1016.3 ( 12.1)      86.8%    144.7 (  1.5)
       50% ( 16)    48.9%   1015.7 (  8.1)      87.8%    134.0 (  4.4)
       75% ( 24)    49.1%   1012.3 (  3.1)      88.1%    130.3 (  2.3)
      100% ( 32)    49.5%   1004.0 (  5.3)      88.5%    125.7 (  2.1)

    Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz (Haswell, bare metal)
      2 nodes * 18 cores * 2 threads = 72 CPUs
      128G/node = 256G memory

                   kernel boot                 deferred init
                   ------------------------    ------------------------
      node% (thr)  speedup  time_ms (stdev)    speedup  time_ms (stdev)
           (  0)       --   1680.0 (  4.6)         --    627.0 (  4.0)
        3% (  1)     0.3%   1675.7 (  4.5)      -0.2%    628.0 (  3.6)
       11% (  4)    25.6%   1250.7 (  2.1)      67.9%    201.0 (  0.0)
       25% (  9)    30.7%   1164.0 ( 17.3)      81.8%    114.3 ( 17.7)
       36% ( 13)    31.4%   1152.7 ( 10.8)      84.0%    100.3 ( 17.9)
       50% ( 18)    31.5%   1150.7 (  9.3)      83.9%    101.0 ( 14.1)
       75% ( 27)    31.7%   1148.0 (  5.6)      84.5%     97.3 (  6.4)
      100% ( 36)    32.0%   1142.3 (  4.0)      85.6%     90.0 (  1.0)

    AMD EPYC 7551 32-Core Processor (Zen, kvm guest)
      1 node * 8 cores * 2 threads = 16 CPUs
      64G/node = 64G memory

                   kernel boot                 deferred init
                   ------------------------    ------------------------
      node% (thr)  speedup  time_ms (stdev)    speedup  time_ms (stdev)
           (  0)       --   1029.3 ( 25.1)         --    240.7 (  1.5)
        6% (  1)    -0.6%   1036.0 (  7.8)      -2.2%    246.0 (  0.0)
       12% (  2)    11.8%    907.7 (  8.6)      44.7%    133.0 (  1.0)
       25% (  4)    13.9%    886.0 ( 10.6)      62.6%     90.0 (  6.0)
       38% (  6)    17.8%    845.7 ( 14.2)      69.1%     74.3 (  3.8)
       50% (  8)    16.8%    856.0 ( 22.1)      72.9%     65.3 (  5.7)
       75% ( 12)    15.4%    871.0 ( 29.2)      79.8%     48.7 (  7.4)
      100% ( 16)    21.0%    813.7 ( 21.0)      80.5%     47.0 (  5.2)

Server-oriented distros that enable deferred page init sometimes run in small VMs, and they still benefit even though the fraction of boot time saved is smaller:

    AMD EPYC 7551 32-Core Processor (Zen, kvm guest)
      1 node * 2 cores * 2 threads = 4 CPUs
      16G/node = 16G memory

                   kernel boot                 deferred init
                   ------------------------    ------------------------
      node% (thr)  speedup  time_ms (stdev)    speedup  time_ms (stdev)
           (  0)       --    716.0 ( 14.0)         --     49.7 (  0.6)
       25% (  1)     1.8%    703.0 (  5.3)      -4.0%     51.7 (  0.6)
       50% (  2)     1.6%    704.7 (  1.2)      43.0%     28.3 (  0.6)
       75% (  3)     2.7%    696.7 ( 13.1)      49.7%     25.0 (  0.0)
      100% (  4)     4.1%    687.0 ( 10.4)      55.7%     22.0 (  0.0)

    Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz (Haswell, kvm guest)
      1 node * 2 cores * 2 threads = 4 CPUs
      14G/node = 14G memory

                   kernel boot                 deferred init
                   ------------------------    ------------------------
      node% (thr)  speedup  time_ms (stdev)    speedup  time_ms (stdev)
           (  0)       --    787.7 (  6.4)         --    122.3 (  0.6)
       25% (  1)     0.2%    786.3 ( 10.8)      -2.5%    125.3 (  2.1)
       50% (  2)     5.9%    741.0 ( 13.9)      37.6%     76.3 ( 19.7)
       75% (  3)     8.3%    722.0 ( 19.0)      49.9%     61.3 (  3.2)
      100% (  4)     9.3%    714.7 (  9.5)      56.4%     53.3 (  1.5)

On Josh's 96-CPU and 192G memory system:

    Without this patch series:
    [    0.487132] node 0 initialised, 23398907 pages in 292ms
    [    0.499132] node 1 initialised, 24189223 pages in 304ms
    ...
    [    0.629376] Run /sbin/init as init process

    With this patch series:
    [    0.231435] node 1 initialised, 24189223 pages in 32ms
    [    0.236718] node 0 initialised, 23398907 pages in 36ms

[1] https://static.sched.com/hosted_files/kvmforum2019/66/VMM-fast-restart_kvmforum2019.pdf

Link: http://lkml.kernel.org/r/20200527173608.2885243-7-daniel.m.jordan@oracle.com Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Tested-by: Josh Triplett <josh@joshtriplett.org> Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com> Cc: Alex Williamson <alex.williamson@redhat.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: David Hildenbrand <david@redhat.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Kirill Tkhai <ktkhai@virtuozzo.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Pavel Machek <pavel@ucw.cz> Cc: Pavel Tatashin <pasha.tatashin@soleen.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Robert Elliott <elliott@hpe.com> Cc: Shile Zhang <shile.zhang@linux.alibaba.com> Cc: Steffen Klassert <steffen.klassert@secunet.com> Cc: Steven Sistare <steven.sistare@oracle.com> Cc: Tejun Heo <tj@kernel.org> Cc: Zi Yan <ziy@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/Kconfig | 6 +++--- mm/page_alloc.c | 46 ++++++++++++++++++++++++++++++++++++++++------ 2 files changed, 43 insertions(+), 9 deletions(-) --- a/mm/Kconfig~mm-parallelize-deferred_init_memmap +++ a/mm/Kconfig @@ -747,13 +747,13 @@ config DEFERRED_STRUCT_PAGE_INIT depends on SPARSEMEM depends on !NEED_PER_CPU_KM depends on 64BIT + select PADATA help Ordinarily all struct pages are initialised during early boot in
a single thread. On very large machines this can take a considerable amount of time. If this option is set, large machines will bring up - a subset of memmap at boot and then initialise the rest in parallel - by starting one-off "pgdatinitX" kernel thread for each node X. This - has a potential performance impact on processes running early in the + a subset of memmap at boot and then initialise the rest in parallel. + This has a potential performance impact on tasks running early in the lifetime of the system until these kthreads finish the initialisation. --- a/mm/page_alloc.c~mm-parallelize-deferred_init_memmap +++ a/mm/page_alloc.c @@ -68,6 +68,7 @@ #include <linux/lockdep.h> #include <linux/nmi.h> #include <linux/psi.h> +#include <linux/padata.h> #include <asm/sections.h> #include <asm/tlbflush.h> @@ -1815,6 +1816,26 @@ deferred_init_maxorder(u64 *i, struct zo return nr_pages; } +static void __init +deferred_init_memmap_chunk(unsigned long start_pfn, unsigned long end_pfn, + void *arg) +{ + unsigned long spfn, epfn; + struct zone *zone = arg; + u64 i; + + deferred_init_mem_pfn_range_in_zone(&i, zone, &spfn, &epfn, start_pfn); + + /* + * Initialize and free pages in MAX_ORDER sized increments so that we + * can avoid introducing any issues with the buddy allocator. + */ + while (spfn < end_pfn) { + deferred_init_maxorder(&i, zone, &spfn, &epfn); + cond_resched(); + } +} + /* Initialise remaining memory on a node */ static int __init deferred_init_memmap(void *data) { @@ -1824,7 +1845,7 @@ static int __init deferred_init_memmap(v unsigned long first_init_pfn, flags; unsigned long start = jiffies; struct zone *zone; - int zid; + int zid, max_threads; u64 i; /* Bind memory initialisation thread to a local node if possible */ @@ -1864,13 +1885,26 @@ static int __init deferred_init_memmap(v goto zone_empty; /* - * Initialize and free pages in MAX_ORDER sized increments so - * that we can avoid introducing any issues with the buddy - * allocator. 
+ * More CPUs always led to greater speedups on tested systems, up to + * all the nodes' CPUs. Use all since the system is otherwise idle now. */ + max_threads = max(cpumask_weight(cpumask), 1u); + while (spfn < epfn) { - deferred_init_maxorder(&i, zone, &spfn, &epfn); - cond_resched(); + unsigned long epfn_align = ALIGN(epfn, PAGES_PER_SECTION); + struct padata_mt_job job = { + .thread_fn = deferred_init_memmap_chunk, + .fn_arg = zone, + .start = spfn, + .size = epfn_align - spfn, + .align = PAGES_PER_SECTION, + .min_chunk = PAGES_PER_SECTION, + .max_threads = max_threads, + }; + + padata_do_multithreaded(&job); + deferred_init_mem_pfn_range_in_zone(&i, zone, &spfn, &epfn, + epfn_align); } zone_empty: /* Sanity check that the next zone really is unpopulated */ _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 058/131] mm: make deferred init's max threads arch-specific 2020-06-03 22:55 incoming Andrew Morton ` (56 preceding siblings ...) 2020-06-03 22:59 ` [patch 057/131] mm: parallelize deferred_init_memmap() Andrew Morton @ 2020-06-03 22:59 ` Andrew Morton 2020-06-03 22:59 ` [patch 059/131] padata: document multithreaded jobs Andrew Morton ` (73 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:59 UTC (permalink / raw) To: akpm, alex.williamson, alexander.h.duyck, corbet, dan.j.williams, daniel.m.jordan, dave.hansen, david, elliott, herbert, jgg, josh, ktkhai, linux-mm, mhocko, mm-commits, pasha.tatashin, pavel, peterz, rdunlap, shile.zhang, steffen.klassert, steven.sistare, tj, torvalds, ziy From: Daniel Jordan <daniel.m.jordan@oracle.com> Subject: mm: make deferred init's max threads arch-specific Using padata during deferred init has only been tested on x86, so for now limit it to this architecture. If another arch wants this, it can find the max thread limit that's best for it and override deferred_page_init_max_threads(). 
Link: http://lkml.kernel.org/r/20200527173608.2885243-8-daniel.m.jordan@oracle.com Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Tested-by: Josh Triplett <josh@joshtriplett.org> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com> Cc: Alex Williamson <alex.williamson@redhat.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: David Hildenbrand <david@redhat.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Kirill Tkhai <ktkhai@virtuozzo.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Pavel Machek <pavel@ucw.cz> Cc: Pavel Tatashin <pasha.tatashin@soleen.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Robert Elliott <elliott@hpe.com> Cc: Shile Zhang <shile.zhang@linux.alibaba.com> Cc: Steffen Klassert <steffen.klassert@secunet.com> Cc: Steven Sistare <steven.sistare@oracle.com> Cc: Tejun Heo <tj@kernel.org> Cc: Zi Yan <ziy@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/x86/mm/init_64.c | 12 ++++++++++++ include/linux/memblock.h | 3 +++ mm/page_alloc.c | 13 ++++++++----- 3 files changed, 23 insertions(+), 5 deletions(-) --- a/arch/x86/mm/init_64.c~mm-make-deferred-inits-max-threads-arch-specific +++ a/arch/x86/mm/init_64.c @@ -1265,6 +1265,18 @@ void __init mem_init(void) mem_init_print_info(NULL); } +#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT +int __init deferred_page_init_max_threads(const struct cpumask *node_cpumask) +{ + /* + * More CPUs always led to greater speedups on tested systems, up to + * all the nodes' CPUs. Use all since the system is otherwise idle + * now. 
+ */ + return max_t(int, cpumask_weight(node_cpumask), 1); +} +#endif + int kernel_set_to_readonly; void mark_rodata_ro(void) --- a/include/linux/memblock.h~mm-make-deferred-inits-max-threads-arch-specific +++ a/include/linux/memblock.h @@ -273,6 +273,9 @@ void __next_mem_pfn_range_in_zone(u64 *i #define for_each_free_mem_pfn_range_in_zone_from(i, zone, p_start, p_end) \ for (; i != U64_MAX; \ __next_mem_pfn_range_in_zone(&i, zone, p_start, p_end)) + +int __init deferred_page_init_max_threads(const struct cpumask *node_cpumask); + #endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */ /** --- a/mm/page_alloc.c~mm-make-deferred-inits-max-threads-arch-specific +++ a/mm/page_alloc.c @@ -1836,6 +1836,13 @@ deferred_init_memmap_chunk(unsigned long } } +/* An arch may override for more concurrency. */ +__weak int __init +deferred_page_init_max_threads(const struct cpumask *node_cpumask) +{ + return 1; +} + /* Initialise remaining memory on a node */ static int __init deferred_init_memmap(void *data) { @@ -1884,11 +1891,7 @@ static int __init deferred_init_memmap(v first_init_pfn)) goto zone_empty; - /* - * More CPUs always led to greater speedups on tested systems, up to - * all the nodes' CPUs. Use all since the system is otherwise idle now. - */ - max_threads = max(cpumask_weight(cpumask), 1u); + max_threads = deferred_page_init_max_threads(cpumask); while (spfn < epfn) { unsigned long epfn_align = ALIGN(epfn, PAGES_PER_SECTION); _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 059/131] padata: document multithreaded jobs 2020-06-03 22:55 incoming Andrew Morton ` (57 preceding siblings ...) 2020-06-03 22:59 ` [patch 058/131] mm: make deferred init's max threads arch-specific Andrew Morton @ 2020-06-03 22:59 ` Andrew Morton 2020-06-03 23:00 ` [patch 060/131] mm/page_alloc.c: add missing newline Andrew Morton ` (72 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 22:59 UTC (permalink / raw) To: akpm, alex.williamson, alexander.h.duyck, corbet, dan.j.williams, daniel.m.jordan, dave.hansen, david, elliott, herbert, jgg, josh, ktkhai, linux-mm, mhocko, mm-commits, pasha.tatashin, pavel, peterz, rdunlap, shile.zhang, steffen.klassert, steven.sistare, tj, torvalds, ziy From: Daniel Jordan <daniel.m.jordan@oracle.com> Subject: padata: document multithreaded jobs Add Documentation for multithreaded jobs. Link: http://lkml.kernel.org/r/20200527173608.2885243-9-daniel.m.jordan@oracle.com Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Tested-by: Josh Triplett <josh@joshtriplett.org> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com> Cc: Alex Williamson <alex.williamson@redhat.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: David Hildenbrand <david@redhat.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Kirill Tkhai <ktkhai@virtuozzo.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Pavel Machek <pavel@ucw.cz> Cc: Pavel Tatashin <pasha.tatashin@soleen.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Robert Elliott <elliott@hpe.com> Cc: Shile Zhang <shile.zhang@linux.alibaba.com> Cc: Steffen Klassert <steffen.klassert@secunet.com> Cc: Steven Sistare <steven.sistare@oracle.com> Cc: Tejun Heo <tj@kernel.org> Cc: Zi Yan <ziy@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- 
Documentation/core-api/padata.rst | 41 +++++++++++++++++++++------- 1 file changed, 31 insertions(+), 10 deletions(-) --- a/Documentation/core-api/padata.rst~padata-document-multithreaded-jobs +++ a/Documentation/core-api/padata.rst @@ -4,23 +4,26 @@ The padata parallel execution mechanism ======================================= -:Date: December 2019 +:Date: May 2020 Padata is a mechanism by which the kernel can farm jobs out to be done in -parallel on multiple CPUs while retaining their ordering. It was developed for -use with the IPsec code, which needs to be able to perform encryption and -decryption on large numbers of packets without reordering those packets. The -crypto developers made a point of writing padata in a sufficiently general -fashion that it could be put to other uses as well. +parallel on multiple CPUs while optionally retaining their ordering. -Usage -===== +It was originally developed for IPsec, which needs to perform encryption and +decryption on large numbers of packets without reordering those packets. This +is currently the sole consumer of padata's serialized job support. + +Padata also supports multithreaded jobs, splitting up the job evenly while load +balancing and coordinating between threads. + +Running Serialized Jobs +======================= Initializing ------------ -The first step in using padata is to set up a padata_instance structure for -overall control of how jobs are to be run:: +The first step in using padata to run serialized jobs is to set up a +padata_instance structure for overall control of how jobs are to be run:: #include <linux/padata.h> @@ -162,6 +165,24 @@ functions that correspond to the allocat It is the user's responsibility to ensure all outstanding jobs are complete before any of the above are called. 
+Running Multithreaded Jobs +========================== + +A multithreaded job has a main thread and zero or more helper threads, with the +main thread participating in the job and then waiting until all helpers have +finished. padata splits the job into units called chunks, where a chunk is a +piece of the job that one thread completes in one call to the thread function. + +A user has to do three things to run a multithreaded job. First, describe the +job by defining a padata_mt_job structure, which is explained in the Interface +section. This includes a pointer to the thread function, which padata will +call each time it assigns a job chunk to a thread. Then, define the thread +function, which accepts three arguments, ``start``, ``end``, and ``arg``, where +the first two delimit the range that the thread operates on and the last is a +pointer to the job's shared state, if any. Prepare the shared state, which is +typically allocated on the main thread's stack. Last, call +padata_do_multithreaded(), which will return once the job is finished. + Interface ========= _ ^ permalink raw reply [flat|nested] 349+ messages in thread
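[Editor's note] The three documented steps can be put together in a caller sketch. This is kernel code, not standalone-buildable; `struct widget`, `init_widget()`, `process_chunk()`, and the chunk sizes are illustrative, while the `padata_mt_job` field names match the structure added earlier in this series:

```c
#include <linux/padata.h>

struct job_state {
	struct widget *widgets;		/* shared, read-mostly job state */
};

/* Step 2: the thread function. padata calls it once per chunk with the
 * [start, end) range of units that chunk covers. */
static void process_chunk(unsigned long start, unsigned long end, void *arg)
{
	struct job_state *state = arg;
	unsigned long i;

	for (i = start; i < end; i++)
		init_widget(&state->widgets[i]);
}

static void run_job(struct widget *widgets, unsigned long nr)
{
	/* Step 1: describe the job; the shared state typically lives on
	 * the main thread's stack, as the documentation notes. */
	struct job_state state = { .widgets = widgets };
	struct padata_mt_job job = {
		.thread_fn   = process_chunk,
		.fn_arg      = &state,
		.start       = 0,
		.size        = nr,
		.align       = 1,	/* no chunk alignment requirement */
		.min_chunk   = 64,	/* smallest chunk worth a thread */
		.max_threads = 4,
	};

	/* Step 3: returns only once every chunk has been processed, so
	 * the stack-allocated state stays valid for the helpers. */
	padata_do_multithreaded(&job);
}
```

The deferred page init conversion earlier in the series is the first in-tree user of exactly this pattern, with section-aligned `align`/`min_chunk` values.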
* [patch 060/131] mm/page_alloc.c: add missing newline 2020-06-03 22:55 incoming Andrew Morton ` (58 preceding siblings ...) 2020-06-03 22:59 ` [patch 059/131] padata: document multithreaded jobs Andrew Morton @ 2020-06-03 23:00 ` Andrew Morton 2020-06-03 23:00 ` [patch 061/131] khugepaged: add self test Andrew Morton ` (71 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:00 UTC (permalink / raw) To: akpm, chentao107, linux-mm, mm-commits, torvalds From: Chen Tao <chentao107@huawei.com> Subject: mm/page_alloc.c: add missing newline Add missing line breaks on pr_warn(). Link: http://lkml.kernel.org/r/20200603063547.235825-1-chentao107@huawei.com Signed-off-by: Chen Tao <chentao107@huawei.com> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/page_alloc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) --- a/mm/page_alloc.c~mm-page_allocc-add-missing-line-breaks +++ a/mm/page_alloc.c @@ -7182,7 +7182,7 @@ static void __init find_zone_movable_pfn } if (mem_below_4gb_not_mirrored) - pr_warn("This configuration results in unmirrored kernel memory."); + pr_warn("This configuration results in unmirrored kernel memory.\n"); goto out2; } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 061/131] khugepaged: add self test 2020-06-03 22:55 incoming Andrew Morton ` (59 preceding siblings ...) 2020-06-03 23:00 ` [patch 060/131] mm/page_alloc.c: add missing newline Andrew Morton @ 2020-06-03 23:00 ` Andrew Morton 2020-06-03 23:00 ` [patch 062/131] khugepaged: do not stop collapse if less than half PTEs are referenced Andrew Morton ` (70 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:00 UTC (permalink / raw) To: aarcange, akpm, aneesh.kumar, colin.king, jhubbard, kirill.shutemov, linux-mm, mike.kravetz, mm-commits, rcampbell, torvalds, william.kucharski, yang.shi, ziy From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Subject: khugepaged: add self test Patch series "thp/khugepaged improvements and CoW semantics", v4. The patchset adds khugepaged selftest (anon-THP only for now), expands cases khugepaged can handle and switches anon-THP copy-on-write handling to 4k. This patch (of 8): The test checks if khugepaged is able to recover huge page where we expect to do so. It only covers anon-THP for now. Currently the test shows few failures. They are going to be addressed by the following patches. [colin.king@canonical.com: fix several spelling mistakes] Link: http://lkml.kernel.org/r/20200420084241.65433-1-colin.king@canonical.com [aneesh.kumar@linux.ibm.com: replace the usage of system(3) in the test] Link: http://lkml.kernel.org/r/20200429110727.89388-1-aneesh.kumar@linux.ibm.com [kirill@shutemov.name: fixup for issues I've noticed] Link: http://lkml.kernel.org/r/20200429124816.jp272trghrzxx5j5@box [jhubbard@nvidia.com: add khugepaged to .gitignore] Link: http://lkml.kernel.org/r/20200517002509.362401-1-jhubbard@nvidia.com Link: http://lkml.kernel.org/r/20200416160026.16538-1-kirill.shutemov@linux.intel.com Link: http://lkml.kernel.org/r/20200416160026.16538-2-kirill.shutemov@linux.intel.com Signed-off-by: Kirill A. 
Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: John Hubbard <jhubbard@nvidia.com> Reviewed-by: William Kucharski <william.kucharski@oracle.com> Reviewed-by: Zi Yan <ziy@nvidia.com> Tested-by: Zi Yan <ziy@nvidia.com> Acked-by: Yang Shi <yang.shi@linux.alibaba.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Ralph Campbell <rcampbell@nvidia.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: William Kucharski <william.kucharski@oracle.com> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- tools/testing/selftests/vm/.gitignore | 1 tools/testing/selftests/vm/Makefile | 1 tools/testing/selftests/vm/khugepaged.c | 952 ++++++++++++++++++++++ 3 files changed, 954 insertions(+) --- a/tools/testing/selftests/vm/.gitignore~khugepaged-add-self-test +++ a/tools/testing/selftests/vm/.gitignore @@ -1,6 +1,7 @@ # SPDX-License-Identifier: GPL-2.0-only hugepage-mmap hugepage-shm +khugepaged map_hugetlb map_populate thuge-gen --- /dev/null +++ a/tools/testing/selftests/vm/khugepaged.c @@ -0,0 +1,952 @@ +#define _GNU_SOURCE +#include <fcntl.h> +#include <limits.h> +#include <signal.h> +#include <stdio.h> +#include <stdlib.h> +#include <stdbool.h> +#include <string.h> +#include <unistd.h> + +#include <sys/mman.h> +#include <sys/wait.h> + +#ifndef MADV_PAGEOUT +#define MADV_PAGEOUT 21 +#endif + +#define BASE_ADDR ((void *)(1UL << 30)) +static unsigned long hpage_pmd_size; +static unsigned long page_size; +static int hpage_pmd_nr; + +#define THP_SYSFS "/sys/kernel/mm/transparent_hugepage/" +#define PID_SMAPS "/proc/self/smaps" + +enum thp_enabled { + THP_ALWAYS, + THP_MADVISE, + THP_NEVER, +}; + +static const char *thp_enabled_strings[] = { + "always", + "madvise", + "never", + NULL +}; + +enum thp_defrag { + THP_DEFRAG_ALWAYS, + THP_DEFRAG_DEFER, + 
THP_DEFRAG_DEFER_MADVISE, + THP_DEFRAG_MADVISE, + THP_DEFRAG_NEVER, +}; + +static const char *thp_defrag_strings[] = { + "always", + "defer", + "defer+madvise", + "madvise", + "never", + NULL +}; + +enum shmem_enabled { + SHMEM_ALWAYS, + SHMEM_WITHIN_SIZE, + SHMEM_ADVISE, + SHMEM_NEVER, + SHMEM_DENY, + SHMEM_FORCE, +}; + +static const char *shmem_enabled_strings[] = { + "always", + "within_size", + "advise", + "never", + "deny", + "force", + NULL +}; + +struct khugepaged_settings { + bool defrag; + unsigned int alloc_sleep_millisecs; + unsigned int scan_sleep_millisecs; + unsigned int max_ptes_none; + unsigned int max_ptes_swap; + unsigned long pages_to_scan; +}; + +struct settings { + enum thp_enabled thp_enabled; + enum thp_defrag thp_defrag; + enum shmem_enabled shmem_enabled; + bool debug_cow; + bool use_zero_page; + struct khugepaged_settings khugepaged; +}; + +static struct settings default_settings = { + .thp_enabled = THP_MADVISE, + .thp_defrag = THP_DEFRAG_ALWAYS, + .shmem_enabled = SHMEM_NEVER, + .debug_cow = 0, + .use_zero_page = 0, + .khugepaged = { + .defrag = 1, + .alloc_sleep_millisecs = 10, + .scan_sleep_millisecs = 10, + }, +}; + +static struct settings saved_settings; +static bool skip_settings_restore; + +static int exit_status; + +static void success(const char *msg) +{ + printf(" \e[32m%s\e[0m\n", msg); +} + +static void fail(const char *msg) +{ + printf(" \e[31m%s\e[0m\n", msg); + exit_status++; +} + +static int read_file(const char *path, char *buf, size_t buflen) +{ + int fd; + ssize_t numread; + + fd = open(path, O_RDONLY); + if (fd == -1) + return 0; + + numread = read(fd, buf, buflen - 1); + if (numread < 1) { + close(fd); + return 0; + } + + buf[numread] = '\0'; + close(fd); + + return (unsigned int) numread; +} + +static int write_file(const char *path, const char *buf, size_t buflen) +{ + int fd; + ssize_t numwritten; + + fd = open(path, O_WRONLY); + if (fd == -1) + return 0; + + numwritten = write(fd, buf, buflen - 1); + close(fd); + 
if (numwritten < 1) + return 0; + + return (unsigned int) numwritten; +} + +static int read_string(const char *name, const char *strings[]) +{ + char path[PATH_MAX]; + char buf[256]; + char *c; + int ret; + + ret = snprintf(path, PATH_MAX, THP_SYSFS "%s", name); + if (ret >= PATH_MAX) { + printf("%s: Pathname is too long\n", __func__); + exit(EXIT_FAILURE); + } + + if (!read_file(path, buf, sizeof(buf))) { + perror(path); + exit(EXIT_FAILURE); + } + + c = strchr(buf, '['); + if (!c) { + printf("%s: Parse failure\n", __func__); + exit(EXIT_FAILURE); + } + + c++; + memmove(buf, c, sizeof(buf) - (c - buf)); + + c = strchr(buf, ']'); + if (!c) { + printf("%s: Parse failure\n", __func__); + exit(EXIT_FAILURE); + } + *c = '\0'; + + ret = 0; + while (strings[ret]) { + if (!strcmp(strings[ret], buf)) + return ret; + ret++; + } + + printf("Failed to parse %s\n", name); + exit(EXIT_FAILURE); +} + +static void write_string(const char *name, const char *val) +{ + char path[PATH_MAX]; + int ret; + + ret = snprintf(path, PATH_MAX, THP_SYSFS "%s", name); + if (ret >= PATH_MAX) { + printf("%s: Pathname is too long\n", __func__); + exit(EXIT_FAILURE); + } + + if (!write_file(path, val, strlen(val) + 1)) { + perror(path); + exit(EXIT_FAILURE); + } +} + +static const unsigned long read_num(const char *name) +{ + char path[PATH_MAX]; + char buf[21]; + int ret; + + ret = snprintf(path, PATH_MAX, THP_SYSFS "%s", name); + if (ret >= PATH_MAX) { + printf("%s: Pathname is too long\n", __func__); + exit(EXIT_FAILURE); + } + + ret = read_file(path, buf, sizeof(buf)); + if (ret < 0) { + perror("read_file(read_num)"); + exit(EXIT_FAILURE); + } + + return strtoul(buf, NULL, 10); +} + +static void write_num(const char *name, unsigned long num) +{ + char path[PATH_MAX]; + char buf[21]; + int ret; + + ret = snprintf(path, PATH_MAX, THP_SYSFS "%s", name); + if (ret >= PATH_MAX) { + printf("%s: Pathname is too long\n", __func__); + exit(EXIT_FAILURE); + } + + sprintf(buf, "%ld", num); + if 
(!write_file(path, buf, strlen(buf) + 1)) { + perror(path); + exit(EXIT_FAILURE); + } +} + +static void write_settings(struct settings *settings) +{ + struct khugepaged_settings *khugepaged = &settings->khugepaged; + + write_string("enabled", thp_enabled_strings[settings->thp_enabled]); + write_string("defrag", thp_defrag_strings[settings->thp_defrag]); + write_string("shmem_enabled", + shmem_enabled_strings[settings->shmem_enabled]); + write_num("debug_cow", settings->debug_cow); + write_num("use_zero_page", settings->use_zero_page); + + write_num("khugepaged/defrag", khugepaged->defrag); + write_num("khugepaged/alloc_sleep_millisecs", + khugepaged->alloc_sleep_millisecs); + write_num("khugepaged/scan_sleep_millisecs", + khugepaged->scan_sleep_millisecs); + write_num("khugepaged/max_ptes_none", khugepaged->max_ptes_none); + write_num("khugepaged/max_ptes_swap", khugepaged->max_ptes_swap); + write_num("khugepaged/pages_to_scan", khugepaged->pages_to_scan); +} + +static void restore_settings(int sig) +{ + if (skip_settings_restore) + goto out; + + printf("Restore THP and khugepaged settings..."); + write_settings(&saved_settings); + success("OK"); + if (sig) + exit(EXIT_FAILURE); +out: + exit(exit_status); +} + +static void save_settings(void) +{ + printf("Save THP and khugepaged settings..."); + saved_settings = (struct settings) { + .thp_enabled = read_string("enabled", thp_enabled_strings), + .thp_defrag = read_string("defrag", thp_defrag_strings), + .shmem_enabled = + read_string("shmem_enabled", shmem_enabled_strings), + .debug_cow = read_num("debug_cow"), + .use_zero_page = read_num("use_zero_page"), + }; + saved_settings.khugepaged = (struct khugepaged_settings) { + .defrag = read_num("khugepaged/defrag"), + .alloc_sleep_millisecs = + read_num("khugepaged/alloc_sleep_millisecs"), + .scan_sleep_millisecs = + read_num("khugepaged/scan_sleep_millisecs"), + .max_ptes_none = read_num("khugepaged/max_ptes_none"), + .max_ptes_swap = 
read_num("khugepaged/max_ptes_swap"), + .pages_to_scan = read_num("khugepaged/pages_to_scan"), + }; + success("OK"); + + signal(SIGTERM, restore_settings); + signal(SIGINT, restore_settings); + signal(SIGHUP, restore_settings); + signal(SIGQUIT, restore_settings); +} + +static void adjust_settings(void) +{ + + printf("Adjust settings..."); + write_settings(&default_settings); + success("OK"); +} + +#define MAX_LINE_LENGTH 500 + +static bool check_for_pattern(FILE *fp, char *pattern, char *buf) +{ + while (fgets(buf, MAX_LINE_LENGTH, fp) != NULL) { + if (!strncmp(buf, pattern, strlen(pattern))) + return true; + } + return false; +} + +static bool check_huge(void *addr) +{ + bool thp = false; + int ret; + FILE *fp; + char buffer[MAX_LINE_LENGTH]; + char addr_pattern[MAX_LINE_LENGTH]; + + ret = snprintf(addr_pattern, MAX_LINE_LENGTH, "%08lx-", + (unsigned long) addr); + if (ret >= MAX_LINE_LENGTH) { + printf("%s: Pattern is too long\n", __func__); + exit(EXIT_FAILURE); + } + + + fp = fopen(PID_SMAPS, "r"); + if (!fp) { + printf("%s: Failed to open file %s\n", __func__, PID_SMAPS); + exit(EXIT_FAILURE); + } + if (!check_for_pattern(fp, addr_pattern, buffer)) + goto err_out; + + ret = snprintf(addr_pattern, MAX_LINE_LENGTH, "AnonHugePages:%10ld kB", + hpage_pmd_size >> 10); + if (ret >= MAX_LINE_LENGTH) { + printf("%s: Pattern is too long\n", __func__); + exit(EXIT_FAILURE); + } + /* + * Fetch the AnonHugePages: in the same block and check whether it got + * the expected number of hugeepages next. 
+ */ + if (!check_for_pattern(fp, "AnonHugePages:", buffer)) + goto err_out; + + if (strncmp(buffer, addr_pattern, strlen(addr_pattern))) + goto err_out; + + thp = true; +err_out: + fclose(fp); + return thp; +} + + +static bool check_swap(void *addr, unsigned long size) +{ + bool swap = false; + int ret; + FILE *fp; + char buffer[MAX_LINE_LENGTH]; + char addr_pattern[MAX_LINE_LENGTH]; + + ret = snprintf(addr_pattern, MAX_LINE_LENGTH, "%08lx-", + (unsigned long) addr); + if (ret >= MAX_LINE_LENGTH) { + printf("%s: Pattern is too long\n", __func__); + exit(EXIT_FAILURE); + } + + + fp = fopen(PID_SMAPS, "r"); + if (!fp) { + printf("%s: Failed to open file %s\n", __func__, PID_SMAPS); + exit(EXIT_FAILURE); + } + if (!check_for_pattern(fp, addr_pattern, buffer)) + goto err_out; + + ret = snprintf(addr_pattern, MAX_LINE_LENGTH, "Swap:%19ld kB", + size >> 10); + if (ret >= MAX_LINE_LENGTH) { + printf("%s: Pattern is too long\n", __func__); + exit(EXIT_FAILURE); + } + /* + * Fetch the Swap: in the same block and check whether it got + * the expected amount of swap next. 
+ */ + if (!check_for_pattern(fp, "Swap:", buffer)) + goto err_out; + + if (strncmp(buffer, addr_pattern, strlen(addr_pattern))) + goto err_out; + + swap = true; +err_out: + fclose(fp); + return swap; +} + +static void *alloc_mapping(void) +{ + void *p; + + p = mmap(BASE_ADDR, hpage_pmd_size, PROT_READ | PROT_WRITE, + MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); + if (p != BASE_ADDR) { + printf("Failed to allocate VMA at %p\n", BASE_ADDR); + exit(EXIT_FAILURE); + } + + return p; +} + +static void fill_memory(int *p, unsigned long start, unsigned long end) +{ + int i; + + for (i = start / page_size; i < end / page_size; i++) + p[i * page_size / sizeof(*p)] = i + 0xdead0000; +} + +static void validate_memory(int *p, unsigned long start, unsigned long end) +{ + int i; + + for (i = start / page_size; i < end / page_size; i++) { + if (p[i * page_size / sizeof(*p)] != i + 0xdead0000) { + printf("Page %d is corrupted: %#x\n", + i, p[i * page_size / sizeof(*p)]); + exit(EXIT_FAILURE); + } + } +} + +#define TICK 500000 +static bool wait_for_scan(const char *msg, char *p) +{ + int full_scans; + int timeout = 6; /* 3 seconds */ + + /* Sanity check */ + if (check_huge(p)) { + printf("Unexpected huge page\n"); + exit(EXIT_FAILURE); + } + + madvise(p, hpage_pmd_size, MADV_HUGEPAGE); + + /* Wait until the second full_scan completed */ + full_scans = read_num("khugepaged/full_scans") + 2; + + printf("%s...", msg); + while (timeout--) { + if (check_huge(p)) + break; + if (read_num("khugepaged/full_scans") >= full_scans) + break; + printf("."); + usleep(TICK); + } + + madvise(p, hpage_pmd_size, MADV_NOHUGEPAGE); + + return !timeout; +} + +static void alloc_at_fault(void) +{ + struct settings settings = default_settings; + char *p; + + settings.thp_enabled = THP_ALWAYS; + write_settings(&settings); + + p = alloc_mapping(); + *p = 1; + printf("Allocate huge page on fault..."); + if (check_huge(p)) + success("OK"); + else + fail("Fail"); + + write_settings(&default_settings); + + madvise(p, 
page_size, MADV_DONTNEED); + printf("Split huge PMD on MADV_DONTNEED..."); + if (!check_huge(p)) + success("OK"); + else + fail("Fail"); + munmap(p, hpage_pmd_size); +} + +static void collapse_full(void) +{ + void *p; + + p = alloc_mapping(); + fill_memory(p, 0, hpage_pmd_size); + if (wait_for_scan("Collapse fully populated PTE table", p)) + fail("Timeout"); + else if (check_huge(p)) + success("OK"); + else + fail("Fail"); + validate_memory(p, 0, hpage_pmd_size); + munmap(p, hpage_pmd_size); +} + +static void collapse_empty(void) +{ + void *p; + + p = alloc_mapping(); + if (wait_for_scan("Do not collapse empty PTE table", p)) + fail("Timeout"); + else if (check_huge(p)) + fail("Fail"); + else + success("OK"); + munmap(p, hpage_pmd_size); +} + +static void collapse_single_pte_entry(void) +{ + void *p; + + p = alloc_mapping(); + fill_memory(p, 0, page_size); + if (wait_for_scan("Collapse PTE table with single PTE entry present", p)) + fail("Timeout"); + else if (check_huge(p)) + success("OK"); + else + fail("Fail"); + validate_memory(p, 0, page_size); + munmap(p, hpage_pmd_size); +} + +static void collapse_max_ptes_none(void) +{ + int max_ptes_none = hpage_pmd_nr / 2; + struct settings settings = default_settings; + void *p; + + settings.khugepaged.max_ptes_none = max_ptes_none; + write_settings(&settings); + + p = alloc_mapping(); + + fill_memory(p, 0, (hpage_pmd_nr - max_ptes_none - 1) * page_size); + if (wait_for_scan("Do not collapse with max_ptes_none exceeded", p)) + fail("Timeout"); + else if (check_huge(p)) + fail("Fail"); + else + success("OK"); + validate_memory(p, 0, (hpage_pmd_nr - max_ptes_none - 1) * page_size); + + fill_memory(p, 0, (hpage_pmd_nr - max_ptes_none) * page_size); + if (wait_for_scan("Collapse with max_ptes_none PTEs empty", p)) + fail("Timeout"); + else if (check_huge(p)) + success("OK"); + else + fail("Fail"); + validate_memory(p, 0, (hpage_pmd_nr - max_ptes_none) * page_size); + + munmap(p, hpage_pmd_size); + 
write_settings(&default_settings); +} + +static void collapse_swapin_single_pte(void) +{ + void *p; + p = alloc_mapping(); + fill_memory(p, 0, hpage_pmd_size); + + printf("Swapout one page..."); + if (madvise(p, page_size, MADV_PAGEOUT)) { + perror("madvise(MADV_PAGEOUT)"); + exit(EXIT_FAILURE); + } + if (check_swap(p, page_size)) { + success("OK"); + } else { + fail("Fail"); + goto out; + } + + if (wait_for_scan("Collapse with swapping in single PTE entry", p)) + fail("Timeout"); + else if (check_huge(p)) + success("OK"); + else + fail("Fail"); + validate_memory(p, 0, hpage_pmd_size); +out: + munmap(p, hpage_pmd_size); +} + +static void collapse_max_ptes_swap(void) +{ + int max_ptes_swap = read_num("khugepaged/max_ptes_swap"); + void *p; + + p = alloc_mapping(); + + fill_memory(p, 0, hpage_pmd_size); + printf("Swapout %d of %d pages...", max_ptes_swap + 1, hpage_pmd_nr); + if (madvise(p, (max_ptes_swap + 1) * page_size, MADV_PAGEOUT)) { + perror("madvise(MADV_PAGEOUT)"); + exit(EXIT_FAILURE); + } + if (check_swap(p, (max_ptes_swap + 1) * page_size)) { + success("OK"); + } else { + fail("Fail"); + goto out; + } + + if (wait_for_scan("Do not collapse with max_ptes_swap exceeded", p)) + fail("Timeout"); + else if (check_huge(p)) + fail("Fail"); + else + success("OK"); + validate_memory(p, 0, hpage_pmd_size); + + fill_memory(p, 0, hpage_pmd_size); + printf("Swapout %d of %d pages...", max_ptes_swap, hpage_pmd_nr); + if (madvise(p, max_ptes_swap * page_size, MADV_PAGEOUT)) { + perror("madvise(MADV_PAGEOUT)"); + exit(EXIT_FAILURE); + } + if (check_swap(p, max_ptes_swap * page_size)) { + success("OK"); + } else { + fail("Fail"); + goto out; + } + + if (wait_for_scan("Collapse with max_ptes_swap pages swapped out", p)) + fail("Timeout"); + else if (check_huge(p)) + success("OK"); + else + fail("Fail"); + validate_memory(p, 0, hpage_pmd_size); +out: + munmap(p, hpage_pmd_size); +} + +static void collapse_single_pte_entry_compound(void) +{ + void *p; + + p = 
alloc_mapping(); + + printf("Allocate huge page..."); + madvise(p, hpage_pmd_size, MADV_HUGEPAGE); + fill_memory(p, 0, hpage_pmd_size); + if (check_huge(p)) + success("OK"); + else + fail("Fail"); + madvise(p, hpage_pmd_size, MADV_NOHUGEPAGE); + + printf("Split huge page leaving single PTE mapping compound page..."); + madvise(p + page_size, hpage_pmd_size - page_size, MADV_DONTNEED); + if (!check_huge(p)) + success("OK"); + else + fail("Fail"); + + if (wait_for_scan("Collapse PTE table with single PTE mapping compound page", p)) + fail("Timeout"); + else if (check_huge(p)) + success("OK"); + else + fail("Fail"); + validate_memory(p, 0, page_size); + munmap(p, hpage_pmd_size); +} + +static void collapse_full_of_compound(void) +{ + void *p; + + p = alloc_mapping(); + + printf("Allocate huge page..."); + madvise(p, hpage_pmd_size, MADV_HUGEPAGE); + fill_memory(p, 0, hpage_pmd_size); + if (check_huge(p)) + success("OK"); + else + fail("Fail"); + + printf("Split huge page leaving single PTE page table full of compound pages..."); + madvise(p, page_size, MADV_NOHUGEPAGE); + madvise(p, hpage_pmd_size, MADV_NOHUGEPAGE); + if (!check_huge(p)) + success("OK"); + else + fail("Fail"); + + if (wait_for_scan("Collapse PTE table full of compound pages", p)) + fail("Timeout"); + else if (check_huge(p)) + success("OK"); + else + fail("Fail"); + validate_memory(p, 0, hpage_pmd_size); + munmap(p, hpage_pmd_size); +} + +static void collapse_compound_extreme(void) +{ + void *p; + int i; + + p = alloc_mapping(); + for (i = 0; i < hpage_pmd_nr; i++) { + printf("\rConstruct PTE page table full of different PTE-mapped compound pages %3d/%d...", + i + 1, hpage_pmd_nr); + + madvise(BASE_ADDR, hpage_pmd_size, MADV_HUGEPAGE); + fill_memory(BASE_ADDR, 0, hpage_pmd_size); + if (!check_huge(BASE_ADDR)) { + printf("Failed to allocate huge page\n"); + exit(EXIT_FAILURE); + } + madvise(BASE_ADDR, hpage_pmd_size, MADV_NOHUGEPAGE); + + p = mremap(BASE_ADDR - i * page_size, + i * page_size + 
hpage_pmd_size, + (i + 1) * page_size, + MREMAP_MAYMOVE | MREMAP_FIXED, + BASE_ADDR + 2 * hpage_pmd_size); + if (p == MAP_FAILED) { + perror("mremap+unmap"); + exit(EXIT_FAILURE); + } + + p = mremap(BASE_ADDR + 2 * hpage_pmd_size, + (i + 1) * page_size, + (i + 1) * page_size + hpage_pmd_size, + MREMAP_MAYMOVE | MREMAP_FIXED, + BASE_ADDR - (i + 1) * page_size); + if (p == MAP_FAILED) { + perror("mremap+alloc"); + exit(EXIT_FAILURE); + } + } + + munmap(BASE_ADDR, hpage_pmd_size); + fill_memory(p, 0, hpage_pmd_size); + if (!check_huge(p)) + success("OK"); + else + fail("Fail"); + + if (wait_for_scan("Collapse PTE table full of different compound pages", p)) + fail("Timeout"); + else if (check_huge(p)) + success("OK"); + else + fail("Fail"); + + validate_memory(p, 0, hpage_pmd_size); + munmap(p, hpage_pmd_size); +} + +static void collapse_fork(void) +{ + int wstatus; + void *p; + + p = alloc_mapping(); + + printf("Allocate small page..."); + fill_memory(p, 0, page_size); + if (!check_huge(p)) + success("OK"); + else + fail("Fail"); + + printf("Share small page over fork()..."); + if (!fork()) { + /* Do not touch settings on child exit */ + skip_settings_restore = true; + exit_status = 0; + + if (!check_huge(p)) + success("OK"); + else + fail("Fail"); + + fill_memory(p, page_size, 2 * page_size); + + if (wait_for_scan("Collapse PTE table with single page shared with parent process", p)) + fail("Timeout"); + else if (check_huge(p)) + success("OK"); + else + fail("Fail"); + + validate_memory(p, 0, page_size); + munmap(p, hpage_pmd_size); + exit(exit_status); + } + + wait(&wstatus); + exit_status += WEXITSTATUS(wstatus); + + printf("Check if parent still has small page..."); + if (!check_huge(p)) + success("OK"); + else + fail("Fail"); + validate_memory(p, 0, page_size); + munmap(p, hpage_pmd_size); +} + +static void collapse_fork_compound(void) +{ + int wstatus; + void *p; + + p = alloc_mapping(); + + printf("Allocate huge page..."); + madvise(p, hpage_pmd_size, 
MADV_HUGEPAGE); + fill_memory(p, 0, hpage_pmd_size); + if (check_huge(p)) + success("OK"); + else + fail("Fail"); + + printf("Share huge page over fork()..."); + if (!fork()) { + /* Do not touch settings on child exit */ + skip_settings_restore = true; + exit_status = 0; + + if (check_huge(p)) + success("OK"); + else + fail("Fail"); + + printf("Split huge page PMD in child process..."); + madvise(p, page_size, MADV_NOHUGEPAGE); + madvise(p, hpage_pmd_size, MADV_NOHUGEPAGE); + if (!check_huge(p)) + success("OK"); + else + fail("Fail"); + fill_memory(p, 0, page_size); + + if (wait_for_scan("Collapse PTE table full of compound pages in child", p)) + fail("Timeout"); + else if (check_huge(p)) + success("OK"); + else + fail("Fail"); + + validate_memory(p, 0, hpage_pmd_size); + munmap(p, hpage_pmd_size); + exit(exit_status); + } + + wait(&wstatus); + exit_status += WEXITSTATUS(wstatus); + + printf("Check if parent still has huge page..."); + if (check_huge(p)) + success("OK"); + else + fail("Fail"); + validate_memory(p, 0, hpage_pmd_size); + munmap(p, hpage_pmd_size); +} + +int main(void) +{ + setbuf(stdout, NULL); + + page_size = getpagesize(); + hpage_pmd_size = read_num("hpage_pmd_size"); + hpage_pmd_nr = hpage_pmd_size / page_size; + + default_settings.khugepaged.max_ptes_none = hpage_pmd_nr - 1; + default_settings.khugepaged.max_ptes_swap = hpage_pmd_nr / 8; + default_settings.khugepaged.pages_to_scan = hpage_pmd_nr * 8; + + save_settings(); + adjust_settings(); + + alloc_at_fault(); + collapse_full(); + collapse_empty(); + collapse_single_pte_entry(); + collapse_max_ptes_none(); + collapse_swapin_single_pte(); + collapse_max_ptes_swap(); + collapse_single_pte_entry_compound(); + collapse_full_of_compound(); + collapse_compound_extreme(); + collapse_fork(); + collapse_fork_compound(); + + restore_settings(0); +} --- a/tools/testing/selftests/vm/Makefile~khugepaged-add-self-test +++ a/tools/testing/selftests/vm/Makefile @@ -20,6 +20,7 @@ TEST_GEN_FILES += 
on-fault-limit TEST_GEN_FILES += thuge-gen TEST_GEN_FILES += transhuge-stress TEST_GEN_FILES += userfaultfd +TEST_GEN_FILES += khugepaged ifneq (,$(filter $(MACHINE),arm64 ia64 mips64 parisc64 ppc64 ppc64le riscv64 s390x sh64 sparc64 x86_64)) TEST_GEN_FILES += va_128TBswitch _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 062/131] khugepaged: do not stop collapse if less than half PTEs are referenced 2020-06-03 22:55 incoming Andrew Morton ` (60 preceding siblings ...) 2020-06-03 23:00 ` [patch 061/131] khugepaged: add self test Andrew Morton @ 2020-06-03 23:00 ` Andrew Morton 2020-06-03 23:00 ` [patch 063/131] khugepaged: drain all LRU caches before scanning pages Andrew Morton ` (69 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:00 UTC (permalink / raw) To: aarcange, akpm, jhubbard, kirill.shutemov, linux-mm, mike.kravetz, mm-commits, rcampbell, torvalds, william.kucharski, yang.shi, ziy From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Subject: khugepaged: do not stop collapse if less than half PTEs are referenced __collapse_huge_page_swapin() checks the number of referenced PTEs to decide if the memory range is hot enough to justify swapin. There are a few problems with this approach: - It is way too late: we can do the check much earlier and save time. khugepaged_scan_pmd() already knows if we have any pages to swap in and the number of referenced pages. - It stops the collapse altogether if there are not enough referenced pages, instead of stopping only the swap-in. Fix it by making the right check early. We can also avoid additional page table scanning if khugepaged_scan_pmd() hasn't found any swap entries. Link: http://lkml.kernel.org/r/20200416160026.16538-3-kirill.shutemov@linux.intel.com Signed-off-by: Kirill A. 
Shutemov <kirill.shutemov@linux.intel.com> Fixes: 0db501f7a34c ("mm, thp: convert from optimistic swapin collapsing to conservative") Reviewed-by: William Kucharski <william.kucharski@oracle.com> Tested-by: Zi Yan <ziy@nvidia.com> Reviewed-by: Zi Yan <ziy@nvidia.com> Acked-by: Yang Shi <yang.shi@linux.alibaba.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Ralph Campbell <rcampbell@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/khugepaged.c | 27 +++++++++++---------------- 1 file changed, 11 insertions(+), 16 deletions(-) --- a/mm/khugepaged.c~khugepaged-do-not-stop-collapse-if-less-than-half-ptes-are-referenced +++ a/mm/khugepaged.c @@ -899,11 +899,6 @@ static bool __collapse_huge_page_swapin( .pgoff = linear_page_index(vma, address), }; - /* we only decide to swapin, if there is enough young ptes */ - if (referenced < HPAGE_PMD_NR/2) { - trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0); - return false; - } vmf.pte = pte_offset_map(pmd, address); for (; vmf.address < address + HPAGE_PMD_NR*PAGE_SIZE; vmf.pte++, vmf.address += PAGE_SIZE) { @@ -943,7 +938,7 @@ static bool __collapse_huge_page_swapin( static void collapse_huge_page(struct mm_struct *mm, unsigned long address, struct page **hpage, - int node, int referenced) + int node, int referenced, int unmapped) { pmd_t *pmd, _pmd; pte_t *pte; @@ -1000,7 +995,8 @@ static void collapse_huge_page(struct mm * If it fails, we release mmap_sem and jump out_nolock. * Continuing to collapse causes inconsistency. 
*/ - if (!__collapse_huge_page_swapin(mm, vma, address, pmd, referenced)) { + if (unmapped && !__collapse_huge_page_swapin(mm, vma, address, + pmd, referenced)) { mem_cgroup_cancel_charge(new_page, memcg, true); up_read(&mm->mmap_sem); goto out_nolock; @@ -1233,22 +1229,21 @@ static int khugepaged_scan_pmd(struct mm mmu_notifier_test_young(vma->vm_mm, address)) referenced++; } - if (writable) { - if (referenced) { - result = SCAN_SUCCEED; - ret = 1; - } else { - result = SCAN_LACK_REFERENCED_PAGE; - } - } else { + if (!writable) { result = SCAN_PAGE_RO; + } else if (!referenced || (unmapped && referenced < HPAGE_PMD_NR/2)) { + result = SCAN_LACK_REFERENCED_PAGE; + } else { + result = SCAN_SUCCEED; + ret = 1; } out_unmap: pte_unmap_unlock(pte, ptl); if (ret) { node = khugepaged_find_target_node(); /* collapse_huge_page will return with the mmap_sem released */ - collapse_huge_page(mm, address, hpage, node, referenced); + collapse_huge_page(mm, address, hpage, node, + referenced, unmapped); } out: trace_mm_khugepaged_scan_pmd(mm, page, writable, referenced, _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 063/131] khugepaged: drain all LRU caches before scanning pages 2020-06-03 22:55 incoming Andrew Morton ` (61 preceding siblings ...) 2020-06-03 23:00 ` [patch 062/131] khugepaged: do not stop collapse if less than half PTEs are referenced Andrew Morton @ 2020-06-03 23:00 ` Andrew Morton 2020-06-03 23:00 ` [patch 064/131] khugepaged: drain LRU add pagevec after swapin Andrew Morton ` (68 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:00 UTC (permalink / raw) To: aarcange, akpm, jhubbard, kirill.shutemov, linux-mm, mike.kravetz, mm-commits, rcampbell, torvalds, william.kucharski, yang.shi, ziy From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Subject: khugepaged: drain all LRU caches before scanning pages Having a page in the LRU add cache offsets the page refcount and gives a false negative on PageLRU(). This reduces the collapse success rate. Drain all LRU add caches before scanning. This happens relatively rarely and should not disturb the system too much. Link: http://lkml.kernel.org/r/20200416160026.16538-4-kirill.shutemov@linux.intel.com Signed-off-by: Kirill A. 
Shutemov <kirill.shutemov@linux.intel.com> Reviewed-by: William Kucharski <william.kucharski@oracle.com> Reviewed-by: Zi Yan <ziy@nvidia.com> Tested-by: Zi Yan <ziy@nvidia.com> Acked-by: Yang Shi <yang.shi@linux.alibaba.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Ralph Campbell <rcampbell@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/khugepaged.c | 2 ++ 1 file changed, 2 insertions(+) --- a/mm/khugepaged.c~khugepaged-drain-all-lru-caches-before-scanning-pages +++ a/mm/khugepaged.c @@ -2079,6 +2079,8 @@ static void khugepaged_do_scan(void) barrier(); /* write khugepaged_pages_to_scan to local stack */ + lru_add_drain_all(); + while (progress < pages) { if (!khugepaged_prealloc_page(&hpage, &wait)) break; _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 064/131] khugepaged: drain LRU add pagevec after swapin 2020-06-03 22:55 incoming Andrew Morton ` (62 preceding siblings ...) 2020-06-03 23:00 ` [patch 063/131] khugepaged: drain all LRU caches before scanning pages Andrew Morton @ 2020-06-03 23:00 ` Andrew Morton 2020-06-03 23:00 ` [patch 065/131] khugepaged: allow to collapse a page shared across fork Andrew Morton ` (67 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:00 UTC (permalink / raw) To: aarcange, akpm, jhubbard, kirill.shutemov, linux-mm, mike.kravetz, mm-commits, rcampbell, torvalds, william.kucharski, yang.shi, ziy From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Subject: khugepaged: drain LRU add pagevec after swapin collapse_huge_page() tries to swap in pages that are part of the PMD range. A just swapped-in page goes through the LRU add cache, which takes an extra reference on the page. The extra reference can make the collapse fail: the following __collapse_huge_page_isolate() would check the refcount and abort the collapse on seeing an unexpected refcount. The fix is to drain the local LRU add cache in __collapse_huge_page_swapin() if we successfully swapped in any pages. Link: http://lkml.kernel.org/r/20200416160026.16538-5-kirill.shutemov@linux.intel.com Signed-off-by: Kirill A. 
Shutemov <kirill.shutemov@linux.intel.com> Reviewed-by: William Kucharski <william.kucharski@oracle.com> Reviewed-by: Zi Yan <ziy@nvidia.com> Tested-by: Zi Yan <ziy@nvidia.com> Acked-by: Yang Shi <yang.shi@linux.alibaba.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Ralph Campbell <rcampbell@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/khugepaged.c | 5 +++++ 1 file changed, 5 insertions(+) --- a/mm/khugepaged.c~khugepaged-drain-lru-add-pagevec-after-swapin +++ a/mm/khugepaged.c @@ -931,6 +931,11 @@ static bool __collapse_huge_page_swapin( } vmf.pte--; pte_unmap(vmf.pte); + + /* Drain LRU add pagevec to remove extra pin on the swapped in pages */ + if (swapped_in) + lru_add_drain(); + trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 1); return true; } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 065/131] khugepaged: allow to collapse a page shared across fork 2020-06-03 22:55 incoming Andrew Morton ` (63 preceding siblings ...) 2020-06-03 23:00 ` [patch 064/131] khugepaged: drain LRU add pagevec after swapin Andrew Morton @ 2020-06-03 23:00 ` Andrew Morton 2020-06-03 23:00 ` [patch 066/131] khugepaged: allow to collapse PTE-mapped compound pages Andrew Morton ` (66 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:00 UTC (permalink / raw) To: aarcange, akpm, jhubbard, kirill.shutemov, linux-mm, mike.kravetz, mm-commits, rcampbell, torvalds, william.kucharski, yang.shi, ziy From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Subject: khugepaged: allow to collapse a page shared across fork A page can be included in the collapse as long as it doesn't have extra pins (from GUP or otherwise). The logic to check the refcount is moved to a separate function. For pages in the swap cache, add compound_nr(page) to the expected refcount in order to handle the compound page case. This is in preparation for the following patch. VM_BUG_ON_PAGE() was removed from __collapse_huge_page_copy() as the invariant it checks is no longer valid: the source can be mapped multiple times now. [yang.shi@linux.alibaba.com: remove error message when checking external pins] Link: http://lkml.kernel.org/r/1589317383-9595-1-git-send-email-yang.shi@linux.alibaba.com [cai@lca.pw: fix set-but-not-used warning] Link: http://lkml.kernel.org/r/20200521145644.GA6367@ovpn-112-192.phx2.redhat.com Link: http://lkml.kernel.org/r/20200416160026.16538-6-kirill.shutemov@linux.intel.com Signed-off-by: Kirill A. 
Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com> Reviewed-by: William Kucharski <william.kucharski@oracle.com> Reviewed-by: Zi Yan <ziy@nvidia.com> Tested-by: Zi Yan <ziy@nvidia.com> Acked-by: Yang Shi <yang.shi@linux.alibaba.com> Reviewed-by: John Hubbard <jhubbard@nvidia.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Ralph Campbell <rcampbell@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/khugepaged.c | 46 +++++++++++++++++++++++++++++++++++++--------- 1 file changed, 37 insertions(+), 9 deletions(-) --- a/mm/khugepaged.c~khugepaged-allow-to-collapse-a-page-shared-across-fork +++ a/mm/khugepaged.c @@ -526,6 +526,17 @@ static void release_pte_pages(pte_t *pte } } +static bool is_refcount_suitable(struct page *page) +{ + int expected_refcount; + + expected_refcount = total_mapcount(page); + if (PageSwapCache(page)) + expected_refcount += compound_nr(page); + + return page_count(page) == expected_refcount; +} + static int __collapse_huge_page_isolate(struct vm_area_struct *vma, unsigned long address, pte_t *pte) @@ -578,11 +589,17 @@ static int __collapse_huge_page_isolate( } /* - * cannot use mapcount: can't collapse if there's a gup pin. - * The page must only be referenced by the scanned process - * and page swap cache. + * Check if the page has any GUP (or other external) pins. + * + * The page table that maps the page has been already unlinked + * from the page table tree and this process cannot get + * an additional pin on the page. + * + * New pins can come later if the page is shared across fork, + * but not from this process. The other process cannot write to + * the page, only trigger CoW. 
*/ - if (page_count(page) != 1 + PageSwapCache(page)) { + if (!is_refcount_suitable(page)) { unlock_page(page); result = SCAN_PAGE_COUNT; goto out; @@ -669,7 +686,6 @@ static void __collapse_huge_page_copy(pt } else { src_page = pte_page(pteval); copy_user_highpage(page, src_page, address, vma); - VM_BUG_ON_PAGE(page_mapcount(src_page) != 1, src_page); release_pte_page(src_page); /* * ptl mostly unnecessary, but preempt has to @@ -1221,11 +1237,23 @@ static int khugepaged_scan_pmd(struct mm } /* - * cannot use mapcount: can't collapse if there's a gup pin. - * The page must only be referenced by the scanned process - * and page swap cache. + * Check if the page has any GUP (or other external) pins. + * + * Here the check is racy: it may see total_mapcount > refcount + * in some cases. + * For example, one process with one forked child process. + * The parent has the PMD split due to MADV_DONTNEED, then + * the child is trying to unmap the whole PMD, but khugepaged + * may be scanning the parent between the child clearing the + * PageDoubleMap flag and decrementing the mapcount. So + * khugepaged may see total_mapcount > refcount. + * + * But such a case is ephemeral; we could always retry the + * collapse later. However, it may report a false positive if + * the page has excessive GUP pins (i.e. 512). Anyway, the same + * check will be done again later and the risk seems low. */ - if (page_count(page) != 1 + PageSwapCache(page)) { + if (!is_refcount_suitable(page)) { result = SCAN_PAGE_COUNT; goto out_unmap; } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 066/131] khugepaged: allow to collapse PTE-mapped compound pages 2020-06-03 22:55 incoming Andrew Morton ` (64 preceding siblings ...) 2020-06-03 23:00 ` [patch 065/131] khugepaged: allow to collapse a page shared across fork Andrew Morton @ 2020-06-03 23:00 ` Andrew Morton 2020-06-03 23:00 ` [patch 067/131] thp: change CoW semantics for anon-THP Andrew Morton ` (65 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:00 UTC (permalink / raw) To: aarcange, akpm, jhubbard, kirill.shutemov, linux-mm, mike.kravetz, mm-commits, rcampbell, torvalds, william.kucharski, yang.shi, ziy From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Subject: khugepaged: allow to collapse PTE-mapped compound pages We can collapse PTE-mapped compound pages. We only need to avoid handling them more than once: lock/unlock the page only once if it's present in the PMD range multiple times, as it is handled at the compound level. The same goes for LRU isolation and putback. Link: http://lkml.kernel.org/r/20200416160026.16538-7-kirill.shutemov@linux.intel.com Signed-off-by: Kirill A. 
Shutemov <kirill.shutemov@linux.intel.com> Reviewed-by: William Kucharski <william.kucharski@oracle.com> Reviewed-by: Zi Yan <ziy@nvidia.com> Tested-by: Zi Yan <ziy@nvidia.com> Acked-by: Yang Shi <yang.shi@linux.alibaba.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Ralph Campbell <rcampbell@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/khugepaged.c | 99 ++++++++++++++++++++++++++++++---------------- 1 file changed, 65 insertions(+), 34 deletions(-) --- a/mm/khugepaged.c~khugepaged-allow-to-collapse-pte-mapped-compound-pages +++ a/mm/khugepaged.c @@ -512,17 +512,30 @@ void __khugepaged_exit(struct mm_struct static void release_pte_page(struct page *page) { - dec_node_page_state(page, NR_ISOLATED_ANON + page_is_file_lru(page)); + mod_node_page_state(page_pgdat(page), + NR_ISOLATED_ANON + page_is_file_lru(page), + -compound_nr(page)); unlock_page(page); putback_lru_page(page); } -static void release_pte_pages(pte_t *pte, pte_t *_pte) +static void release_pte_pages(pte_t *pte, pte_t *_pte, + struct list_head *compound_pagelist) { + struct page *page, *tmp; + while (--_pte >= pte) { pte_t pteval = *_pte; - if (!pte_none(pteval) && !is_zero_pfn(pte_pfn(pteval))) - release_pte_page(pte_page(pteval)); + + page = pte_page(pteval); + if (!pte_none(pteval) && !is_zero_pfn(pte_pfn(pteval)) && + !PageCompound(page)) + release_pte_page(page); + } + + list_for_each_entry_safe(page, tmp, compound_pagelist, lru) { + list_del(&page->lru); + release_pte_page(page); } } @@ -539,7 +552,8 @@ static bool is_refcount_suitable(struct static int __collapse_huge_page_isolate(struct vm_area_struct *vma, unsigned long address, - pte_t *pte) + pte_t *pte, + struct list_head *compound_pagelist) { struct page *page = NULL; pte_t *_pte; @@ -569,13 +583,21 @@ static int __collapse_huge_page_isolate( goto out; } - /* TODO: teach khugepaged to collapse THP mapped with pte */ + 
VM_BUG_ON_PAGE(!PageAnon(page), page); + if (PageCompound(page)) { - result = SCAN_PAGE_COMPOUND; - goto out; - } + struct page *p; + page = compound_head(page); - VM_BUG_ON_PAGE(!PageAnon(page), page); + /* + * Check if we have dealt with the compound page + * already + */ + list_for_each_entry(p, compound_pagelist, lru) { + if (page == p) + goto next; + } + } /* * We can do it before isolate_lru_page because the @@ -604,19 +626,15 @@ static int __collapse_huge_page_isolate( result = SCAN_PAGE_COUNT; goto out; } - if (pte_write(pteval)) { - writable = true; - } else { - if (PageSwapCache(page) && - !reuse_swap_page(page, NULL)) { - unlock_page(page); - result = SCAN_SWAP_CACHE_PAGE; - goto out; - } + if (!pte_write(pteval) && PageSwapCache(page) && + !reuse_swap_page(page, NULL)) { /* - * Page is not in the swap cache. It can be collapsed - * into a THP. + * Page is in the swap cache and cannot be re-used. + * It cannot be collapsed into a THP. */ + unlock_page(page); + result = SCAN_SWAP_CACHE_PAGE; + goto out; } /* @@ -628,16 +646,23 @@ static int __collapse_huge_page_isolate( result = SCAN_DEL_PAGE_LRU; goto out; } - inc_node_page_state(page, - NR_ISOLATED_ANON + page_is_file_lru(page)); + mod_node_page_state(page_pgdat(page), + NR_ISOLATED_ANON + page_is_file_lru(page), + compound_nr(page)); VM_BUG_ON_PAGE(!PageLocked(page), page); VM_BUG_ON_PAGE(PageLRU(page), page); + if (PageCompound(page)) + list_add_tail(&page->lru, compound_pagelist); +next: /* There should be enough young pte to collapse the page */ if (pte_young(pteval) || page_is_young(page) || PageReferenced(page) || mmu_notifier_test_young(vma->vm_mm, address)) referenced++; + + if (pte_write(pteval)) + writable = true; } if (likely(writable)) { if (likely(referenced)) { @@ -651,7 +676,7 @@ static int __collapse_huge_page_isolate( } out: - release_pte_pages(pte, _pte); + release_pte_pages(pte, _pte, compound_pagelist); trace_mm_collapse_huge_page_isolate(page, none_or_zero, referenced, writable, 
result); return 0; @@ -660,13 +685,14 @@ out: static void __collapse_huge_page_copy(pte_t *pte, struct page *page, struct vm_area_struct *vma, unsigned long address, - spinlock_t *ptl) + spinlock_t *ptl, + struct list_head *compound_pagelist) { + struct page *src_page, *tmp; pte_t *_pte; for (_pte = pte; _pte < pte + HPAGE_PMD_NR; _pte++, page++, address += PAGE_SIZE) { pte_t pteval = *_pte; - struct page *src_page; if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) { clear_user_highpage(page, address); @@ -686,7 +712,8 @@ static void __collapse_huge_page_copy(pt } else { src_page = pte_page(pteval); copy_user_highpage(page, src_page, address, vma); - release_pte_page(src_page); + if (!PageCompound(src_page)) + release_pte_page(src_page); /* * ptl mostly unnecessary, but preempt has to * be disabled to update the per-cpu stats @@ -703,6 +730,11 @@ static void __collapse_huge_page_copy(pt free_page_and_swap_cache(src_page); } } + + list_for_each_entry_safe(src_page, tmp, compound_pagelist, lru) { + list_del(&src_page->lru); + release_pte_page(src_page); + } } static void khugepaged_alloc_sleep(void) @@ -961,6 +993,7 @@ static void collapse_huge_page(struct mm struct page **hpage, int node, int referenced, int unmapped) { + LIST_HEAD(compound_pagelist); pmd_t *pmd, _pmd; pte_t *pte; pgtable_t pgtable; @@ -1061,7 +1094,8 @@ static void collapse_huge_page(struct mm mmu_notifier_invalidate_range_end(&range); spin_lock(pte_ptl); - isolated = __collapse_huge_page_isolate(vma, address, pte); + isolated = __collapse_huge_page_isolate(vma, address, pte, + &compound_pagelist); spin_unlock(pte_ptl); if (unlikely(!isolated)) { @@ -1086,7 +1120,8 @@ static void collapse_huge_page(struct mm */ anon_vma_unlock_write(vma->anon_vma); - __collapse_huge_page_copy(pte, new_page, vma, address, pte_ptl); + __collapse_huge_page_copy(pte, new_page, vma, address, pte_ptl, + &compound_pagelist); pte_unmap(pte); __SetPageUptodate(new_page); pgtable = pmd_pgtable(_pmd); @@ -1205,11 +1240,7 
@@ static int khugepaged_scan_pmd(struct mm goto out_unmap; } - /* TODO: teach khugepaged to collapse THP mapped with pte */ - if (PageCompound(page)) { - result = SCAN_PAGE_COMPOUND; - goto out_unmap; - } + page = compound_head(page); /* * Record which node the original page is from and save this _ ^ permalink raw reply [flat|nested] 349+ messages in thread
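In the isolate loop above, several PTEs may map tail pages of the same PTE-mapped THP, and they all resolve to one compound head; the patch therefore records each head once on `compound_pagelist` and skips PTEs whose head is already on the list. A minimal userspace sketch of that dedup idea (the struct layout and names are hypothetical stand-ins, not the kernel's `struct page` or list API):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of a page: tail pages point at their compound head,
 * the head itself has head == NULL (stand-in for compound_head()). */
struct page { struct page *head; struct page *next; };

/* Singly linked list of compound heads already handled,
 * playing the role of compound_pagelist. */
static struct page *seen = NULL;

/* Return 1 if the compound head was newly tracked,
 * 0 if this compound page was already dealt with. */
static int track_compound(struct page *pg)
{
    struct page *head = pg->head ? pg->head : pg; /* compound_head(pg) */

    for (struct page *p = seen; p; p = p->next)
        if (p == head)
            return 0; /* like the list_for_each_entry() check, goto next */

    head->next = seen; /* like list_add_tail(&page->lru, ...) */
    seen = head;
    return 1;
}
```

Only the first PTE hitting a given THP does the isolation work; the release path then walks the list once per compound page, which is why `release_pte_page()` switches to `mod_node_page_state()` with `-compound_nr(page)`.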
* [patch 067/131] thp: change CoW semantics for anon-THP
  2020-06-03 22:55 incoming Andrew Morton
  ` (65 preceding siblings ...)
  2020-06-03 23:00 ` [patch 066/131] khugepaged: allow to collapse PTE-mapped compound pages Andrew Morton
@ 2020-06-03 23:00 ` Andrew Morton
  2020-06-03 23:00 ` [patch 068/131] khugepaged: introduce 'max_ptes_shared' tunable Andrew Morton
  ` (64 subsequent siblings)
  131 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2020-06-03 23:00 UTC (permalink / raw)
To: aarcange, akpm, jhubbard, kirill.shutemov, linux-mm, mike.kravetz, mm-commits, rcampbell, torvalds, william.kucharski, yang.shi, ziy

From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: thp: change CoW semantics for anon-THP

Currently we have different copy-on-write semantics for anon- and file-THP. For anon-THP we try to allocate a huge page on the write fault, but for file-THP we split the PMD and allocate a 4k page. Arguably, the file-THP semantics are more desirable: we don't necessarily want to unshare the full PMD range from the parent on the first access. This is the primary reason THP is unusable for some workloads, like Redis.

The original THP refcounting didn't allow PTE-mapped compound pages, so we had no option but to allocate a huge page on CoW (with fallback to 512 4k pages). The current refcounting doesn't have such limitations and we can cut a lot of complex code out of the fault path. khugepaged is now able to recover THP from such ranges if the configuration allows.

Link: http://lkml.kernel.org/r/20200416160026.16538-8-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A.
Shutemov <kirill.shutemov@linux.intel.com> Reviewed-by: William Kucharski <william.kucharski@oracle.com> Reviewed-by: Zi Yan <ziy@nvidia.com> Tested-by: Zi Yan <ziy@nvidia.com> Acked-by: Yang Shi <yang.shi@linux.alibaba.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Ralph Campbell <rcampbell@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/huge_memory.c | 250 ++++----------------------------------------- 1 file changed, 25 insertions(+), 225 deletions(-) --- a/mm/huge_memory.c~thp-change-cow-semantics-for-anon-thp +++ a/mm/huge_memory.c @@ -1255,263 +1255,63 @@ unlock: spin_unlock(vmf->ptl); } -static vm_fault_t do_huge_pmd_wp_page_fallback(struct vm_fault *vmf, - pmd_t orig_pmd, struct page *page) -{ - struct vm_area_struct *vma = vmf->vma; - unsigned long haddr = vmf->address & HPAGE_PMD_MASK; - struct mem_cgroup *memcg; - pgtable_t pgtable; - pmd_t _pmd; - int i; - vm_fault_t ret = 0; - struct page **pages; - struct mmu_notifier_range range; - - pages = kmalloc_array(HPAGE_PMD_NR, sizeof(struct page *), - GFP_KERNEL); - if (unlikely(!pages)) { - ret |= VM_FAULT_OOM; - goto out; - } - - for (i = 0; i < HPAGE_PMD_NR; i++) { - pages[i] = alloc_page_vma_node(GFP_HIGHUSER_MOVABLE, vma, - vmf->address, page_to_nid(page)); - if (unlikely(!pages[i] || - mem_cgroup_try_charge_delay(pages[i], vma->vm_mm, - GFP_KERNEL, &memcg, false))) { - if (pages[i]) - put_page(pages[i]); - while (--i >= 0) { - memcg = (void *)page_private(pages[i]); - set_page_private(pages[i], 0); - mem_cgroup_cancel_charge(pages[i], memcg, - false); - put_page(pages[i]); - } - kfree(pages); - ret |= VM_FAULT_OOM; - goto out; - } - set_page_private(pages[i], (unsigned long)memcg); - } - - for (i = 0; i < HPAGE_PMD_NR; i++) { - copy_user_highpage(pages[i], page + i, - haddr + PAGE_SIZE * i, vma); - __SetPageUptodate(pages[i]); - cond_resched(); - } - - mmu_notifier_range_init(&range, 
MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, - haddr, haddr + HPAGE_PMD_SIZE); - mmu_notifier_invalidate_range_start(&range); - - vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd); - if (unlikely(!pmd_same(*vmf->pmd, orig_pmd))) - goto out_free_pages; - VM_BUG_ON_PAGE(!PageHead(page), page); - - /* - * Leave pmd empty until pte is filled note we must notify here as - * concurrent CPU thread might write to new page before the call to - * mmu_notifier_invalidate_range_end() happens which can lead to a - * device seeing memory write in different order than CPU. - * - * See Documentation/vm/mmu_notifier.rst - */ - pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd); - - pgtable = pgtable_trans_huge_withdraw(vma->vm_mm, vmf->pmd); - pmd_populate(vma->vm_mm, &_pmd, pgtable); - - for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) { - pte_t entry; - entry = mk_pte(pages[i], vma->vm_page_prot); - entry = maybe_mkwrite(pte_mkdirty(entry), vma); - memcg = (void *)page_private(pages[i]); - set_page_private(pages[i], 0); - page_add_new_anon_rmap(pages[i], vmf->vma, haddr, false); - mem_cgroup_commit_charge(pages[i], memcg, false, false); - lru_cache_add_active_or_unevictable(pages[i], vma); - vmf->pte = pte_offset_map(&_pmd, haddr); - VM_BUG_ON(!pte_none(*vmf->pte)); - set_pte_at(vma->vm_mm, haddr, vmf->pte, entry); - pte_unmap(vmf->pte); - } - kfree(pages); - - smp_wmb(); /* make pte visible before pmd */ - pmd_populate(vma->vm_mm, vmf->pmd, pgtable); - page_remove_rmap(page, true); - spin_unlock(vmf->ptl); - - /* - * No need to double call mmu_notifier->invalidate_range() callback as - * the above pmdp_huge_clear_flush_notify() did already call it. 
- */ - mmu_notifier_invalidate_range_only_end(&range); - - ret |= VM_FAULT_WRITE; - put_page(page); - -out: - return ret; - -out_free_pages: - spin_unlock(vmf->ptl); - mmu_notifier_invalidate_range_end(&range); - for (i = 0; i < HPAGE_PMD_NR; i++) { - memcg = (void *)page_private(pages[i]); - set_page_private(pages[i], 0); - mem_cgroup_cancel_charge(pages[i], memcg, false); - put_page(pages[i]); - } - kfree(pages); - goto out; -} - vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd) { struct vm_area_struct *vma = vmf->vma; - struct page *page = NULL, *new_page; - struct mem_cgroup *memcg; + struct page *page; unsigned long haddr = vmf->address & HPAGE_PMD_MASK; - struct mmu_notifier_range range; - gfp_t huge_gfp; /* for allocation and charge */ - vm_fault_t ret = 0; vmf->ptl = pmd_lockptr(vma->vm_mm, vmf->pmd); VM_BUG_ON_VMA(!vma->anon_vma, vma); + if (is_huge_zero_pmd(orig_pmd)) - goto alloc; + goto fallback; + spin_lock(vmf->ptl); - if (unlikely(!pmd_same(*vmf->pmd, orig_pmd))) - goto out_unlock; + + if (unlikely(!pmd_same(*vmf->pmd, orig_pmd))) { + spin_unlock(vmf->ptl); + return 0; + } page = pmd_page(orig_pmd); VM_BUG_ON_PAGE(!PageCompound(page) || !PageHead(page), page); - /* - * We can only reuse the page if nobody else maps the huge page or it's - * part. - */ + + /* Lock page for reuse_swap_page() */ if (!trylock_page(page)) { get_page(page); spin_unlock(vmf->ptl); lock_page(page); spin_lock(vmf->ptl); if (unlikely(!pmd_same(*vmf->pmd, orig_pmd))) { + spin_unlock(vmf->ptl); unlock_page(page); put_page(page); - goto out_unlock; + return 0; } put_page(page); } + + /* + * We can only reuse the page if nobody else maps the huge page or it's + * part. 
+ */ if (reuse_swap_page(page, NULL)) { pmd_t entry; entry = pmd_mkyoung(orig_pmd); entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); - if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1)) + if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1)) update_mmu_cache_pmd(vma, vmf->address, vmf->pmd); - ret |= VM_FAULT_WRITE; unlock_page(page); - goto out_unlock; - } - unlock_page(page); - get_page(page); - spin_unlock(vmf->ptl); -alloc: - if (__transparent_hugepage_enabled(vma) && - !transparent_hugepage_debug_cow()) { - huge_gfp = alloc_hugepage_direct_gfpmask(vma); - new_page = alloc_hugepage_vma(huge_gfp, vma, haddr, HPAGE_PMD_ORDER); - } else - new_page = NULL; - - if (likely(new_page)) { - prep_transhuge_page(new_page); - } else { - if (!page) { - split_huge_pmd(vma, vmf->pmd, vmf->address); - ret |= VM_FAULT_FALLBACK; - } else { - ret = do_huge_pmd_wp_page_fallback(vmf, orig_pmd, page); - if (ret & VM_FAULT_OOM) { - split_huge_pmd(vma, vmf->pmd, vmf->address); - ret |= VM_FAULT_FALLBACK; - } - put_page(page); - } - count_vm_event(THP_FAULT_FALLBACK); - goto out; - } - - if (unlikely(mem_cgroup_try_charge_delay(new_page, vma->vm_mm, - huge_gfp, &memcg, true))) { - put_page(new_page); - split_huge_pmd(vma, vmf->pmd, vmf->address); - if (page) - put_page(page); - ret |= VM_FAULT_FALLBACK; - count_vm_event(THP_FAULT_FALLBACK); - count_vm_event(THP_FAULT_FALLBACK_CHARGE); - goto out; - } - - count_vm_event(THP_FAULT_ALLOC); - count_memcg_events(memcg, THP_FAULT_ALLOC, 1); - - if (!page) - clear_huge_page(new_page, vmf->address, HPAGE_PMD_NR); - else - copy_user_huge_page(new_page, page, vmf->address, - vma, HPAGE_PMD_NR); - __SetPageUptodate(new_page); - - mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, - haddr, haddr + HPAGE_PMD_SIZE); - mmu_notifier_invalidate_range_start(&range); - - spin_lock(vmf->ptl); - if (page) - put_page(page); - if (unlikely(!pmd_same(*vmf->pmd, orig_pmd))) { spin_unlock(vmf->ptl); - 
mem_cgroup_cancel_charge(new_page, memcg, true); - put_page(new_page); - goto out_mn; - } else { - pmd_t entry; - entry = mk_huge_pmd(new_page, vma->vm_page_prot); - entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); - pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd); - page_add_new_anon_rmap(new_page, vma, haddr, true); - mem_cgroup_commit_charge(new_page, memcg, false, true); - lru_cache_add_active_or_unevictable(new_page, vma); - set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry); - update_mmu_cache_pmd(vma, vmf->address, vmf->pmd); - if (!page) { - add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR); - } else { - VM_BUG_ON_PAGE(!PageHead(page), page); - page_remove_rmap(page, true); - put_page(page); - } - ret |= VM_FAULT_WRITE; + return VM_FAULT_WRITE; } + + unlock_page(page); spin_unlock(vmf->ptl); -out_mn: - /* - * No need to double call mmu_notifier->invalidate_range() callback as - * the above pmdp_huge_clear_flush_notify() did already call it. - */ - mmu_notifier_invalidate_range_only_end(&range); -out: - return ret; -out_unlock: - spin_unlock(vmf->ptl); - return ret; +fallback: + __split_huge_pmd(vma, vmf->pmd, vmf->address, false, NULL); + return VM_FAULT_FALLBACK; } /* _ ^ permalink raw reply [flat|nested] 349+ messages in thread
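The net effect of the patch: on a write fault against an anon THP, the kernel now either reuses the page in place (when `reuse_swap_page()` shows it is exclusively ours) or splits the PMD and retries the fault at 4k granularity; the removed fallback that copied into 512 freshly allocated 4k pages is gone. A toy model of the new decision, with `can_reuse()` as a hypothetical stand-in for the `reuse_swap_page()` check (real reuse detection is refcount-based and more subtle):

```c
#include <assert.h>

#define VM_FAULT_WRITE    0x1
#define VM_FAULT_FALLBACK 0x2

/* Hypothetical stand-in for reuse_swap_page(): the THP may be reused
 * only when ours is the sole mapping and there is no extra pin. */
static int can_reuse(int mapcount, int extra_pins)
{
    return mapcount == 1 && extra_pins == 0;
}

/* New anon-THP CoW policy: reuse in place or fall back to 4k faults.
 * No fresh huge page is ever allocated at CoW time anymore. */
static int huge_pmd_wp(int mapcount, int extra_pins)
{
    if (can_reuse(mapcount, extra_pins))
        return VM_FAULT_WRITE;   /* mark the PMD writable, done */
    return VM_FAULT_FALLBACK;    /* caller splits the PMD and retries */
}
```

This matches the file-THP behavior and is why the whole `do_huge_pmd_wp_page_fallback()` body could be deleted.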
* [patch 068/131] khugepaged: introduce 'max_ptes_shared' tunable
  2020-06-03 22:55 incoming Andrew Morton
  ` (66 preceding siblings ...)
  2020-06-03 23:00 ` [patch 067/131] thp: change CoW semantics for anon-THP Andrew Morton
@ 2020-06-03 23:00 ` Andrew Morton
  2020-06-03 23:00 ` [patch 069/131] hugetlbfs: add arch_hugetlb_valid_size Andrew Morton
  ` (63 subsequent siblings)
  131 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2020-06-03 23:00 UTC (permalink / raw)
To: aarcange, akpm, colin.king, jhubbard, kirill.shutemov, linux-mm, mike.kravetz, mm-commits, rcampbell, torvalds, william.kucharski, yang.shi, ziy

From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: khugepaged: introduce 'max_ptes_shared' tunable

'max_ptes_shared' specifies how many pages can be shared across multiple processes. Exceeding the number would block the collapse::

  /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_shared

A higher value may increase the memory footprint for some workloads. By default, at least half of the pages have to be unshared.

[colin.king@canonical.com: fix several spelling mistakes]
Link: http://lkml.kernel.org/r/20200420084241.65433-1-colin.king@canonical.com
Link: http://lkml.kernel.org/r/20200416160026.16538-9-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A.
Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: William Kucharski <william.kucharski@oracle.com> Reviewed-by: Zi Yan <ziy@nvidia.com> Tested-by: Zi Yan <ziy@nvidia.com> Acked-by: Yang Shi <yang.shi@linux.alibaba.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Ralph Campbell <rcampbell@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- Documentation/admin-guide/mm/transhuge.rst | 7 + include/trace/events/huge_memory.h | 3 mm/khugepaged.c | 52 ++++++++++- tools/testing/selftests/vm/khugepaged.c | 83 +++++++++++++++++++ 4 files changed, 140 insertions(+), 5 deletions(-) --- a/Documentation/admin-guide/mm/transhuge.rst~khugepaged-introduce-max_ptes_shared-tunable +++ a/Documentation/admin-guide/mm/transhuge.rst @@ -220,6 +220,13 @@ memory. A lower value can prevent THPs f collapsed, resulting fewer pages being collapsed into THPs, and lower memory access performance. +``max_ptes_shared`` specifies how many pages can be shared across multiple +processes. Exceeding the number would block the collapse:: + + /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_shared + +A higher value may increase memory footprint for some workloads. 
+ Boot parameter ============== --- a/include/trace/events/huge_memory.h~khugepaged-introduce-max_ptes_shared-tunable +++ a/include/trace/events/huge_memory.h @@ -12,6 +12,8 @@ EM( SCAN_SUCCEED, "succeeded") \ EM( SCAN_PMD_NULL, "pmd_null") \ EM( SCAN_EXCEED_NONE_PTE, "exceed_none_pte") \ + EM( SCAN_EXCEED_SWAP_PTE, "exceed_swap_pte") \ + EM( SCAN_EXCEED_SHARED_PTE, "exceed_shared_pte") \ EM( SCAN_PTE_NON_PRESENT, "pte_non_present") \ EM( SCAN_PTE_UFFD_WP, "pte_uffd_wp") \ EM( SCAN_PAGE_RO, "no_writable_page") \ @@ -31,7 +33,6 @@ EM( SCAN_DEL_PAGE_LRU, "could_not_delete_page_from_lru")\ EM( SCAN_ALLOC_HUGE_PAGE_FAIL, "alloc_huge_page_failed") \ EM( SCAN_CGROUP_CHARGE_FAIL, "ccgroup_charge_failed") \ - EM( SCAN_EXCEED_SWAP_PTE, "exceed_swap_pte") \ EM( SCAN_TRUNCATED, "truncated") \ EMe(SCAN_PAGE_HAS_PRIVATE, "page_has_private") \ --- a/mm/khugepaged.c~khugepaged-introduce-max_ptes_shared-tunable +++ a/mm/khugepaged.c @@ -28,6 +28,8 @@ enum scan_result { SCAN_SUCCEED, SCAN_PMD_NULL, SCAN_EXCEED_NONE_PTE, + SCAN_EXCEED_SWAP_PTE, + SCAN_EXCEED_SHARED_PTE, SCAN_PTE_NON_PRESENT, SCAN_PTE_UFFD_WP, SCAN_PAGE_RO, @@ -47,7 +49,6 @@ enum scan_result { SCAN_DEL_PAGE_LRU, SCAN_ALLOC_HUGE_PAGE_FAIL, SCAN_CGROUP_CHARGE_FAIL, - SCAN_EXCEED_SWAP_PTE, SCAN_TRUNCATED, SCAN_PAGE_HAS_PRIVATE, }; @@ -72,6 +73,7 @@ static DECLARE_WAIT_QUEUE_HEAD(khugepage */ static unsigned int khugepaged_max_ptes_none __read_mostly; static unsigned int khugepaged_max_ptes_swap __read_mostly; +static unsigned int khugepaged_max_ptes_shared __read_mostly; #define MM_SLOTS_HASH_BITS 10 static __read_mostly DEFINE_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS); @@ -291,15 +293,43 @@ static struct kobj_attribute khugepaged_ __ATTR(max_ptes_swap, 0644, khugepaged_max_ptes_swap_show, khugepaged_max_ptes_swap_store); +static ssize_t khugepaged_max_ptes_shared_show(struct kobject *kobj, + struct kobj_attribute *attr, + char *buf) +{ + return sprintf(buf, "%u\n", khugepaged_max_ptes_shared); +} + +static ssize_t 
khugepaged_max_ptes_shared_store(struct kobject *kobj, + struct kobj_attribute *attr, + const char *buf, size_t count) +{ + int err; + unsigned long max_ptes_shared; + + err = kstrtoul(buf, 10, &max_ptes_shared); + if (err || max_ptes_shared > HPAGE_PMD_NR-1) + return -EINVAL; + + khugepaged_max_ptes_shared = max_ptes_shared; + + return count; +} + +static struct kobj_attribute khugepaged_max_ptes_shared_attr = + __ATTR(max_ptes_shared, 0644, khugepaged_max_ptes_shared_show, + khugepaged_max_ptes_shared_store); + static struct attribute *khugepaged_attr[] = { &khugepaged_defrag_attr.attr, &khugepaged_max_ptes_none_attr.attr, + &khugepaged_max_ptes_swap_attr.attr, + &khugepaged_max_ptes_shared_attr.attr, &pages_to_scan_attr.attr, &pages_collapsed_attr.attr, &full_scans_attr.attr, &scan_sleep_millisecs_attr.attr, &alloc_sleep_millisecs_attr.attr, - &khugepaged_max_ptes_swap_attr.attr, NULL, }; @@ -359,6 +389,7 @@ int __init khugepaged_init(void) khugepaged_pages_to_scan = HPAGE_PMD_NR * 8; khugepaged_max_ptes_none = HPAGE_PMD_NR - 1; khugepaged_max_ptes_swap = HPAGE_PMD_NR / 8; + khugepaged_max_ptes_shared = HPAGE_PMD_NR / 2; return 0; } @@ -557,7 +588,7 @@ static int __collapse_huge_page_isolate( { struct page *page = NULL; pte_t *_pte; - int none_or_zero = 0, result = 0, referenced = 0; + int none_or_zero = 0, shared = 0, result = 0, referenced = 0; bool writable = false; for (_pte = pte; _pte < pte+HPAGE_PMD_NR; @@ -585,6 +616,12 @@ static int __collapse_huge_page_isolate( VM_BUG_ON_PAGE(!PageAnon(page), page); + if (page_mapcount(page) > 1 && + ++shared > khugepaged_max_ptes_shared) { + result = SCAN_EXCEED_SHARED_PTE; + goto out; + } + if (PageCompound(page)) { struct page *p; page = compound_head(page); @@ -1168,7 +1205,8 @@ static int khugepaged_scan_pmd(struct mm { pmd_t *pmd; pte_t *pte, *_pte; - int ret = 0, none_or_zero = 0, result = 0, referenced = 0; + int ret = 0, result = 0, referenced = 0; + int none_or_zero = 0, shared = 0; struct page *page = NULL; 
unsigned long _address; spinlock_t *ptl; @@ -1240,6 +1278,12 @@ static int khugepaged_scan_pmd(struct mm goto out_unmap; } + if (page_mapcount(page) > 1 && + ++shared > khugepaged_max_ptes_shared) { + result = SCAN_EXCEED_SHARED_PTE; + goto out_unmap; + } + page = compound_head(page); /* --- a/tools/testing/selftests/vm/khugepaged.c~khugepaged-introduce-max_ptes_shared-tunable +++ a/tools/testing/selftests/vm/khugepaged.c @@ -78,6 +78,7 @@ struct khugepaged_settings { unsigned int scan_sleep_millisecs; unsigned int max_ptes_none; unsigned int max_ptes_swap; + unsigned int max_ptes_shared; unsigned long pages_to_scan; }; @@ -277,6 +278,7 @@ static void write_settings(struct settin khugepaged->scan_sleep_millisecs); write_num("khugepaged/max_ptes_none", khugepaged->max_ptes_none); write_num("khugepaged/max_ptes_swap", khugepaged->max_ptes_swap); + write_num("khugepaged/max_ptes_shared", khugepaged->max_ptes_shared); write_num("khugepaged/pages_to_scan", khugepaged->pages_to_scan); } @@ -313,6 +315,7 @@ static void save_settings(void) read_num("khugepaged/scan_sleep_millisecs"), .max_ptes_none = read_num("khugepaged/max_ptes_none"), .max_ptes_swap = read_num("khugepaged/max_ptes_swap"), + .max_ptes_shared = read_num("khugepaged/max_ptes_shared"), .pages_to_scan = read_num("khugepaged/pages_to_scan"), }; success("OK"); @@ -896,12 +899,90 @@ static void collapse_fork_compound(void) fail("Fail"); fill_memory(p, 0, page_size); + write_num("khugepaged/max_ptes_shared", hpage_pmd_nr - 1); if (wait_for_scan("Collapse PTE table full of compound pages in child", p)) fail("Timeout"); else if (check_huge(p)) success("OK"); else fail("Fail"); + write_num("khugepaged/max_ptes_shared", + default_settings.khugepaged.max_ptes_shared); + + validate_memory(p, 0, hpage_pmd_size); + munmap(p, hpage_pmd_size); + exit(exit_status); + } + + wait(&wstatus); + exit_status += WEXITSTATUS(wstatus); + + printf("Check if parent still has huge page..."); + if (check_huge(p)) + success("OK"); + 
else + fail("Fail"); + validate_memory(p, 0, hpage_pmd_size); + munmap(p, hpage_pmd_size); +} + +static void collapse_max_ptes_shared() +{ + int max_ptes_shared = read_num("khugepaged/max_ptes_shared"); + int wstatus; + void *p; + + p = alloc_mapping(); + + printf("Allocate huge page..."); + madvise(p, hpage_pmd_size, MADV_HUGEPAGE); + fill_memory(p, 0, hpage_pmd_size); + if (check_huge(p)) + success("OK"); + else + fail("Fail"); + + printf("Share huge page over fork()..."); + if (!fork()) { + /* Do not touch settings on child exit */ + skip_settings_restore = true; + exit_status = 0; + + if (check_huge(p)) + success("OK"); + else + fail("Fail"); + + printf("Trigger CoW on page %d of %d...", + hpage_pmd_nr - max_ptes_shared - 1, hpage_pmd_nr); + fill_memory(p, 0, (hpage_pmd_nr - max_ptes_shared - 1) * page_size); + if (!check_huge(p)) + success("OK"); + else + fail("Fail"); + + if (wait_for_scan("Do not collapse with max_ptes_shared exceeded", p)) + fail("Timeout"); + else if (!check_huge(p)) + success("OK"); + else + fail("Fail"); + + printf("Trigger CoW on page %d of %d...", + hpage_pmd_nr - max_ptes_shared, hpage_pmd_nr); + fill_memory(p, 0, (hpage_pmd_nr - max_ptes_shared) * page_size); + if (!check_huge(p)) + success("OK"); + else + fail("Fail"); + + + if (wait_for_scan("Collapse with max_ptes_shared PTEs shared", p)) + fail("Timeout"); + else if (check_huge(p)) + success("OK"); + else + fail("Fail"); validate_memory(p, 0, hpage_pmd_size); munmap(p, hpage_pmd_size); @@ -930,6 +1011,7 @@ int main(void) default_settings.khugepaged.max_ptes_none = hpage_pmd_nr - 1; default_settings.khugepaged.max_ptes_swap = hpage_pmd_nr / 8; + default_settings.khugepaged.max_ptes_shared = hpage_pmd_nr / 2; default_settings.khugepaged.pages_to_scan = hpage_pmd_nr * 8; save_settings(); @@ -947,6 +1029,7 @@ int main(void) collapse_compound_extreme(); collapse_fork(); collapse_fork_compound(); + collapse_max_ptes_shared(); restore_settings(0); } _ ^ permalink raw reply [flat|nested] 
349+ messages in thread
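The scan-side check added by this patch is simple: walk the PTEs of the candidate range, count those whose page has `page_mapcount() > 1`, and bail out with `SCAN_EXCEED_SHARED_PTE` once the count passes the tunable. A userspace sketch of that counting logic (mapcounts passed in as a plain array purely for illustration; the kernel walks PTEs under the page table lock):

```c
#include <assert.h>

/* Hypothetical model of the new khugepaged check: refuse to collapse
 * once more than max_ptes_shared of the n PTEs map shared pages. */
static int scan_allows_collapse(const int *mapcount, int n,
                                int max_ptes_shared)
{
    int shared = 0;

    for (int i = 0; i < n; i++) {
        if (mapcount[i] > 1 && ++shared > max_ptes_shared)
            return 0; /* result = SCAN_EXCEED_SHARED_PTE */
    }
    return 1; /* range still eligible for collapse */
}
```

With the default of `HPAGE_PMD_NR / 2` (256 on x86), a range stays collapsible only while at least half of its pages are exclusive to the process, which is exactly what the fork-based selftest above exercises by triggering CoW on a varying number of pages.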
* [patch 069/131] hugetlbfs: add arch_hugetlb_valid_size
  2020-06-03 22:55 incoming Andrew Morton
  ` (67 preceding siblings ...)
  2020-06-03 23:00 ` [patch 068/131] khugepaged: introduce 'max_ptes_shared' tunable Andrew Morton
@ 2020-06-03 23:00 ` Andrew Morton
  2020-06-03 23:00 ` [patch 070/131] hugetlbfs: move hugepagesz= parsing to arch independent code Andrew Morton
  ` (62 subsequent siblings)
  131 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2020-06-03 23:00 UTC (permalink / raw)
To: akpm, almasrymina, anders.roxell, aneesh.kumar, aou, benh, borntraeger, cai, catalin.marinas, christophe.leroy, corbet, dave.hansen, davem, gerald.schaefer, gor, heiko.carstens, linux-mm, longpeng2, mike.kravetz, mingo, mm-commits, nitesh, palmer, paul.walmsley, paulus, peterx, rdunlap, sfr, tglx, torvalds, will

From: Mike Kravetz <mike.kravetz@oracle.com>
Subject: hugetlbfs: add arch_hugetlb_valid_size

Patch series "Clean up hugetlb boot command line processing", v4.

Longpeng(Mike) reported a weird message from hugetlb command line processing and proposed a solution [1]. While the proposed patch does address the specific issue, there are other related issues in command line processing. As hugetlbfs evolved, updates to command line processing have been made to meet immediate needs and not necessarily in a coordinated manner. The result is that some processing is done in arch specific code, some is done in arch independent code and coordination is problematic. Semantics can vary between architectures.

The patch series does the following:
- Define arch specific arch_hugetlb_valid_size routine used to validate passed huge page sizes.
- Move hugepagesz= command line parsing out of arch specific code and into an arch independent routine.
- Clean up command line processing to follow desired semantics and document those semantics.
[1] https://lore.kernel.org/linux-mm/20200305033014.1152-1-longpeng2@huawei.com

This patch (of 3):

The architecture independent routine hugetlb_default_setup sets up the default huge page size. It has no way to verify if the passed value is valid, so it accepts it and attempts to validate it at a later time. This requires undocumented cooperation between the arch specific and arch independent code.

For architectures that support more than one huge page size, provide a routine arch_hugetlb_valid_size to validate a huge page size. hugetlb_default_setup can use this to validate passed values.

arch_hugetlb_valid_size will also be used in a subsequent patch to move processing of "hugepagesz=" from arch specific code to a common routine in arch independent code.

Link: http://lkml.kernel.org/r/20200428205614.246260-1-mike.kravetz@oracle.com
Link: http://lkml.kernel.org/r/20200428205614.246260-2-mike.kravetz@oracle.com
Link: http://lkml.kernel.org/r/20200417185049.275845-1-mike.kravetz@oracle.com
Link: http://lkml.kernel.org/r/20200417185049.275845-2-mike.kravetz@oracle.com
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390]
Acked-by: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: David S.
Miller <davem@davemloft.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Longpeng <longpeng2@huawei.com> Cc: Christophe Leroy <christophe.leroy@c-s.fr> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Mina Almasry <almasrymina@google.com> Cc: Peter Xu <peterx@redhat.com> Cc: Nitesh Narayan Lal <nitesh@redhat.com> Cc: Anders Roxell <anders.roxell@linaro.org> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> Cc: Qian Cai <cai@lca.pw> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/arm64/mm/hugetlbpage.c | 17 +++++++++++++---- arch/powerpc/mm/hugetlbpage.c | 20 +++++++++++++------- arch/riscv/mm/hugetlbpage.c | 26 +++++++++++++++++--------- arch/s390/mm/hugetlbpage.c | 16 ++++++++++++---- arch/sparc/mm/init_64.c | 24 ++++++++++++++++-------- arch/x86/mm/hugetlbpage.c | 17 +++++++++++++---- include/linux/hugetlb.h | 1 + mm/hugetlb.c | 21 ++++++++++++++++++--- 8 files changed, 103 insertions(+), 39 deletions(-) --- a/arch/arm64/mm/hugetlbpage.c~hugetlbfs-add-arch_hugetlb_valid_size +++ a/arch/arm64/mm/hugetlbpage.c @@ -464,17 +464,26 @@ static int __init hugetlbpage_init(void) } arch_initcall(hugetlbpage_init); -static __init int setup_hugepagesz(char *opt) +bool __init arch_hugetlb_valid_size(unsigned long size) { - unsigned long ps = memparse(opt, &opt); - - switch (ps) { + switch (size) { #ifdef CONFIG_ARM64_4K_PAGES case PUD_SIZE: #endif case CONT_PMD_SIZE: case PMD_SIZE: case CONT_PTE_SIZE: + return true; + } + + return false; +} + +static __init int setup_hugepagesz(char *opt) +{ + unsigned long ps = memparse(opt, &opt); + + if (arch_hugetlb_valid_size(ps)) { add_huge_page_size(ps); return 1; } --- a/arch/powerpc/mm/hugetlbpage.c~hugetlbfs-add-arch_hugetlb_valid_size +++ a/arch/powerpc/mm/hugetlbpage.c @@ -558,7 +558,7 @@ unsigned long vma_mmu_pagesize(struct vm return 
vma_kernel_pagesize(vma); } -static int __init add_huge_page_size(unsigned long long size) +bool __init arch_hugetlb_valid_size(unsigned long size) { int shift = __ffs(size); int mmu_psize; @@ -566,20 +566,26 @@ static int __init add_huge_page_size(uns /* Check that it is a page size supported by the hardware and * that it fits within pagetable and slice limits. */ if (size <= PAGE_SIZE || !is_power_of_2(size)) - return -EINVAL; + return false; mmu_psize = check_and_get_huge_psize(shift); if (mmu_psize < 0) - return -EINVAL; + return false; BUG_ON(mmu_psize_defs[mmu_psize].shift != shift); - /* Return if huge page size has already been setup */ - if (size_to_hstate(size)) - return 0; + return true; +} - hugetlb_add_hstate(shift - PAGE_SHIFT); +static int __init add_huge_page_size(unsigned long long size) +{ + int shift = __ffs(size); + + if (!arch_hugetlb_valid_size((unsigned long)size)) + return -EINVAL; + if (!size_to_hstate(size)) + hugetlb_add_hstate(shift - PAGE_SHIFT); return 0; } --- a/arch/riscv/mm/hugetlbpage.c~hugetlbfs-add-arch_hugetlb_valid_size +++ a/arch/riscv/mm/hugetlbpage.c @@ -12,21 +12,29 @@ int pmd_huge(pmd_t pmd) return pmd_leaf(pmd); } +bool __init arch_hugetlb_valid_size(unsigned long size) +{ + if (size == HPAGE_SIZE) + return true; + else if (IS_ENABLED(CONFIG_64BIT) && size == PUD_SIZE) + return true; + else + return false; +} + static __init int setup_hugepagesz(char *opt) { unsigned long ps = memparse(opt, &opt); - if (ps == HPAGE_SIZE) { - hugetlb_add_hstate(HPAGE_SHIFT - PAGE_SHIFT); - } else if (IS_ENABLED(CONFIG_64BIT) && ps == PUD_SIZE) { - hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT); - } else { - hugetlb_bad_size(); - pr_err("hugepagesz: Unsupported page size %lu M\n", ps >> 20); - return 0; + if (arch_hugetlb_valid_size(ps)) { + hugetlb_add_hstate(ilog2(ps) - PAGE_SHIFT); + return 1; } - return 1; + hugetlb_bad_size(); + pr_err("hugepagesz: Unsupported page size %lu M\n", ps >> 20); + return 0; + } __setup("hugepagesz=", 
setup_hugepagesz); --- a/arch/s390/mm/hugetlbpage.c~hugetlbfs-add-arch_hugetlb_valid_size +++ a/arch/s390/mm/hugetlbpage.c @@ -254,16 +254,24 @@ follow_huge_pud(struct mm_struct *mm, un return pud_page(*pud) + ((address & ~PUD_MASK) >> PAGE_SHIFT); } +bool __init arch_hugetlb_valid_size(unsigned long size) +{ + if (MACHINE_HAS_EDAT1 && size == PMD_SIZE) + return true; + else if (MACHINE_HAS_EDAT2 && size == PUD_SIZE) + return true; + else + return false; +} + static __init int setup_hugepagesz(char *opt) { unsigned long size; char *string = opt; size = memparse(opt, &opt); - if (MACHINE_HAS_EDAT1 && size == PMD_SIZE) { - hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT); - } else if (MACHINE_HAS_EDAT2 && size == PUD_SIZE) { - hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT); + if (arch_hugetlb_valid_size(size)) { + hugetlb_add_hstate(ilog2(size) - PAGE_SHIFT); } else { hugetlb_bad_size(); pr_err("hugepagesz= specifies an unsupported page size %s\n", --- a/arch/sparc/mm/init_64.c~hugetlbfs-add-arch_hugetlb_valid_size +++ a/arch/sparc/mm/init_64.c @@ -360,16 +360,11 @@ static void __init pud_huge_patch(void) __asm__ __volatile__("flush %0" : : "r" (addr)); } -static int __init setup_hugepagesz(char *string) +bool __init arch_hugetlb_valid_size(unsigned long size) { - unsigned long long hugepage_size; - unsigned int hugepage_shift; + unsigned int hugepage_shift = ilog2(size); unsigned short hv_pgsz_idx; unsigned int hv_pgsz_mask; - int rc = 0; - - hugepage_size = memparse(string, &string); - hugepage_shift = ilog2(hugepage_size); switch (hugepage_shift) { case HPAGE_16GB_SHIFT: @@ -397,7 +392,20 @@ static int __init setup_hugepagesz(char hv_pgsz_mask = 0; } - if ((hv_pgsz_mask & cpu_pgsz_mask) == 0U) { + if ((hv_pgsz_mask & cpu_pgsz_mask) == 0U) + return false; + + return true; +} + +static int __init setup_hugepagesz(char *string) +{ + unsigned long long hugepage_size; + int rc = 0; + + hugepage_size = memparse(string, &string); + + if (!arch_hugetlb_valid_size((unsigned 
long)hugepage_size)) { hugetlb_bad_size(); pr_err("hugepagesz=%llu not supported by MMU.\n", hugepage_size); --- a/arch/x86/mm/hugetlbpage.c~hugetlbfs-add-arch_hugetlb_valid_size +++ a/arch/x86/mm/hugetlbpage.c @@ -181,13 +181,22 @@ get_unmapped_area: #endif /* CONFIG_HUGETLB_PAGE */ #ifdef CONFIG_X86_64 +bool __init arch_hugetlb_valid_size(unsigned long size) +{ + if (size == PMD_SIZE) + return true; + else if (size == PUD_SIZE && boot_cpu_has(X86_FEATURE_GBPAGES)) + return true; + else + return false; +} + static __init int setup_hugepagesz(char *opt) { unsigned long ps = memparse(opt, &opt); - if (ps == PMD_SIZE) { - hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT); - } else if (ps == PUD_SIZE && boot_cpu_has(X86_FEATURE_GBPAGES)) { - hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT); + + if (arch_hugetlb_valid_size(ps)) { + hugetlb_add_hstate(ilog2(ps) - PAGE_SHIFT); } else { hugetlb_bad_size(); printk(KERN_ERR "hugepagesz: Unsupported page size %lu M\n", --- a/include/linux/hugetlb.h~hugetlbfs-add-arch_hugetlb_valid_size +++ a/include/linux/hugetlb.h @@ -521,6 +521,7 @@ int __init alloc_bootmem_huge_page(struc void __init hugetlb_bad_size(void); void __init hugetlb_add_hstate(unsigned order); +bool __init arch_hugetlb_valid_size(unsigned long size); struct hstate *size_to_hstate(unsigned long size); #ifndef HUGE_MAX_HSTATE --- a/mm/hugetlb.c~hugetlbfs-add-arch_hugetlb_valid_size +++ a/mm/hugetlb.c @@ -3256,6 +3256,12 @@ static int __init hugetlb_init(void) } subsys_initcall(hugetlb_init); +/* Overwritten by architectures with more huge page sizes */ +bool __init __attribute((weak)) arch_hugetlb_valid_size(unsigned long size) +{ + return size == HPAGE_SIZE; +} + /* Should be called on processing a hugepagesz=... 
option */ void __init hugetlb_bad_size(void) { @@ -3331,12 +3337,21 @@ static int __init hugetlb_nrpages_setup( } __setup("hugepages=", hugetlb_nrpages_setup); -static int __init hugetlb_default_setup(char *s) +static int __init default_hugepagesz_setup(char *s) { - default_hstate_size = memparse(s, &s); + unsigned long size; + + size = (unsigned long)memparse(s, NULL); + + if (!arch_hugetlb_valid_size(size)) { + pr_err("HugeTLB: unsupported default_hugepagesz %s\n", s); + return 0; + } + + default_hstate_size = size; return 1; } -__setup("default_hugepagesz=", hugetlb_default_setup); +__setup("default_hugepagesz=", default_hugepagesz_setup); static unsigned int cpuset_mems_nr(unsigned int *array) { _ ^ permalink raw reply [flat|nested] 349+ messages in thread
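[Editorial note, not part of the patch:] The split this patch introduces — a weak generic arch_hugetlb_valid_size() in mm/hugetlb.c that architectures may override, with the setup path deferring validity checks to that hook — can be sketched outside the kernel. The following is a hypothetical Python model (the names and the simplified x86_64 rule are illustrative; none of this is kernel code):

```python
PAGE_SHIFT = 12
PAGE_SIZE = 1 << PAGE_SHIFT
HPAGE_SIZE = 1 << 21          # assume a 2M default huge page, as on x86_64

def generic_valid_size(size):
    # models the weak default in mm/hugetlb.c: only HPAGE_SIZE is valid
    return size == HPAGE_SIZE

def x86_64_valid_size(size, has_gbpages=True):
    # models an arch override: PMD_SIZE always, PUD_SIZE only with gbpages
    PMD_SIZE, PUD_SIZE = 1 << 21, 1 << 30
    return size == PMD_SIZE or (size == PUD_SIZE and has_gbpages)

hstates = {}  # size -> order, standing in for the kernel's hstates[] array

def setup_hugepagesz(size, valid=generic_valid_size):
    """Model of a setup routine that defers validity to the arch hook."""
    if not valid(size):
        return False          # the kernel would report a bad size here
    if size not in hstates:   # "return if huge page size already set up"
        hstates[size] = size.bit_length() - 1 - PAGE_SHIFT
    return True
```

The point of the refactor is visible in the model: the policy ("which sizes exist on this machine") lives in one overridable predicate, while the registration mechanics are shared.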
* [patch 070/131] hugetlbfs: move hugepagesz= parsing to arch independent code 2020-06-03 22:55 incoming Andrew Morton ` (68 preceding siblings ...) 2020-06-03 23:00 ` [patch 069/131] hugetlbfs: add arch_hugetlb_valid_size Andrew Morton @ 2020-06-03 23:00 ` Andrew Morton 2020-06-03 23:00 ` [patch 071/131] hugetlbfs: remove hugetlb_add_hstate() warning for existing hstate Andrew Morton ` (61 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:00 UTC (permalink / raw) To: akpm, almasrymina, anders.roxell, aneesh.kumar, aou, benh, borntraeger, cai, catalin.marinas, christophe.leroy, corbet, dave.hansen, davem, gerald.schaefer, gor, heiko.carstens, linux-mm, longpeng2, mike.kravetz, mingo, mm-commits, nitesh, palmer, paul.walmsley, paulus, peterx, rdunlap, sandipan, sfr, tglx, torvalds, will From: Mike Kravetz <mike.kravetz@oracle.com> Subject: hugetlbfs: move hugepagesz= parsing to arch independent code Now that architectures provide arch_hugetlb_valid_size(), parsing of "hugepagesz=" can be done in architecture independent code. Create a single routine to handle hugepagesz= parsing and remove all arch specific routines. We can also remove the interface hugetlb_bad_size() as this is no longer used outside arch independent code. This also provides consistent behavior of hugetlbfs command line options. The hugepagesz= option should only be specified once for a specific size, but some architectures allow multiple instances. This appears to be more of an oversight when code was added by some architectures to set up ALL huge page sizes. 
Link: http://lkml.kernel.org/r/20200417185049.275845-3-mike.kravetz@oracle.com Link: http://lkml.kernel.org/r/20200428205614.246260-3-mike.kravetz@oracle.com Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com> Acked-by: Mina Almasry <almasrymina@google.com> Reviewed-by: Peter Xu <peterx@redhat.com> Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390] Acked-by: Will Deacon <will@kernel.org> Tested-by: Sandipan Das <sandipan@linux.ibm.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Christophe Leroy <christophe.leroy@c-s.fr> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: David S. Miller <davem@davemloft.net> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Longpeng <longpeng2@huawei.com> Cc: Nitesh Narayan Lal <nitesh@redhat.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Anders Roxell <anders.roxell@linaro.org> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> Cc: Qian Cai <cai@lca.pw> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/arm64/mm/hugetlbpage.c | 15 --------------- arch/powerpc/mm/hugetlbpage.c | 15 --------------- arch/riscv/mm/hugetlbpage.c | 16 ---------------- arch/s390/mm/hugetlbpage.c | 18 ------------------ arch/sparc/mm/init_64.c | 22 ---------------------- arch/x86/mm/hugetlbpage.c | 16 ---------------- include/linux/hugetlb.h | 1 - mm/hugetlb.c | 23 +++++++++++++++++------ 8 files changed, 17 insertions(+), 109 deletions(-) --- a/arch/arm64/mm/hugetlbpage.c~hugetlbfs-move-hugepagesz=-parsing-to-arch-independent-code +++ a/arch/arm64/mm/hugetlbpage.c 
@@ -478,18 +478,3 @@ bool __init arch_hugetlb_valid_size(unsi return false; } - -static __init int setup_hugepagesz(char *opt) -{ - unsigned long ps = memparse(opt, &opt); - - if (arch_hugetlb_valid_size(ps)) { - add_huge_page_size(ps); - return 1; - } - - hugetlb_bad_size(); - pr_err("hugepagesz: Unsupported page size %lu K\n", ps >> 10); - return 0; -} -__setup("hugepagesz=", setup_hugepagesz); --- a/arch/powerpc/mm/hugetlbpage.c~hugetlbfs-move-hugepagesz=-parsing-to-arch-independent-code +++ a/arch/powerpc/mm/hugetlbpage.c @@ -589,21 +589,6 @@ static int __init add_huge_page_size(uns return 0; } -static int __init hugepage_setup_sz(char *str) -{ - unsigned long long size; - - size = memparse(str, &str); - - if (add_huge_page_size(size) != 0) { - hugetlb_bad_size(); - pr_err("Invalid huge page size specified(%llu)\n", size); - } - - return 1; -} -__setup("hugepagesz=", hugepage_setup_sz); - static int __init hugetlbpage_init(void) { bool configured = false; --- a/arch/riscv/mm/hugetlbpage.c~hugetlbfs-move-hugepagesz=-parsing-to-arch-independent-code +++ a/arch/riscv/mm/hugetlbpage.c @@ -22,22 +22,6 @@ bool __init arch_hugetlb_valid_size(unsi return false; } -static __init int setup_hugepagesz(char *opt) -{ - unsigned long ps = memparse(opt, &opt); - - if (arch_hugetlb_valid_size(ps)) { - hugetlb_add_hstate(ilog2(ps) - PAGE_SHIFT); - return 1; - } - - hugetlb_bad_size(); - pr_err("hugepagesz: Unsupported page size %lu M\n", ps >> 20); - return 0; - -} -__setup("hugepagesz=", setup_hugepagesz); - #ifdef CONFIG_CONTIG_ALLOC static __init int gigantic_pages_init(void) { --- a/arch/s390/mm/hugetlbpage.c~hugetlbfs-move-hugepagesz=-parsing-to-arch-independent-code +++ a/arch/s390/mm/hugetlbpage.c @@ -264,24 +264,6 @@ bool __init arch_hugetlb_valid_size(unsi return false; } -static __init int setup_hugepagesz(char *opt) -{ - unsigned long size; - char *string = opt; - - size = memparse(opt, &opt); - if (arch_hugetlb_valid_size(size)) { - hugetlb_add_hstate(ilog2(size) - 
PAGE_SHIFT); - } else { - hugetlb_bad_size(); - pr_err("hugepagesz= specifies an unsupported page size %s\n", - string); - return 0; - } - return 1; -} -__setup("hugepagesz=", setup_hugepagesz); - static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *file, unsigned long addr, unsigned long len, unsigned long pgoff, unsigned long flags) --- a/arch/sparc/mm/init_64.c~hugetlbfs-move-hugepagesz=-parsing-to-arch-independent-code +++ a/arch/sparc/mm/init_64.c @@ -397,28 +397,6 @@ bool __init arch_hugetlb_valid_size(unsi return true; } - -static int __init setup_hugepagesz(char *string) -{ - unsigned long long hugepage_size; - int rc = 0; - - hugepage_size = memparse(string, &string); - - if (!arch_hugetlb_valid_size((unsigned long)hugepage_size)) { - hugetlb_bad_size(); - pr_err("hugepagesz=%llu not supported by MMU.\n", - hugepage_size); - goto out; - } - - add_huge_page_size(hugepage_size); - rc = 1; - -out: - return rc; -} -__setup("hugepagesz=", setup_hugepagesz); #endif /* CONFIG_HUGETLB_PAGE */ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) --- a/arch/x86/mm/hugetlbpage.c~hugetlbfs-move-hugepagesz=-parsing-to-arch-independent-code +++ a/arch/x86/mm/hugetlbpage.c @@ -191,22 +191,6 @@ bool __init arch_hugetlb_valid_size(unsi return false; } -static __init int setup_hugepagesz(char *opt) -{ - unsigned long ps = memparse(opt, &opt); - - if (arch_hugetlb_valid_size(ps)) { - hugetlb_add_hstate(ilog2(ps) - PAGE_SHIFT); - } else { - hugetlb_bad_size(); - printk(KERN_ERR "hugepagesz: Unsupported page size %lu M\n", - ps >> 20); - return 0; - } - return 1; -} -__setup("hugepagesz=", setup_hugepagesz); - #ifdef CONFIG_CONTIG_ALLOC static __init int gigantic_pages_init(void) { --- a/include/linux/hugetlb.h~hugetlbfs-move-hugepagesz=-parsing-to-arch-independent-code +++ a/include/linux/hugetlb.h @@ -519,7 +519,6 @@ int huge_add_to_page_cache(struct page * int __init __alloc_bootmem_huge_page(struct hstate *h); int __init 
alloc_bootmem_huge_page(struct hstate *h); -void __init hugetlb_bad_size(void); void __init hugetlb_add_hstate(unsigned order); bool __init arch_hugetlb_valid_size(unsigned long size); struct hstate *size_to_hstate(unsigned long size); --- a/mm/hugetlb.c~hugetlbfs-move-hugepagesz=-parsing-to-arch-independent-code +++ a/mm/hugetlb.c @@ -3262,12 +3262,6 @@ bool __init __attribute((weak)) arch_hug return size == HPAGE_SIZE; } -/* Should be called on processing a hugepagesz=... option */ -void __init hugetlb_bad_size(void) -{ - parsed_valid_hugepagesz = false; -} - void __init hugetlb_add_hstate(unsigned int order) { struct hstate *h; @@ -3337,6 +3331,23 @@ static int __init hugetlb_nrpages_setup( } __setup("hugepages=", hugetlb_nrpages_setup); +static int __init hugepagesz_setup(char *s) +{ + unsigned long size; + + size = (unsigned long)memparse(s, NULL); + + if (!arch_hugetlb_valid_size(size)) { + parsed_valid_hugepagesz = false; + pr_err("HugeTLB: unsupported hugepagesz %s\n", s); + return 0; + } + + hugetlb_add_hstate(ilog2(size) - PAGE_SHIFT); + return 1; +} +__setup("hugepagesz=", hugepagesz_setup); + static int __init default_hugepagesz_setup(char *s) { unsigned long size; _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 071/131] hugetlbfs: remove hugetlb_add_hstate() warning for existing hstate 2020-06-03 22:55 incoming Andrew Morton ` (69 preceding siblings ...) 2020-06-03 23:00 ` [patch 070/131] hugetlbfs: move hugepagesz= parsing to arch independent code Andrew Morton @ 2020-06-03 23:00 ` Andrew Morton 2020-06-03 23:00 ` [patch 072/131] hugetlbfs: clean up command line processing Andrew Morton ` (60 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:00 UTC (permalink / raw) To: akpm, almasrymina, anders.roxell, aneesh.kumar, aou, benh, borntraeger, cai, catalin.marinas, christophe.leroy, corbet, dave.hansen, davem, gerald.schaefer, gor, heiko.carstens, linux-mm, longpeng2, mike.kravetz, mingo, mm-commits, nitesh, palmer, paul.walmsley, paulus, peterx, rdunlap, sfr, tglx, torvalds, will From: Mike Kravetz <mike.kravetz@oracle.com> Subject: hugetlbfs: remove hugetlb_add_hstate() warning for existing hstate hugetlb_add_hstate() prints a warning if the hstate already exists. This was originally done as part of kernel command line parsing. If 'hugepagesz=' was specified more than once, the warning pr_warn("hugepagesz= specified twice, ignoring\n"); would be printed. Some architectures want to enable all huge page sizes. They would call hugetlb_add_hstate for all supported sizes. However, this was done after command line processing and as a result hstates could have already been created for some sizes. To make sure no warnings were printed, there would often be code like: if (!size_to_hstate(size)) hugetlb_add_hstate(ilog2(size) - PAGE_SHIFT) The only time we want to print the warning is as the result of command line processing. So, remove the warning from hugetlb_add_hstate and add it to the single arch independent routine processing "hugepagesz=". After this, calls to size_to_hstate() in arch specific code can be removed and hugetlb_add_hstate can be called without worrying about warning messages. 
[mike.kravetz@oracle.com: fix hugetlb initialization] Link: http://lkml.kernel.org/r/4c36c6ce-3774-78fa-abc4-b7346bf24348@oracle.com Link: http://lkml.kernel.org/r/20200428205614.246260-5-mike.kravetz@oracle.com Link: http://lkml.kernel.org/r/20200417185049.275845-4-mike.kravetz@oracle.com Link: http://lkml.kernel.org/r/20200428205614.246260-4-mike.kravetz@oracle.com Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com> Acked-by: Mina Almasry <almasrymina@google.com> Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390] Acked-by: Will Deacon <will@kernel.org> Tested-by: Anders Roxell <anders.roxell@linaro.org> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Christophe Leroy <christophe.leroy@c-s.fr> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: David S. Miller <davem@davemloft.net> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Longpeng <longpeng2@huawei.com> Cc: Nitesh Narayan Lal <nitesh@redhat.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Peter Xu <peterx@redhat.com> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> Cc: Qian Cai <cai@lca.pw> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/arm64/mm/hugetlbpage.c | 16 ++++------------ arch/powerpc/mm/hugetlbpage.c | 3 +-- arch/riscv/mm/hugetlbpage.c | 2 +- arch/sparc/mm/init_64.c | 19 ++++--------------- arch/x86/mm/hugetlbpage.c | 2 +- mm/hugetlb.c | 9 ++++++--- 6 files changed, 17 insertions(+), 34 deletions(-) --- a/arch/arm64/mm/hugetlbpage.c~hugetlbfs-remove-hugetlb_add_hstate-warning-for-existing-hstate +++ 
a/arch/arm64/mm/hugetlbpage.c @@ -443,22 +443,14 @@ void huge_ptep_clear_flush(struct vm_are clear_flush(vma->vm_mm, addr, ptep, pgsize, ncontig); } -static void __init add_huge_page_size(unsigned long size) -{ - if (size_to_hstate(size)) - return; - - hugetlb_add_hstate(ilog2(size) - PAGE_SHIFT); -} - static int __init hugetlbpage_init(void) { #ifdef CONFIG_ARM64_4K_PAGES - add_huge_page_size(PUD_SIZE); + hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT); #endif - add_huge_page_size(CONT_PMD_SIZE); - add_huge_page_size(PMD_SIZE); - add_huge_page_size(CONT_PTE_SIZE); + hugetlb_add_hstate((CONT_PMD_SHIFT + PMD_SHIFT) - PAGE_SHIFT); + hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT); + hugetlb_add_hstate((CONT_PTE_SHIFT + PAGE_SHIFT) - PAGE_SHIFT); return 0; } --- a/arch/powerpc/mm/hugetlbpage.c~hugetlbfs-remove-hugetlb_add_hstate-warning-for-existing-hstate +++ a/arch/powerpc/mm/hugetlbpage.c @@ -584,8 +584,7 @@ static int __init add_huge_page_size(uns if (!arch_hugetlb_valid_size((unsigned long)size)) return -EINVAL; - if (!size_to_hstate(size)) - hugetlb_add_hstate(shift - PAGE_SHIFT); + hugetlb_add_hstate(shift - PAGE_SHIFT); return 0; } --- a/arch/riscv/mm/hugetlbpage.c~hugetlbfs-remove-hugetlb_add_hstate-warning-for-existing-hstate +++ a/arch/riscv/mm/hugetlbpage.c @@ -26,7 +26,7 @@ bool __init arch_hugetlb_valid_size(unsi static __init int gigantic_pages_init(void) { /* With CONTIG_ALLOC, we can allocate gigantic pages at runtime */ - if (IS_ENABLED(CONFIG_64BIT) && !size_to_hstate(1UL << PUD_SHIFT)) + if (IS_ENABLED(CONFIG_64BIT)) hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT); return 0; } --- a/arch/sparc/mm/init_64.c~hugetlbfs-remove-hugetlb_add_hstate-warning-for-existing-hstate +++ a/arch/sparc/mm/init_64.c @@ -325,23 +325,12 @@ static void __update_mmu_tsb_insert(stru } #ifdef CONFIG_HUGETLB_PAGE -static void __init add_huge_page_size(unsigned long size) -{ - unsigned int order; - - if (size_to_hstate(size)) - return; - - order = ilog2(size) - PAGE_SHIFT; - 
hugetlb_add_hstate(order); -} - static int __init hugetlbpage_init(void) { - add_huge_page_size(1UL << HPAGE_64K_SHIFT); - add_huge_page_size(1UL << HPAGE_SHIFT); - add_huge_page_size(1UL << HPAGE_256MB_SHIFT); - add_huge_page_size(1UL << HPAGE_2GB_SHIFT); + hugetlb_add_hstate(HPAGE_64K_SHIFT - PAGE_SHIFT); + hugetlb_add_hstate(HPAGE_SHIFT - PAGE_SHIFT); + hugetlb_add_hstate(HPAGE_256MB_SHIFT - PAGE_SHIFT); + hugetlb_add_hstate(HPAGE_2GB_SHIFT - PAGE_SHIFT); return 0; } --- a/arch/x86/mm/hugetlbpage.c~hugetlbfs-remove-hugetlb_add_hstate-warning-for-existing-hstate +++ a/arch/x86/mm/hugetlbpage.c @@ -195,7 +195,7 @@ bool __init arch_hugetlb_valid_size(unsi static __init int gigantic_pages_init(void) { /* With compaction or CMA we can allocate gigantic pages at runtime */ - if (boot_cpu_has(X86_FEATURE_GBPAGES) && !size_to_hstate(1UL << PUD_SHIFT)) + if (boot_cpu_has(X86_FEATURE_GBPAGES)) hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT); return 0; } --- a/mm/hugetlb.c~hugetlbfs-remove-hugetlb_add_hstate-warning-for-existing-hstate +++ a/mm/hugetlb.c @@ -3222,8 +3222,7 @@ static int __init hugetlb_init(void) } default_hstate_size = HPAGE_SIZE; - if (!size_to_hstate(default_hstate_size)) - hugetlb_add_hstate(HUGETLB_PAGE_ORDER); + hugetlb_add_hstate(HUGETLB_PAGE_ORDER); } default_hstate_idx = hstate_index(size_to_hstate(default_hstate_size)); if (default_hstate_max_huge_pages) { @@ -3268,7 +3267,6 @@ void __init hugetlb_add_hstate(unsigned unsigned long i; if (size_to_hstate(PAGE_SIZE << order)) { - pr_warn("hugepagesz= specified twice, ignoring\n"); return; } BUG_ON(hugetlb_max_hstate >= HUGE_MAX_HSTATE); @@ -3343,6 +3341,11 @@ static int __init hugepagesz_setup(char return 0; } + if (size_to_hstate(size)) { + pr_warn("HugeTLB: hugepagesz %s specified twice, ignoring\n", s); + return 0; + } + hugetlb_add_hstate(ilog2(size) - PAGE_SHIFT); return 1; } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
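[Editorial note, not part of the patch:] The behavior change here is narrow: after the patch, re-registering an existing size via hugetlb_add_hstate() is silent (so arch init code may register unconditionally), and only the command-line path warns about a duplicate. A hypothetical Python model (not kernel code):

```python
PAGE_SHIFT = 12
hstates = set()               # registered orders
warnings = []

def hugetlb_add_hstate(order):
    # After the patch: silently return if the hstate already exists,
    # so boot-time arch code can register sizes without a prior
    # size_to_hstate() check.
    hstates.add(order)

def hugepagesz_setup(size):
    """Only the command-line path reports a duplicate size now."""
    order = size.bit_length() - 1 - PAGE_SHIFT
    if order in hstates:
        warnings.append(f"hugepagesz {size} specified twice, ignoring")
        return 0
    hugetlb_add_hstate(order)
    return 1
```

In the model, as in the patch, a duplicate hugepagesz= on the command line produces exactly one warning, while repeated registrations from (modeled) arch init produce none.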
* [patch 072/131] hugetlbfs: clean up command line processing 2020-06-03 22:55 incoming Andrew Morton ` (70 preceding siblings ...) 2020-06-03 23:00 ` [patch 071/131] hugetlbfs: remove hugetlb_add_hstate() warning for existing hstate Andrew Morton @ 2020-06-03 23:00 ` Andrew Morton 2020-06-03 23:00 ` [patch 073/131] hugetlbfs: fix changes to " Andrew Morton ` (59 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:00 UTC (permalink / raw) To: akpm, almasrymina, anders.roxell, aneesh.kumar, aou, benh, borntraeger, cai, catalin.marinas, christophe.leroy, corbet, dave.hansen, davem, gerald.schaefer, gor, heiko.carstens, linux-mm, longpeng2, mike.kravetz, mingo, mm-commits, nitesh, palmer, paul.walmsley, paulus, peterx, rdunlap, sandipan, sfr, tglx, torvalds, will From: Mike Kravetz <mike.kravetz@oracle.com> Subject: hugetlbfs: clean up command line processing With all hugetlb page processing done in a single file clean up code. - Make code match desired semantics - Update documentation with semantics - Make all warnings and errors messages start with 'HugeTLB:'. - Consistently name command line parsing routines. - Warn if !hugepages_supported() and command line parameters have been specified. - Add comments to code - Describe some of the subtle interactions - Describe semantics of command line arguments This patch also fixes issues with implicitly setting the number of gigantic huge pages to preallocate. Previously on X86 command line, hugepages=2 default_hugepagesz=1G would result in zero 1G pages being preallocated and, # grep HugePages_Total /proc/meminfo HugePages_Total: 0 # sysctl -a | grep nr_hugepages vm.nr_hugepages = 2 vm.nr_hugepages_mempolicy = 2 # cat /proc/sys/vm/nr_hugepages 2 After this patch 2 gigantic pages will be preallocated and all the proc, sysfs, sysctl and meminfo files will accurately reflect this. 
To address the issue with gigantic pages, a small change in behavior was made to command line processing. Previously the command line, hugepages=128 default_hugepagesz=2M hugepagesz=2M hugepages=256 would result in the allocation of 256 2M huge pages. The value 128 would be ignored without any warning. After this patch, 128 2M pages will be allocated and a warning message will be displayed indicating the value of 256 is ignored. This change in behavior is required because allocation of implicitly specified gigantic pages must be done when the default_hugepagesz= is encountered for gigantic pages. Previously the code waited until later in the boot process (hugetlb_init), to allocate pages of default size. However the bootmem allocator required for gigantic allocations is not available at this time. Link: http://lkml.kernel.org/r/20200417185049.275845-5-mike.kravetz@oracle.com Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com> Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390] Acked-by: Will Deacon <will@kernel.org> Tested-by: Sandipan Das <sandipan@linux.ibm.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Christophe Leroy <christophe.leroy@c-s.fr> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: David S. 
Miller <davem@davemloft.net> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Longpeng <longpeng2@huawei.com> Cc: Mina Almasry <almasrymina@google.com> Cc: Nitesh Narayan Lal <nitesh@redhat.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Peter Xu <peterx@redhat.com> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Anders Roxell <anders.roxell@linaro.org> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> Cc: Qian Cai <cai@lca.pw> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- Documentation/admin-guide/kernel-parameters.txt | 40 ++- Documentation/admin-guide/mm/hugetlbpage.rst | 35 +++ mm/hugetlb.c | 159 +++++++++++--- 3 files changed, 190 insertions(+), 44 deletions(-) --- a/Documentation/admin-guide/kernel-parameters.txt~hugetlbfs-clean-up-command-line-processing +++ a/Documentation/admin-guide/kernel-parameters.txt @@ -834,12 +834,15 @@ See also Documentation/networking/decnet.txt. default_hugepagesz= - [same as hugepagesz=] The size of the default - HugeTLB page size. This is the size represented by - the legacy /proc/ hugepages APIs, used for SHM, and - default size when mounting hugetlbfs filesystems. - Defaults to the default architecture's huge page size - if not specified. + [HW] The size of the default HugeTLB page. This is + the size represented by the legacy /proc/ hugepages + APIs. In addition, this is the default hugetlb size + used for shmget(), mmap() and mounting hugetlbfs + filesystems. If not specified, defaults to the + architecture's default huge page size. Huge page + sizes are architecture dependent. See also + Documentation/admin-guide/mm/hugetlbpage.rst. 
+ Format: size[KMG] deferred_probe_timeout= [KNL] Debugging option to set a timeout in seconds for @@ -1479,13 +1482,24 @@ hugepages using the cma allocator. If enabled, the boot-time allocation of gigantic hugepages is skipped. - hugepages= [HW,X86-32,IA-64] HugeTLB pages to allocate at boot. - hugepagesz= [HW,IA-64,PPC,X86-64] The size of the HugeTLB pages. - On x86-64 and powerpc, this option can be specified - multiple times interleaved with hugepages= to reserve - huge pages of different sizes. Valid pages sizes on - x86-64 are 2M (when the CPU supports "pse") and 1G - (when the CPU supports the "pdpe1gb" cpuinfo flag). + hugepages= [HW] Number of HugeTLB pages to allocate at boot. + If this follows hugepagesz (below), it specifies + the number of pages of hugepagesz to be allocated. + If this is the first HugeTLB parameter on the command + line, it specifies the number of pages to allocate for + the default huge page size. See also + Documentation/admin-guide/mm/hugetlbpage.rst. + Format: <integer> + + hugepagesz= + [HW] The size of the HugeTLB pages. This is used in + conjunction with hugepages (above) to allocate huge + pages of a specific size at boot. The pair + hugepagesz=X hugepages=Y can be specified once for + each supported huge page size. Huge page sizes are + architecture dependent. See also + Documentation/admin-guide/mm/hugetlbpage.rst. + Format: size[KMG] hung_task_panic= [KNL] Should the hung task detector generate panics. --- a/Documentation/admin-guide/mm/hugetlbpage.rst~hugetlbfs-clean-up-command-line-processing +++ a/Documentation/admin-guide/mm/hugetlbpage.rst @@ -100,6 +100,41 @@ with a huge page size selection paramete be specified in bytes with optional scale suffix [kKmMgG]. The default huge page size may be selected with the "default_hugepagesz=<size>" boot parameter. +Hugetlb boot command line parameter semantics +hugepagesz - Specify a huge page size. 
Used in conjunction with hugepages + parameter to preallocate a number of huge pages of the specified + size. Hence, hugepagesz and hugepages are typically specified in + pairs such as: + hugepagesz=2M hugepages=512 + hugepagesz can only be specified once on the command line for a + specific huge page size. Valid huge page sizes are architecture + dependent. +hugepages - Specify the number of huge pages to preallocate. This typically + follows a valid hugepagesz or default_hugepagesz parameter. However, + if hugepages is the first or only hugetlb command line parameter it + implicitly specifies the number of huge pages of default size to + allocate. If the number of huge pages of default size is implicitly + specified, it can not be overwritten by a hugepagesz,hugepages + parameter pair for the default size. + For example, on an architecture with 2M default huge page size: + hugepages=256 hugepagesz=2M hugepages=512 + will result in 256 2M huge pages being allocated and a warning message + indicating that the hugepages=512 parameter is ignored. If a hugepages + parameter is preceded by an invalid hugepagesz parameter, it will + be ignored. +default_hugepagesz - Specify the default huge page size. This parameter can + only be specified once on the command line. default_hugepagesz can + optionally be followed by the hugepages parameter to preallocate a + specific number of huge pages of default size. The number of default + sized huge pages to preallocate can also be implicitly specified as + mentioned in the hugepages section above. Therefore, on an + architecture with 2M default huge page size: + hugepages=256 + default_hugepagesz=2M hugepages=256 + hugepages=256 default_hugepagesz=2M + will all result in 256 2M huge pages being allocated. Valid default + huge page size is architecture dependent. + When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages`` indicates the current number of pre-allocated huge pages of the default size. 
Thus, one can use the following command to dynamically allocate/deallocate --- a/mm/hugetlb.c~hugetlbfs-clean-up-command-line-processing +++ a/mm/hugetlb.c @@ -59,8 +59,8 @@ __initdata LIST_HEAD(huge_boot_pages); /* for command line parsing */ static struct hstate * __initdata parsed_hstate; static unsigned long __initdata default_hstate_max_huge_pages; -static unsigned long __initdata default_hstate_size; static bool __initdata parsed_valid_hugepagesz = true; +static bool __initdata parsed_default_hugepagesz; /* * Protects updates to hugepage_freelists, hugepage_activelist, nr_huge_pages, @@ -3060,7 +3060,7 @@ static void __init hugetlb_sysfs_init(vo err = hugetlb_sysfs_add_hstate(h, hugepages_kobj, hstate_kobjs, &hstate_attr_group); if (err) - pr_err("Hugetlb: Unable to add hstate %s", h->name); + pr_err("HugeTLB: Unable to add hstate %s", h->name); } } @@ -3164,7 +3164,7 @@ static void hugetlb_register_node(struct nhs->hstate_kobjs, &per_node_hstate_attr_group); if (err) { - pr_err("Hugetlb: Unable to add hstate %s for node %d\n", + pr_err("HugeTLB: Unable to add hstate %s for node %d\n", h->name, node->dev.id); hugetlb_unregister_node(node); break; @@ -3215,19 +3215,35 @@ static int __init hugetlb_init(void) if (!hugepages_supported()) return 0; - if (!size_to_hstate(default_hstate_size)) { - if (default_hstate_size != 0) { - pr_err("HugeTLB: unsupported default_hugepagesz %lu. Reverting to %lu\n", - default_hstate_size, HPAGE_SIZE); + /* + * Make sure HPAGE_SIZE (HUGETLB_PAGE_ORDER) hstate exists. Some + * architectures depend on setup being done here. + */ + hugetlb_add_hstate(HUGETLB_PAGE_ORDER); + if (!parsed_default_hugepagesz) { + /* + * If we did not parse a default huge page size, set + * default_hstate_idx to HPAGE_SIZE hstate. And, if the + * number of huge pages for this default size was implicitly + * specified, set that here as well. + * Note that the implicit setting will overwrite an explicit + * setting. A warning will be printed in this case. 
+ */ + default_hstate_idx = hstate_index(size_to_hstate(HPAGE_SIZE)); + if (default_hstate_max_huge_pages) { + if (default_hstate.max_huge_pages) { + char buf[32]; + + string_get_size(huge_page_size(&default_hstate), + 1, STRING_UNITS_2, buf, 32); + pr_warn("HugeTLB: Ignoring hugepages=%lu associated with %s page size\n", + default_hstate.max_huge_pages, buf); + pr_warn("HugeTLB: Using hugepages=%lu for number of default huge pages\n", + default_hstate_max_huge_pages); + } + default_hstate.max_huge_pages = + default_hstate_max_huge_pages; } - - default_hstate_size = HPAGE_SIZE; - hugetlb_add_hstate(HUGETLB_PAGE_ORDER); - } - default_hstate_idx = hstate_index(size_to_hstate(default_hstate_size)); - if (default_hstate_max_huge_pages) { - if (!default_hstate.max_huge_pages) - default_hstate.max_huge_pages = default_hstate_max_huge_pages; } hugetlb_cma_check(); @@ -3287,20 +3303,34 @@ void __init hugetlb_add_hstate(unsigned parsed_hstate = h; } -static int __init hugetlb_nrpages_setup(char *s) +/* + * hugepages command line processing + * hugepages normally follows a valid hugepagsz or default_hugepagsz + * specification. If not, ignore the hugepages value. hugepages can also + * be the first huge page command line option in which case it implicitly + * specifies the number of huge pages for the default size. + */ +static int __init hugepages_setup(char *s) { unsigned long *mhp; static unsigned long *last_mhp; + if (!hugepages_supported()) { + pr_warn("HugeTLB: huge pages not supported, ignoring hugepages = %s\n", s); + return 0; + } + if (!parsed_valid_hugepagesz) { - pr_warn("hugepages = %s preceded by " - "an unsupported hugepagesz, ignoring\n", s); + pr_warn("HugeTLB: hugepages=%s does not follow a valid hugepagesz, ignoring\n", s); parsed_valid_hugepagesz = true; - return 1; + return 0; } + /* - * !hugetlb_max_hstate means we haven't parsed a hugepagesz= parameter yet, - * so this hugepages= parameter goes to the "default hstate". 
+ * !hugetlb_max_hstate means we haven't parsed a hugepagesz= parameter + * yet, so this hugepages= parameter goes to the "default hstate". + * Otherwise, it goes with the previously parsed hugepagesz or + * default_hugepagesz. */ else if (!hugetlb_max_hstate) mhp = &default_hstate_max_huge_pages; @@ -3308,8 +3338,8 @@ static int __init hugetlb_nrpages_setup( mhp = &parsed_hstate->max_huge_pages; if (mhp == last_mhp) { - pr_warn("hugepages= specified twice without interleaving hugepagesz=, ignoring\n"); - return 1; + pr_warn("HugeTLB: hugepages= specified twice without interleaving hugepagesz=, ignoring hugepages=%s\n", s); + return 0; } if (sscanf(s, "%lu", mhp) <= 0) @@ -3327,42 +3357,109 @@ static int __init hugetlb_nrpages_setup( return 1; } -__setup("hugepages=", hugetlb_nrpages_setup); +__setup("hugepages=", hugepages_setup); +/* + * hugepagesz command line processing + * A specific huge page size can only be specified once with hugepagesz. + * hugepagesz is followed by hugepages on the command line. The global + * variable 'parsed_valid_hugepagesz' is used to determine if prior + * hugepagesz argument was valid. + */ static int __init hugepagesz_setup(char *s) { unsigned long size; + struct hstate *h; + + parsed_valid_hugepagesz = false; + if (!hugepages_supported()) { + pr_warn("HugeTLB: huge pages not supported, ignoring hugepagesz = %s\n", s); + return 0; + } size = (unsigned long)memparse(s, NULL); if (!arch_hugetlb_valid_size(size)) { - parsed_valid_hugepagesz = false; - pr_err("HugeTLB: unsupported hugepagesz %s\n", s); + pr_err("HugeTLB: unsupported hugepagesz=%s\n", s); return 0; } - if (size_to_hstate(size)) { - pr_warn("HugeTLB: hugepagesz %s specified twice, ignoring\n", s); - return 0; + h = size_to_hstate(size); + if (h) { + /* + * hstate for this size already exists. This is normally + * an error, but is allowed if the existing hstate is the + * default hstate. 
More specifically, it is only allowed if + * the number of huge pages for the default hstate was not + * previously specified. + */ + if (!parsed_default_hugepagesz || h != &default_hstate || + default_hstate.max_huge_pages) { + pr_warn("HugeTLB: hugepagesz=%s specified twice, ignoring\n", s); + return 0; + } + + /* + * No need to call hugetlb_add_hstate() as hstate already + * exists. But, do set parsed_hstate so that a following + * hugepages= parameter will be applied to this hstate. + */ + parsed_hstate = h; + parsed_valid_hugepagesz = true; + return 1; } hugetlb_add_hstate(ilog2(size) - PAGE_SHIFT); + parsed_valid_hugepagesz = true; return 1; } __setup("hugepagesz=", hugepagesz_setup); +/* + * default_hugepagesz command line input + * Only one instance of default_hugepagesz allowed on command line. + */ static int __init default_hugepagesz_setup(char *s) { unsigned long size; + parsed_valid_hugepagesz = false; + if (!hugepages_supported()) { + pr_warn("HugeTLB: huge pages not supported, ignoring default_hugepagesz = %s\n", s); + return 0; + } + + if (parsed_default_hugepagesz) { + pr_err("HugeTLB: default_hugepagesz previously specified, ignoring %s\n", s); + return 0; + } + size = (unsigned long)memparse(s, NULL); if (!arch_hugetlb_valid_size(size)) { - pr_err("HugeTLB: unsupported default_hugepagesz %s\n", s); + pr_err("HugeTLB: unsupported default_hugepagesz=%s\n", s); return 0; } - default_hstate_size = size; + hugetlb_add_hstate(ilog2(size) - PAGE_SHIFT); + parsed_valid_hugepagesz = true; + parsed_default_hugepagesz = true; + default_hstate_idx = hstate_index(size_to_hstate(size)); + + /* + * The number of default huge pages (for this size) could have been + * specified as the first hugetlb parameter: hugepages=X. If so, + * then default_hstate_max_huge_pages is set. If the default huge + * page size is gigantic (>= MAX_ORDER), then the pages must be + * allocated here from bootmem allocator. 
+ */ + if (default_hstate_max_huge_pages) { + default_hstate.max_huge_pages = default_hstate_max_huge_pages; + if (hstate_is_gigantic(&default_hstate)) + hugetlb_hstate_alloc_pages(&default_hstate); + default_hstate_max_huge_pages = 0; + } + return 1; } __setup("default_hugepagesz=", default_hugepagesz_setup); _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 073/131] hugetlbfs: fix changes to command line processing 2020-06-03 22:55 incoming Andrew Morton ` (71 preceding siblings ...) 2020-06-03 23:00 ` [patch 072/131] hugetlbfs: clean up command line processing Andrew Morton @ 2020-06-03 23:00 ` Andrew Morton 2020-06-03 23:00 ` [patch 074/131] mm/hugetlb: avoid unnecessary check on pud and pmd entry in huge_pte_offset Andrew Morton ` (58 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:00 UTC (permalink / raw) To: akpm, linux-mm, mike.kravetz, mm-commits, sandipan.osd, sfr, torvalds From: Mike Kravetz <mike.kravetz@oracle.com> Subject: hugetlbfs: fix changes to command line processing Previously, a check for hugepages_supported was added before processing hugetlb command line parameters. On some architectures such as powerpc, hugepages_supported() is not set to true until after command line processing. Therefore, no hugetlb command line parameters would be accepted. Remove the additional checks for hugepages_supported. In hugetlb_init, print a warning if !hugepages_supported and command line parameters were specified. Link: http://lkml.kernel.org/r/b1f04f9f-fa46-c2a0-7693-4a0679d2a1ee@oracle.com Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com> Reported-by: Sandipan Das <sandipan.osd@gmail.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/hugetlb.c | 20 ++++---------------- 1 file changed, 4 insertions(+), 16 deletions(-) --- a/mm/hugetlb.c~hugetlbfs-move-hugepagesz=-parsing-to-arch-independent-code-fix +++ a/mm/hugetlb.c @@ -3212,8 +3212,11 @@ static int __init hugetlb_init(void) { int i; - if (!hugepages_supported()) + if (!hugepages_supported()) { + if (hugetlb_max_hstate || default_hstate_max_huge_pages) + pr_warn("HugeTLB: huge pages not supported, ignoring associated command-line parameters\n"); return 0; + } /* * Make sure HPAGE_SIZE (HUGETLB_PAGE_ORDER) hstate exists. 
Some @@ -3315,11 +3318,6 @@ static int __init hugepages_setup(char * unsigned long *mhp; static unsigned long *last_mhp; - if (!hugepages_supported()) { - pr_warn("HugeTLB: huge pages not supported, ignoring hugepages = %s\n", s); - return 0; - } - if (!parsed_valid_hugepagesz) { pr_warn("HugeTLB: hugepages=%s does not follow a valid hugepagesz, ignoring\n", s); parsed_valid_hugepagesz = true; @@ -3372,11 +3370,6 @@ static int __init hugepagesz_setup(char struct hstate *h; parsed_valid_hugepagesz = false; - if (!hugepages_supported()) { - pr_warn("HugeTLB: huge pages not supported, ignoring hugepagesz = %s\n", s); - return 0; - } - size = (unsigned long)memparse(s, NULL); if (!arch_hugetlb_valid_size(size)) { @@ -3424,11 +3417,6 @@ static int __init default_hugepagesz_set unsigned long size; parsed_valid_hugepagesz = false; - if (!hugepages_supported()) { - pr_warn("HugeTLB: huge pages not supported, ignoring default_hugepagesz = %s\n", s); - return 0; - } - if (parsed_default_hugepagesz) { pr_err("HugeTLB: default_hugepagesz previously specified, ignoring %s\n", s); return 0; _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 074/131] mm/hugetlb: avoid unnecessary check on pud and pmd entry in huge_pte_offset 2020-06-03 22:55 incoming Andrew Morton ` (72 preceding siblings ...) 2020-06-03 23:00 ` [patch 073/131] hugetlbfs: fix changes to " Andrew Morton @ 2020-06-03 23:00 ` Andrew Morton 2020-06-03 23:00 ` [patch 075/131] arm64/mm: drop __HAVE_ARCH_HUGE_PTEP_GET Andrew Morton ` (57 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:00 UTC (permalink / raw) To: akpm, jgg, linux-mm, lixinhai.lxh, longpeng2, mike.kravetz, mm-commits, punit.agrawal, torvalds From: Li Xinhai <lixinhai.lxh@gmail.com> Subject: mm/hugetlb: avoid unnecessary check on pud and pmd entry in huge_pte_offset When huge_pte_offset() is called, the parameter sz can only be PUD_SIZE or PMD_SIZE. If sz is PUD_SIZE and the code can reach the pud, then *pud must be none, a normal hugetlb entry, or a non-present (migration or hwpoisoned) hugetlb entry, and we can directly return pud. When sz is PMD_SIZE, the pud must be none or present, and if the code can reach the pmd, we can directly return pmd. So after this patch, the code is simplified by first checking the parameter sz, avoiding the unnecessary checks in the current code. The semantics of the existing code are maintained. More details about the relevant commits: commit 9b19df292c66 ("mm/hugetlb.c: make huge_pte_offset() consistent and document behaviour") changed the code path for pud and pmd handling; see the comments about why this patch intends to change it. ... pud = pud_offset(p4d, addr); if (sz != PUD_SIZE && pud_none(*pud)) // [1] return NULL; /* hugepage or swap? */ if (pud_huge(*pud) || !pud_present(*pud)) // [2] return (pte_t *)pud; pmd = pmd_offset(pud, addr); if (sz != PMD_SIZE && pmd_none(*pmd)) // [3] return NULL; /* hugepage or swap? */ if (pmd_huge(*pmd) || !pmd_present(*pmd)) // [4] return (pte_t *)pmd; return NULL; // [5] ...
[1]: this is necessary, return NULL for sz == PMD_SIZE; [2]: if sz == PUD_SIZE, all valid values of the pud entry will cause a return; [3]: dead code, sz != PMD_SIZE is never true; [4]: all valid values of the pmd entry will cause a return; [5]: dead code, because of the check in [4]. Now, this patch combines [1] and [2] for pud, and combines [3], [4] and [5] for pmd, avoiding the unnecessary checks. I don't try to catch any invalid values in the page table entry, as that will be checked by the caller; this avoids an extra branch in this function. Also, there is no assert that sz must equal PUD_SIZE or PMD_SIZE, since this function is only called for hugetlb mappings. For commit 3c1d7e6ccb64 ("mm/hugetlb: fix a addressing exception caused by huge_pte_offset"), since we don't read the entry more than once now, the variables pud_entry and pmd_entry are not needed. Link: http://lkml.kernel.org/r/1587794313-16849-1-git-send-email-lixinhai.lxh@gmail.com Signed-off-by: Li Xinhai <lixinhai.lxh@gmail.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Jason Gunthorpe <jgg@mellanox.com> Cc: Punit Agrawal <punit.agrawal@arm.com> Cc: Longpeng <longpeng2@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/hugetlb.c | 28 +++++++++++----------------- 1 file changed, 11 insertions(+), 17 deletions(-) --- a/mm/hugetlb.c~mm-hugetlb-avoid-unnecessary-check-on-pud-and-pmd-entry-in-huge_pte_offset +++ a/mm/hugetlb.c @@ -5469,8 +5469,8 @@ pte_t *huge_pte_alloc(struct mm_struct * * huge_pte_offset() - Walk the page table to resolve the hugepage * entry at address @addr * - * Return: Pointer to page table or swap entry (PUD or PMD) for - * address @addr, or NULL if a p*d_none() entry is encountered and the + * Return: Pointer to page table entry (PUD or PMD) for + * address @addr, or NULL if a !p*d_present() entry is encountered and the * size @sz doesn't match the hugepage size at this level of the page * table.
*/ @@ -5479,8 +5479,8 @@ pte_t *huge_pte_offset(struct mm_struct { pgd_t *pgd; p4d_t *p4d; - pud_t *pud, pud_entry; - pmd_t *pmd, pmd_entry; + pud_t *pud; + pmd_t *pmd; pgd = pgd_offset(mm, addr); if (!pgd_present(*pgd)) @@ -5490,22 +5490,16 @@ pte_t *huge_pte_offset(struct mm_struct return NULL; pud = pud_offset(p4d, addr); - pud_entry = READ_ONCE(*pud); - if (sz != PUD_SIZE && pud_none(pud_entry)) - return NULL; - /* hugepage or swap? */ - if (pud_huge(pud_entry) || !pud_present(pud_entry)) + if (sz == PUD_SIZE) + /* must be pud huge, non-present or none */ return (pte_t *)pud; - - pmd = pmd_offset(pud, addr); - pmd_entry = READ_ONCE(*pmd); - if (sz != PMD_SIZE && pmd_none(pmd_entry)) + if (!pud_present(*pud)) return NULL; - /* hugepage or swap? */ - if (pmd_huge(pmd_entry) || !pmd_present(pmd_entry)) - return (pte_t *)pmd; + /* must have a valid entry and size to go further */ - return NULL; + pmd = pmd_offset(pud, addr); + /* must be pmd huge, non-present or none */ + return (pte_t *)pmd; } #endif /* CONFIG_ARCH_WANT_GENERAL_HUGETLB */ _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 075/131] arm64/mm: drop __HAVE_ARCH_HUGE_PTEP_GET 2020-06-03 22:55 incoming Andrew Morton ` (73 preceding siblings ...) 2020-06-03 23:00 ` [patch 074/131] mm/hugetlb: avoid unnecessary check on pud and pmd entry in huge_pte_offset Andrew Morton @ 2020-06-03 23:00 ` Andrew Morton 2020-06-03 23:01 ` [patch 076/131] mm/hugetlb: define a generic fallback for is_hugepage_only_range() Andrew Morton ` (56 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:00 UTC (permalink / raw) To: akpm, anshuman.khandual, benh, borntraeger, bp, catalin.marinas, dalias, davem, deller, fenghua.yu, gor, heiko.carstens, hpa, James.Bottomley, linux-mm, linux, mike.kravetz, mingo, mm-commits, mpe, palmer, paul.walmsley, paulus, tglx, tony.luck, torvalds, tsbogend, will, ysato From: Anshuman Khandual <anshuman.khandual@arm.com> Subject: arm64/mm: drop __HAVE_ARCH_HUGE_PTEP_GET Patch series "mm/hugetlb: Add some new generic fallbacks", v3. This series adds the following new generic fallbacks. Before that it drops __HAVE_ARCH_HUGE_PTEP_GET from the arm64 platform. 1. is_hugepage_only_range() 2. arch_clear_hugepage_flags() After this, arm (32 bit) remains the sole platform defining its own huge_ptep_get() via __HAVE_ARCH_HUGE_PTEP_GET. This patch (of 3): A platform-specific huge_ptep_get() is required only when fetching the huge PTE involves more than just dereferencing the page table pointer. This is not the case on the arm64 platform. Hence huge_ptep_get() can be dropped along with its __HAVE_ARCH_HUGE_PTEP_GET subscription. Before that, it updates the generic huge_ptep_get() with READ_ONCE(), which will prevent known page table issues with THP on arm64.
Link: http://lkml.kernel.org/r/1588907271-11920-1-git-send-email-anshuman.khandual@arm.com Link: http://lkml.kernel.org/r//1506527369-19535-1-git-send-email-will.deacon@arm.com/ Link: http://lkml.kernel.org/r/1588907271-11920-2-git-send-email-anshuman.khandual@arm.com Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Mike Kravetz <mike.kravetz@oracle.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/arm64/include/asm/hugetlb.h | 6 ------ include/asm-generic/hugetlb.h | 2 +- 2 files changed, 1 insertion(+), 7 deletions(-) --- a/arch/arm64/include/asm/hugetlb.h~arm64-mm-drop-__have_arch_huge_ptep_get +++ a/arch/arm64/include/asm/hugetlb.h @@ -17,12 +17,6 @@ extern bool arch_hugetlb_migration_supported(struct hstate *h); #endif -#define __HAVE_ARCH_HUGE_PTEP_GET -static inline pte_t huge_ptep_get(pte_t *ptep) -{ - return READ_ONCE(*ptep); -} - static inline int is_hugepage_only_range(struct mm_struct *mm, unsigned long addr, unsigned long len) { --- 
a/include/asm-generic/hugetlb.h~arm64-mm-drop-__have_arch_huge_ptep_get +++ a/include/asm-generic/hugetlb.h @@ -122,7 +122,7 @@ static inline int huge_ptep_set_access_f #ifndef __HAVE_ARCH_HUGE_PTEP_GET static inline pte_t huge_ptep_get(pte_t *ptep) { - return *ptep; + return READ_ONCE(*ptep); } #endif _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 076/131] mm/hugetlb: define a generic fallback for is_hugepage_only_range() 2020-06-03 22:55 incoming Andrew Morton ` (74 preceding siblings ...) 2020-06-03 23:00 ` [patch 075/131] arm64/mm: drop __HAVE_ARCH_HUGE_PTEP_GET Andrew Morton @ 2020-06-03 23:01 ` Andrew Morton 2020-06-03 23:01 ` [patch 077/131] mm/hugetlb: define a generic fallback for arch_clear_hugepage_flags() Andrew Morton ` (55 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:01 UTC (permalink / raw) To: akpm, anshuman.khandual, benh, borntraeger, bp, catalin.marinas, dalias, davem, deller, fenghua.yu, gor, heiko.carstens, hpa, James.Bottomley, linux-mm, linux, mike.kravetz, mingo, mm-commits, mpe, palmer, paul.walmsley, paulus, tglx, tony.luck, torvalds, tsbogend, will, ysato From: Anshuman Khandual <anshuman.khandual@arm.com> Subject: mm/hugetlb: define a generic fallback for is_hugepage_only_range() There are multiple similar definitions for is_hugepage_only_range() on various platforms. Let's just add a generic fallback definition for platforms that do not override it. This helps reduce code duplication. Link: http://lkml.kernel.org/r/1588907271-11920-3-git-send-email-anshuman.khandual@arm.com Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Mike Kravetz <mike.kravetz@oracle.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: "James E.J.
Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Helge Deller <deller@gmx.de> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: Rich Felker <dalias@libc.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/arm/include/asm/hugetlb.h | 6 ------ arch/arm64/include/asm/hugetlb.h | 6 ------ arch/ia64/include/asm/hugetlb.h | 1 + arch/mips/include/asm/hugetlb.h | 7 ------- arch/parisc/include/asm/hugetlb.h | 6 ------ arch/powerpc/include/asm/hugetlb.h | 1 + arch/riscv/include/asm/hugetlb.h | 6 ------ arch/s390/include/asm/hugetlb.h | 7 ------- arch/sh/include/asm/hugetlb.h | 6 ------ arch/sparc/include/asm/hugetlb.h | 6 ------ arch/x86/include/asm/hugetlb.h | 6 ------ include/linux/hugetlb.h | 9 +++++++++ 12 files changed, 11 insertions(+), 56 deletions(-) --- a/arch/arm64/include/asm/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-is_hugepage_only_range +++ a/arch/arm64/include/asm/hugetlb.h @@ -17,12 +17,6 @@ extern bool arch_hugetlb_migration_supported(struct hstate *h); #endif -static inline int is_hugepage_only_range(struct mm_struct *mm, - unsigned long addr, unsigned long len) -{ - return 0; -} - static inline void arch_clear_hugepage_flags(struct page *page) { clear_bit(PG_dcache_clean, &page->flags); --- a/arch/arm/include/asm/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-is_hugepage_only_range +++ a/arch/arm/include/asm/hugetlb.h @@ -14,12 +14,6 @@ #include <asm/hugetlb-3level.h> #include 
<asm-generic/hugetlb.h> -static inline int is_hugepage_only_range(struct mm_struct *mm, - unsigned long addr, unsigned long len) -{ - return 0; -} - static inline void arch_clear_hugepage_flags(struct page *page) { clear_bit(PG_dcache_clean, &page->flags); --- a/arch/ia64/include/asm/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-is_hugepage_only_range +++ a/arch/ia64/include/asm/hugetlb.h @@ -20,6 +20,7 @@ static inline int is_hugepage_only_range return (REGION_NUMBER(addr) == RGN_HPAGE || REGION_NUMBER((addr)+(len)-1) == RGN_HPAGE); } +#define is_hugepage_only_range is_hugepage_only_range #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH static inline void huge_ptep_clear_flush(struct vm_area_struct *vma, --- a/arch/mips/include/asm/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-is_hugepage_only_range +++ a/arch/mips/include/asm/hugetlb.h @@ -11,13 +11,6 @@ #include <asm/page.h> -static inline int is_hugepage_only_range(struct mm_struct *mm, - unsigned long addr, - unsigned long len) -{ - return 0; -} - #define __HAVE_ARCH_PREPARE_HUGEPAGE_RANGE static inline int prepare_hugepage_range(struct file *file, unsigned long addr, --- a/arch/parisc/include/asm/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-is_hugepage_only_range +++ a/arch/parisc/include/asm/hugetlb.h @@ -12,12 +12,6 @@ void set_huge_pte_at(struct mm_struct *m pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep); -static inline int is_hugepage_only_range(struct mm_struct *mm, - unsigned long addr, - unsigned long len) { - return 0; -} - /* * If the arch doesn't supply something else, assume that hugepage * size aligned regions are ok without further preparation. 
--- a/arch/powerpc/include/asm/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-is_hugepage_only_range +++ a/arch/powerpc/include/asm/hugetlb.h @@ -30,6 +30,7 @@ static inline int is_hugepage_only_range return slice_is_hugepage_only_range(mm, addr, len); return 0; } +#define is_hugepage_only_range is_hugepage_only_range #define __HAVE_ARCH_HUGETLB_FREE_PGD_RANGE void hugetlb_free_pgd_range(struct mmu_gather *tlb, unsigned long addr, --- a/arch/riscv/include/asm/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-is_hugepage_only_range +++ a/arch/riscv/include/asm/hugetlb.h @@ -5,12 +5,6 @@ #include <asm-generic/hugetlb.h> #include <asm/page.h> -static inline int is_hugepage_only_range(struct mm_struct *mm, - unsigned long addr, - unsigned long len) { - return 0; -} - static inline void arch_clear_hugepage_flags(struct page *page) { } --- a/arch/s390/include/asm/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-is_hugepage_only_range +++ a/arch/s390/include/asm/hugetlb.h @@ -21,13 +21,6 @@ pte_t huge_ptep_get(pte_t *ptep); pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep); -static inline bool is_hugepage_only_range(struct mm_struct *mm, - unsigned long addr, - unsigned long len) -{ - return false; -} - /* * If the arch doesn't supply something else, assume that hugepage * size aligned regions are ok without further preparation. --- a/arch/sh/include/asm/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-is_hugepage_only_range +++ a/arch/sh/include/asm/hugetlb.h @@ -5,12 +5,6 @@ #include <asm/cacheflush.h> #include <asm/page.h> -static inline int is_hugepage_only_range(struct mm_struct *mm, - unsigned long addr, - unsigned long len) { - return 0; -} - /* * If the arch doesn't supply something else, assume that hugepage * size aligned regions are ok without further preparation. 
--- a/arch/sparc/include/asm/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-is_hugepage_only_range +++ a/arch/sparc/include/asm/hugetlb.h @@ -20,12 +20,6 @@ void set_huge_pte_at(struct mm_struct *m pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep); -static inline int is_hugepage_only_range(struct mm_struct *mm, - unsigned long addr, - unsigned long len) { - return 0; -} - #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH static inline void huge_ptep_clear_flush(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep) --- a/arch/x86/include/asm/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-is_hugepage_only_range +++ a/arch/x86/include/asm/hugetlb.h @@ -7,12 +7,6 @@ #define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE) -static inline int is_hugepage_only_range(struct mm_struct *mm, - unsigned long addr, - unsigned long len) { - return 0; -} - static inline void arch_clear_hugepage_flags(struct page *page) { } --- a/include/linux/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-is_hugepage_only_range +++ a/include/linux/hugetlb.h @@ -591,6 +591,15 @@ static inline unsigned int blocks_per_hu #include <asm/hugetlb.h> +#ifndef is_hugepage_only_range +static inline int is_hugepage_only_range(struct mm_struct *mm, + unsigned long addr, unsigned long len) +{ + return 0; +} +#define is_hugepage_only_range is_hugepage_only_range +#endif + #ifndef arch_make_huge_pte static inline pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma, struct page *page, int writable) _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 077/131] mm/hugetlb: define a generic fallback for arch_clear_hugepage_flags() 2020-06-03 22:55 incoming Andrew Morton ` (75 preceding siblings ...) 2020-06-03 23:01 ` [patch 076/131] mm/hugetlb: define a generic fallback for is_hugepage_only_range() Andrew Morton @ 2020-06-03 23:01 ` Andrew Morton 2020-06-03 23:01 ` [patch 078/131] mm: simplify calling a compound page destructor Andrew Morton ` (54 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:01 UTC (permalink / raw) To: akpm, anshuman.khandual, benh, borntraeger, bp, catalin.marinas, dalias, davem, deller, fenghua.yu, gor, heiko.carstens, hpa, James.Bottomley, linux-mm, linux, mike.kravetz, mingo, mm-commits, mpe, palmer, paul.walmsley, paulus, tglx, tony.luck, torvalds, tsbogend, will, ysato From: Anshuman Khandual <anshuman.khandual@arm.com> Subject: mm/hugetlb: define a generic fallback for arch_clear_hugepage_flags() There are multiple similar definitions for arch_clear_hugepage_flags() on various platforms. Let's just add a generic fallback definition for platforms that do not override it. This helps reduce code duplication. Link: http://lkml.kernel.org/r/1588907271-11920-4-git-send-email-anshuman.khandual@arm.com Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Mike Kravetz <mike.kravetz@oracle.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: "James E.J.
Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Helge Deller <deller@gmx.de> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: Rich Felker <dalias@libc.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/arm/include/asm/hugetlb.h | 1 + arch/arm64/include/asm/hugetlb.h | 1 + arch/ia64/include/asm/hugetlb.h | 4 ---- arch/mips/include/asm/hugetlb.h | 4 ---- arch/parisc/include/asm/hugetlb.h | 4 ---- arch/powerpc/include/asm/hugetlb.h | 4 ---- arch/riscv/include/asm/hugetlb.h | 4 ---- arch/s390/include/asm/hugetlb.h | 1 + arch/sh/include/asm/hugetlb.h | 1 + arch/sparc/include/asm/hugetlb.h | 4 ---- arch/x86/include/asm/hugetlb.h | 4 ---- include/linux/hugetlb.h | 5 +++++ 12 files changed, 9 insertions(+), 28 deletions(-) --- a/arch/arm64/include/asm/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-arch_clear_hugepage_flags +++ a/arch/arm64/include/asm/hugetlb.h @@ -21,6 +21,7 @@ static inline void arch_clear_hugepage_f { clear_bit(PG_dcache_clean, &page->flags); } +#define arch_clear_hugepage_flags arch_clear_hugepage_flags extern pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma, struct page *page, int writable); --- a/arch/arm/include/asm/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-arch_clear_hugepage_flags +++ a/arch/arm/include/asm/hugetlb.h @@ -18,5 +18,6 @@ static inline void arch_clear_hugepage_f { clear_bit(PG_dcache_clean, &page->flags); } +#define 
arch_clear_hugepage_flags arch_clear_hugepage_flags #endif /* _ASM_ARM_HUGETLB_H */ --- a/arch/ia64/include/asm/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-arch_clear_hugepage_flags +++ a/arch/ia64/include/asm/hugetlb.h @@ -28,10 +28,6 @@ static inline void huge_ptep_clear_flush { } -static inline void arch_clear_hugepage_flags(struct page *page) -{ -} - #include <asm-generic/hugetlb.h> #endif /* _ASM_IA64_HUGETLB_H */ --- a/arch/mips/include/asm/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-arch_clear_hugepage_flags +++ a/arch/mips/include/asm/hugetlb.h @@ -75,10 +75,6 @@ static inline int huge_ptep_set_access_f return changed; } -static inline void arch_clear_hugepage_flags(struct page *page) -{ -} - #include <asm-generic/hugetlb.h> #endif /* __ASM_HUGETLB_H */ --- a/arch/parisc/include/asm/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-arch_clear_hugepage_flags +++ a/arch/parisc/include/asm/hugetlb.h @@ -42,10 +42,6 @@ int huge_ptep_set_access_flags(struct vm unsigned long addr, pte_t *ptep, pte_t pte, int dirty); -static inline void arch_clear_hugepage_flags(struct page *page) -{ -} - #include <asm-generic/hugetlb.h> #endif /* _ASM_PARISC64_HUGETLB_H */ --- a/arch/powerpc/include/asm/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-arch_clear_hugepage_flags +++ a/arch/powerpc/include/asm/hugetlb.h @@ -61,10 +61,6 @@ int huge_ptep_set_access_flags(struct vm unsigned long addr, pte_t *ptep, pte_t pte, int dirty); -static inline void arch_clear_hugepage_flags(struct page *page) -{ -} - #include <asm-generic/hugetlb.h> #else /* ! 
CONFIG_HUGETLB_PAGE */ --- a/arch/riscv/include/asm/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-arch_clear_hugepage_flags +++ a/arch/riscv/include/asm/hugetlb.h @@ -5,8 +5,4 @@ #include <asm-generic/hugetlb.h> #include <asm/page.h> -static inline void arch_clear_hugepage_flags(struct page *page) -{ -} - #endif /* _ASM_RISCV_HUGETLB_H */ --- a/arch/s390/include/asm/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-arch_clear_hugepage_flags +++ a/arch/s390/include/asm/hugetlb.h @@ -39,6 +39,7 @@ static inline void arch_clear_hugepage_f { clear_bit(PG_arch_1, &page->flags); } +#define arch_clear_hugepage_flags arch_clear_hugepage_flags static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep, unsigned long sz) --- a/arch/sh/include/asm/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-arch_clear_hugepage_flags +++ a/arch/sh/include/asm/hugetlb.h @@ -30,6 +30,7 @@ static inline void arch_clear_hugepage_f { clear_bit(PG_dcache_clean, &page->flags); } +#define arch_clear_hugepage_flags arch_clear_hugepage_flags #include <asm-generic/hugetlb.h> --- a/arch/sparc/include/asm/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-arch_clear_hugepage_flags +++ a/arch/sparc/include/asm/hugetlb.h @@ -47,10 +47,6 @@ static inline int huge_ptep_set_access_f return changed; } -static inline void arch_clear_hugepage_flags(struct page *page) -{ -} - #define __HAVE_ARCH_HUGETLB_FREE_PGD_RANGE void hugetlb_free_pgd_range(struct mmu_gather *tlb, unsigned long addr, unsigned long end, unsigned long floor, --- a/arch/x86/include/asm/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-arch_clear_hugepage_flags +++ a/arch/x86/include/asm/hugetlb.h @@ -7,8 +7,4 @@ #define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE) -static inline void arch_clear_hugepage_flags(struct page *page) -{ -} - #endif /* _ASM_X86_HUGETLB_H */ --- a/include/linux/hugetlb.h~mm-hugetlb-define-a-generic-fallback-for-arch_clear_hugepage_flags +++ a/include/linux/hugetlb.h @@ 
-600,6 +600,11 @@ static inline int is_hugepage_only_range #define is_hugepage_only_range is_hugepage_only_range #endif +#ifndef arch_clear_hugepage_flags +static inline void arch_clear_hugepage_flags(struct page *page) { } +#define arch_clear_hugepage_flags arch_clear_hugepage_flags +#endif + #ifndef arch_make_huge_pte static inline pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma, struct page *page, int writable) _ ^ permalink raw reply [flat|nested] 349+ messages in thread
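The generic-fallback idiom in the patch above (each arch that wants its own hook defines the function and then `#define`s the name to itself, so the generic header's `#ifndef` block is compiled out) can be sketched in standalone C. All names here (`arch_hook`, `call_hook`) and the flag value are invented for illustration; this is a minimal sketch of the idiom, not kernel code:

```c
#include <assert.h>

/* "arch header": opts in to a custom implementation of the hook and
 * defines the macro name-to-itself so the generic fallback is skipped. */
static inline int arch_hook(int flags) { return flags & ~1; }
#define arch_hook arch_hook

/* "generic header": identity fallback, used only when no arch header
 * provided an override above. With the override present, this block
 * is compiled out entirely. */
#ifndef arch_hook
static inline int arch_hook(int flags) { return flags; }
#define arch_hook arch_hook
#endif

int call_hook(int flags) { return arch_hook(flags); }
```

The `#define name name` trick costs nothing at runtime and lets the preprocessor, rather than weak symbols or Kconfig options, select the implementation.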
* [patch 078/131] mm: simplify calling a compound page destructor 2020-06-03 22:55 incoming Andrew Morton ` (76 preceding siblings ...) 2020-06-03 23:01 ` [patch 077/131] mm/hugetlb: define a generic fallback for arch_clear_hugepage_flags() Andrew Morton @ 2020-06-03 23:01 ` Andrew Morton 2020-06-03 23:01 ` [patch 079/131] mm/vmscan.c: use update_lru_size() in update_lru_sizes() Andrew Morton ` (53 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:01 UTC (permalink / raw) To: akpm, anshuman.khandual, david, kirill.shutemov, linux-mm, mm-commits, torvalds, willy From: "Matthew Wilcox (Oracle)" <willy@infradead.org> Subject: mm: simplify calling a compound page destructor None of the three callers of get_compound_page_dtor() want to know the value; they just want to call the function. Replace it with destroy_compound_page() which calls the dtor for them. Link: http://lkml.kernel.org/r/20200517105051.9352-1-willy@infradead.org Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Kirill A. 
Shutemov <kirill.shutemov@linux.intel.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/mm.h | 4 ++-- mm/swap.c | 5 +---- mm/vmscan.c | 4 ++-- 3 files changed, 5 insertions(+), 8 deletions(-) --- a/include/linux/mm.h~mm-simplify-calling-a-compound-page-destructor +++ a/include/linux/mm.h @@ -876,10 +876,10 @@ static inline void set_compound_page_dto page[1].compound_dtor = compound_dtor; } -static inline compound_page_dtor *get_compound_page_dtor(struct page *page) +static inline void destroy_compound_page(struct page *page) { VM_BUG_ON_PAGE(page[1].compound_dtor >= NR_COMPOUND_DTORS, page); - return compound_page_dtors[page[1].compound_dtor]; + compound_page_dtors[page[1].compound_dtor](page); } static inline unsigned int compound_order(struct page *page) --- a/mm/swap.c~mm-simplify-calling-a-compound-page-destructor +++ a/mm/swap.c @@ -102,8 +102,6 @@ static void __put_single_page(struct pag static void __put_compound_page(struct page *page) { - compound_page_dtor *dtor; - /* * __page_cache_release() is supposed to be called for thp, not for * hugetlb. 
This is because hugetlb page does never have PageLRU set @@ -112,8 +110,7 @@ static void __put_compound_page(struct p */ if (!PageHuge(page)) __page_cache_release(page); - dtor = get_compound_page_dtor(page); - (*dtor)(page); + destroy_compound_page(page); } void __put_page(struct page *page) --- a/mm/vmscan.c~mm-simplify-calling-a-compound-page-destructor +++ a/mm/vmscan.c @@ -1438,7 +1438,7 @@ free_it: * appear not as the counts should be low */ if (unlikely(PageTransHuge(page))) - (*get_compound_page_dtor(page))(page); + destroy_compound_page(page); else list_add(&page->lru, &free_pages); continue; @@ -1859,7 +1859,7 @@ static unsigned noinline_for_stack move_ if (unlikely(PageCompound(page))) { spin_unlock_irq(&pgdat->lru_lock); - (*get_compound_page_dtor(page))(page); + destroy_compound_page(page); spin_lock_irq(&pgdat->lru_lock); } else list_add(&page->lru, &pages_to_free); _ ^ permalink raw reply [flat|nested] 349+ messages in thread
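The refactor above replaces "fetch a destructor pointer, then invoke it" with a wrapper that does the lookup and the call in one place. A hypothetical userspace sketch of that dispatch pattern follows; `fake_page`, the destructor table, and the recorded values are all invented for illustration:

```c
#include <assert.h>

enum dtor_id { DTOR_DEFAULT, DTOR_HUGE, NR_DTORS };

struct fake_page {
    enum dtor_id dtor;
    int freed_by;            /* records which destructor actually ran */
};

static void free_default(struct fake_page *p) { p->freed_by = 1; }
static void free_huge(struct fake_page *p)    { p->freed_by = 2; }

typedef void (*page_dtor)(struct fake_page *);
static const page_dtor dtors[NR_DTORS] = { free_default, free_huge };

/* New style: callers never see the function pointer at all; the wrapper
 * validates the index (mirroring the VM_BUG_ON_PAGE() check) and calls. */
void destroy_page(struct fake_page *p)
{
    assert(p->dtor < NR_DTORS);
    dtors[p->dtor](p);
}
```

Hiding the pointer behind `destroy_page()` shrinks every call site from `(*get_dtor(p))(p)` to `destroy_page(p)` and keeps the bounds check in exactly one place.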
* [patch 079/131] mm/vmscan.c: use update_lru_size() in update_lru_sizes() 2020-06-03 22:55 incoming Andrew Morton ` (77 preceding siblings ...) 2020-06-03 23:01 ` [patch 078/131] mm: simplify calling a compound page destructor Andrew Morton @ 2020-06-03 23:01 ` Andrew Morton 2020-06-03 23:01 ` [patch 080/131] mm/vmscan: count lazyfree pages and fix nr_isolated_* mismatch Andrew Morton ` (52 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:01 UTC (permalink / raw) To: akpm, bhe, linux-mm, mhocko, mm-commits, richard.weiyang, torvalds From: Wei Yang <richard.weiyang@gmail.com> Subject: mm/vmscan.c: use update_lru_size() in update_lru_sizes() We already defined the helper update_lru_size(). Let's use it to reduce code duplication. Link: http://lkml.kernel.org/r/20200331221550.1011-1-richard.weiyang@gmail.com Signed-off-by: Wei Yang <richard.weiyang@gmail.com> Reviewed-by: Baoquan He <bhe@redhat.com> Acked-by: Michal Hocko <mhocko@suse.com> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/vmscan.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) --- a/mm/vmscan.c~mm-vmscanc-use-update_lru_size-in-update_lru_sizes +++ a/mm/vmscan.c @@ -1602,10 +1602,7 @@ static __always_inline void update_lru_s if (!nr_zone_taken[zid]) continue; - __update_lru_size(lruvec, lru, zid, -nr_zone_taken[zid]); -#ifdef CONFIG_MEMCG - mem_cgroup_update_lru_size(lruvec, lru, zid, -nr_zone_taken[zid]); -#endif + update_lru_size(lruvec, lru, zid, -nr_zone_taken[zid]); } } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 080/131] mm/vmscan: count lazyfree pages and fix nr_isolated_* mismatch 2020-06-03 22:55 incoming Andrew Morton ` (78 preceding siblings ...) 2020-06-03 23:01 ` [patch 079/131] mm/vmscan.c: use update_lru_size() in update_lru_sizes() Andrew Morton @ 2020-06-03 23:01 ` Andrew Morton 2020-06-03 23:01 ` [patch 081/131] mm/vmscan.c: change prototype for shrink_page_list Andrew Morton ` (51 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:01 UTC (permalink / raw) To: akpm, hannes, jaewon31.kim, linux-mm, m.szyprowski, mgorman, mina86, minchan, mm-commits, shli, torvalds, ytk.lee From: Jaewon Kim <jaewon31.kim@samsung.com> Subject: mm/vmscan: count lazyfree pages and fix nr_isolated_* mismatch Fix an nr_isolate_* mismatch problem between cma and dirty lazyfree pages. If try_to_unmap_one is used for reclaim and it detects a dirty lazyfree page, then the lazyfree page is changed to a normal anon page having SwapBacked by commit 802a3a92ad7a ("mm: reclaim MADV_FREE pages"). Even with the change, the reclaim context correctly counts isolated files because it uses is_file_lru to distinguish file pages. And the change to anon does not happen when try_to_unmap_one is used for migration, so a migration context like compaction also correctly counts isolated files even though it uses page_is_file_lru instead of is_file_lru. Recently page_is_file_cache was renamed to page_is_file_lru by commit 9de4f22a60f7 ("mm: code cleanup for MADV_FREE"). But the nr_isolate_* mismatch problem happens on cma alloc. There is reclaim_clean_pages_from_list, which is used only by cma. It was introduced by commit 02c6de8d757c ("mm: cma: discard clean pages during contiguous allocation instead of migration") to reclaim clean file pages without migration. The cma alloc uses both reclaim_clean_pages_from_list and migrate_pages, and it uses page_is_file_lru to count isolated files.
If there are dirty lazyfree pages allocated from a cma memory region, the pages are counted as isolated file at the beginning but are counted as isolated anon after the operation finishes. Mem-Info: Node 0 active_anon:3045904kB inactive_anon:611448kB active_file:14892kB inactive_file:205636kB unevictable:10416kB isolated(anon):0kB isolated(file):37664kB mapped:630216kB dirty:384kB writeback:0kB shmem:42576kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no As the log above shows, there were far too many isolated file pages, 37664kB, which triggers too_many_isolated in reclaim even though no file pages are actually isolated system-wide. It can be reproduced by running two programs, one writing to MADV_FREE pages and the other doing cma alloc. Although isolated anon reads as 0, the internal value of isolated anon was the negative of the isolated file count. Fix this by compensating the isolated count for both LRU lists: count non-discarded lazyfree pages in shrink_page_list, then compensate the counted number in reclaim_clean_pages_from_list.
Link: http://lkml.kernel.org/r/20200426011718.30246-1-jaewon31.kim@samsung.com Signed-off-by: Jaewon Kim <jaewon31.kim@samsung.com> Reported-by: Yong-Taek Lee <ytk.lee@samsung.com> Suggested-by: Minchan Kim <minchan@kernel.org> Acked-by: Minchan Kim <minchan@kernel.org> Cc: Mel Gorman <mgorman@suse.de> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Michal Nazarewicz <mina86@mina86.com> Cc: Shaohua Li <shli@fb.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/vmstat.h | 1 + mm/vmscan.c | 26 ++++++++++++++++++++------ 2 files changed, 21 insertions(+), 6 deletions(-) --- a/include/linux/vmstat.h~mm-vmscan-count-layzfree-pages-and-fix-nr_isolated_-mismatch +++ a/include/linux/vmstat.h @@ -29,6 +29,7 @@ struct reclaim_stat { unsigned nr_activate[2]; unsigned nr_ref_keep; unsigned nr_unmap_fail; + unsigned nr_lazyfree_fail; }; enum writeback_stat_item { --- a/mm/vmscan.c~mm-vmscan-count-layzfree-pages-and-fix-nr_isolated_-mismatch +++ a/mm/vmscan.c @@ -1295,11 +1295,15 @@ static unsigned long shrink_page_list(st */ if (page_mapped(page)) { enum ttu_flags flags = ttu_flags | TTU_BATCH_FLUSH; + bool was_swapbacked = PageSwapBacked(page); if (unlikely(PageTransHuge(page))) flags |= TTU_SPLIT_HUGE_PMD; + if (!try_to_unmap(page, flags)) { stat->nr_unmap_fail += nr_pages; + if (!was_swapbacked && PageSwapBacked(page)) + stat->nr_lazyfree_fail += nr_pages; goto activate_locked; } } @@ -1491,8 +1495,8 @@ unsigned long reclaim_clean_pages_from_l .priority = DEF_PRIORITY, .may_unmap = 1, }; - struct reclaim_stat dummy_stat; - unsigned long ret; + struct reclaim_stat stat; + unsigned long nr_reclaimed; struct page *page, *next; LIST_HEAD(clean_pages); @@ -1504,11 +1508,21 @@ unsigned long reclaim_clean_pages_from_l } } - ret = shrink_page_list(&clean_pages, zone->zone_pgdat, &sc, - TTU_IGNORE_ACCESS, &dummy_stat, true); + nr_reclaimed = shrink_page_list(&clean_pages, zone->zone_pgdat, &sc, + 
TTU_IGNORE_ACCESS, &stat, true); list_splice(&clean_pages, page_list); - mod_node_page_state(zone->zone_pgdat, NR_ISOLATED_FILE, -ret); - return ret; + mod_node_page_state(zone->zone_pgdat, NR_ISOLATED_FILE, -nr_reclaimed); + /* + * Since lazyfree pages are isolated from file LRU from the beginning, + * they will rotate back to anonymous LRU in the end if it failed to + * discard so isolated count will be mismatched. + * Compensate the isolated count for both LRU lists. + */ + mod_node_page_state(zone->zone_pgdat, NR_ISOLATED_ANON, + stat.nr_lazyfree_fail); + mod_node_page_state(zone->zone_pgdat, NR_ISOLATED_FILE, + -stat.nr_lazyfree_fail); + return nr_reclaimed; } /* _ ^ permalink raw reply [flat|nested] 349+ messages in thread
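The compensation at the end of reclaim_clean_pages_from_list() is plain bookkeeping: every page on the clean list was counted as an isolated *file* page at isolation time, but lazyfree pages that fail to be discarded rotate back to the *anon* LRU, so both per-node counters must be adjusted by the number of failures or they drift apart. A hypothetical sketch of just that arithmetic (struct and names invented for illustration):

```c
#include <assert.h>

struct node_stats {
    long isolated_anon;
    long isolated_file;
};

/* Reclaimed pages simply leave the file counter; lazyfree failures move
 * from the file counter to the anon counter, since that is the LRU they
 * will actually be put back on. */
void finish_clean_reclaim(struct node_stats *ns,
                          long nr_reclaimed, long nr_lazyfree_fail)
{
    ns->isolated_file -= nr_reclaimed;
    ns->isolated_anon += nr_lazyfree_fail;
    ns->isolated_file -= nr_lazyfree_fail;
}
```

Without the last two adjustments the file counter leaks upward while the anon counter goes negative by the same amount, which is exactly the `isolated(file):37664kB` / `isolated(anon):0kB` skew in the report.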
* [patch 081/131] mm/vmscan.c: change prototype for shrink_page_list 2020-06-03 22:55 incoming Andrew Morton ` (79 preceding siblings ...) 2020-06-03 23:01 ` [patch 080/131] mm/vmscan: count lazyfree pages and fix nr_isolated_* mismatch Andrew Morton @ 2020-06-03 23:01 ` Andrew Morton 2020-06-03 23:01 ` [patch 082/131] mm/vmscan: update the comment of should_continue_reclaim() Andrew Morton ` (50 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:01 UTC (permalink / raw) To: a.sahrawat, akpm, linux-mm, maninder1.s, mgorman, mhocko, mm-commits, torvalds, v.narang, vbabka From: Maninder Singh <maninder1.s@samsung.com> Subject: mm/vmscan.c: change prototype for shrink_page_list commit 3c710c1ad11b ("mm, vmscan extract shrink_page_list reclaim counters into a struct") changed the data type used by the function, so change the return type of the function and its callers to match. Link: http://lkml.kernel.org/r/1588168259-25604-1-git-send-email-maninder1.s@samsung.com Signed-off-by: Vaneet Narang <v.narang@samsung.com> Signed-off-by: Maninder Singh <maninder1.s@samsung.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Amit Sahrawat <a.sahrawat@samsung.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/internal.h | 2 +- mm/page_alloc.c | 2 +- mm/vmscan.c | 24 ++++++++++++------------ 3 files changed, 14 insertions(+), 14 deletions(-) --- a/mm/internal.h~mm-vmscanc-change-prototype-for-shrink_page_list +++ a/mm/internal.h @@ -538,7 +538,7 @@ extern unsigned long __must_check vm_mm unsigned long, unsigned long); extern void set_pageblock_order(void); -unsigned long reclaim_clean_pages_from_list(struct zone *zone, +unsigned int reclaim_clean_pages_from_list(struct zone *zone, struct list_head *page_list); /* The ALLOC_WMARK bits are used as an index to zone->watermark */ #define ALLOC_WMARK_MIN WMARK_MIN ---
a/mm/page_alloc.c~mm-vmscanc-change-prototype-for-shrink_page_list +++ a/mm/page_alloc.c @@ -8355,7 +8355,7 @@ static int __alloc_contig_migrate_range( unsigned long start, unsigned long end) { /* This function is based on compact_zone() from compaction.c. */ - unsigned long nr_reclaimed; + unsigned int nr_reclaimed; unsigned long pfn = start; unsigned int tries = 0; int ret = 0; --- a/mm/vmscan.c~mm-vmscanc-change-prototype-for-shrink_page_list +++ a/mm/vmscan.c @@ -1066,17 +1066,17 @@ static void page_check_dirty_writeback(s /* * shrink_page_list() returns the number of reclaimed pages */ -static unsigned long shrink_page_list(struct list_head *page_list, - struct pglist_data *pgdat, - struct scan_control *sc, - enum ttu_flags ttu_flags, - struct reclaim_stat *stat, - bool ignore_references) +static unsigned int shrink_page_list(struct list_head *page_list, + struct pglist_data *pgdat, + struct scan_control *sc, + enum ttu_flags ttu_flags, + struct reclaim_stat *stat, + bool ignore_references) { LIST_HEAD(ret_pages); LIST_HEAD(free_pages); - unsigned nr_reclaimed = 0; - unsigned pgactivate = 0; + unsigned int nr_reclaimed = 0; + unsigned int pgactivate = 0; memset(stat, 0, sizeof(*stat)); cond_resched(); @@ -1487,7 +1487,7 @@ keep: return nr_reclaimed; } -unsigned long reclaim_clean_pages_from_list(struct zone *zone, +unsigned int reclaim_clean_pages_from_list(struct zone *zone, struct list_head *page_list) { struct scan_control sc = { @@ -1496,7 +1496,7 @@ unsigned long reclaim_clean_pages_from_l .may_unmap = 1, }; struct reclaim_stat stat; - unsigned long nr_reclaimed; + unsigned int nr_reclaimed; struct page *page, *next; LIST_HEAD(clean_pages); @@ -1910,7 +1910,7 @@ shrink_inactive_list(unsigned long nr_to { LIST_HEAD(page_list); unsigned long nr_scanned; - unsigned long nr_reclaimed = 0; + unsigned int nr_reclaimed = 0; unsigned long nr_taken; struct reclaim_stat stat; int file = is_file_lru(lru); @@ -2106,7 +2106,7 @@ static void shrink_active_list(unsigned 
unsigned long reclaim_pages(struct list_head *page_list) { int nid = NUMA_NO_NODE; - unsigned long nr_reclaimed = 0; + unsigned int nr_reclaimed = 0; LIST_HEAD(node_page_list); struct reclaim_stat dummy_stat; struct page *page; _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 082/131] mm/vmscan: update the comment of should_continue_reclaim() 2020-06-03 22:55 incoming Andrew Morton ` (80 preceding siblings ...) 2020-06-03 23:01 ` [patch 081/131] mm/vmscan.c: change prototype for shrink_page_list Andrew Morton @ 2020-06-03 23:01 ` Andrew Morton 2020-06-03 23:01 ` [patch 083/131] mm: fix NUMA node file count error in replace_page_cache() Andrew Morton ` (49 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:01 UTC (permalink / raw) To: akpm, chenqiwu, linux-mm, mm-commits, qiwuchen55, torvalds From: Qiwu Chen <qiwuchen55@gmail.com> Subject: mm/vmscan: update the comment of should_continue_reclaim() try_to_compact_zone() has been replaced by try_to_compact_pages(), so the comment in should_continue_reclaim() needs to be updated accordingly. Link: http://lkml.kernel.org/r/20200501034907.22991-1-chenqiwu@xiaomi.com Signed-off-by: Qiwu Chen <chenqiwu@xiaomi.com> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/vmscan.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) --- a/mm/vmscan.c~mm-vmscan-update-the-comment-of-should_continue_reclaim +++ a/mm/vmscan.c @@ -2577,7 +2577,7 @@ static bool in_reclaim_compaction(struct * Reclaim/compaction is used for high-order allocation requests. It reclaims * order-0 pages before compacting the zone. should_continue_reclaim() returns * true if more pages should be reclaimed such that when the page allocator - * calls try_to_compact_zone() that it will have enough free pages to succeed. + * calls try_to_compact_pages() that it will have enough free pages to succeed. * It will give up earlier than that if there is difficulty reclaiming pages. */ static inline bool should_continue_reclaim(struct pglist_data *pgdat, _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 083/131] mm: fix NUMA node file count error in replace_page_cache() 2020-06-03 22:55 incoming Andrew Morton ` (81 preceding siblings ...) 2020-06-03 23:01 ` [patch 082/131] mm/vmscan: update the comment of should_continue_reclaim() Andrew Morton @ 2020-06-03 23:01 ` Andrew Morton 2020-06-03 23:01 ` [patch 084/131] mm: memcontrol: fix stat-corrupting race in charge moving Andrew Morton ` (48 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:01 UTC (permalink / raw) To: akpm, alex.shi, bsingharora, guro, hannes, hughd, iamjoonsoo.kim, kirill, linux-mm, mhocko, mm-commits, shakeelb, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: fix NUMA node file count error in replace_page_cache() Patch series "mm: memcontrol: charge swapin pages on instantiation", v2. This patch series reworks memcg to charge swapin pages directly at swapin time, rather than at fault time, which may be much later, or not happen at all. Changes in version 2: - prevent double charges on pre-allocated hugepages in khugepaged - leave shmem swapcache when charging fails to avoid double IO (Joonsoo) - fix temporary accounting bug by switching rmap<->commit (Joonsoo) - fix double swap charge bug in cgroup1/cgroup2 code gating - simplify swapin error checking (Joonsoo) - mm: memcontrol: document the new swap control behavior (Alex) - review tags The delayed swapin charging scheme we have right now causes problems: - Alex's per-cgroup lru_lock patches rely on pages that have been isolated from the LRU to have a stable page->mem_cgroup; otherwise the lock may change underneath him. Swapcache pages are charged only after they are added to the LRU, and charging doesn't follow the LRU isolation protocol. - Joonsoo's anon workingset patches need a suitable LRU at the time the page enters the swap cache and displaces the non-resident info. But the correct LRU is only available after charging. - It's a containment hole / DoS vector. 
Users can trigger arbitrarily large swap readahead using MADV_WILLNEED. The memory is never charged unless somebody actually touches it. - It complicates the page->mem_cgroup stabilization rules In order to charge pages directly at swapin time, the memcg code base needs to be prepared, and several overdue cleanups become a necessity: To charge pages at swapin time, we need to always have cgroup ownership tracking of swap records. We also cannot rely on page->mapping to tell apart page types at charge time, because that's only set up during a page fault. To eliminate the page->mapping dependency, memcg needs to ditch its private page type counters (MEMCG_CACHE, MEMCG_RSS, NR_SHMEM) in favor of the generic vmstat counters and accounting sites, such as NR_FILE_PAGES, NR_ANON_MAPPED etc. To switch to generic vmstat counters, the charge sequence must be adjusted such that page->mem_cgroup is set up by the time these counters are modified. The series is structured as follows: 1. Bug fixes 2. Decoupling charging from rmap 3. Swap controller integration into memcg 4. Direct swapin charging This patch (of 19): When replacing one page with another one in the cache, we have to decrease the file count of the old page's NUMA node and increase the one of the new NUMA node, otherwise the old node leaks the count and the new node eventually underflows its counter. Link: http://lkml.kernel.org/r/20200508183105.225460-1-hannes@cmpxchg.org Link: http://lkml.kernel.org/r/20200508183105.225460-2-hannes@cmpxchg.org Fixes: 74d609585d8b ("page cache: Add and replace pages using the XArray") Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com> Reviewed-by: Shakeel Butt <shakeelb@google.com> Reviewed-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Reviewed-by: Balbir Singh <bsingharora@gmail.com> Cc: Hugh Dickins <hughd@google.com> Cc: Michal Hocko <mhocko@suse.com> Cc: "Kirill A. 
Shutemov" <kirill@shutemov.name> Cc: Roman Gushchin <guro@fb.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/filemap.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) --- a/mm/filemap.c~mm-fix-numa-node-file-count-error-in-replace_page_cache +++ a/mm/filemap.c @@ -808,11 +808,11 @@ int replace_page_cache_page(struct page old->mapping = NULL; /* hugetlb pages do not participate in page cache accounting. */ if (!PageHuge(old)) - __dec_node_page_state(new, NR_FILE_PAGES); + __dec_node_page_state(old, NR_FILE_PAGES); if (!PageHuge(new)) __inc_node_page_state(new, NR_FILE_PAGES); if (PageSwapBacked(old)) - __dec_node_page_state(new, NR_SHMEM); + __dec_node_page_state(old, NR_SHMEM); if (PageSwapBacked(new)) __inc_node_page_state(new, NR_SHMEM); xas_unlock_irqrestore(&xas, flags); _ ^ permalink raw reply [flat|nested] 349+ messages in thread
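The fix above is easy to miss in the diff: the per-node counter must be decremented using the *old* page's node, not the new one's. A toy model of this bug class, with the counter array and both function names invented for illustration:

```c
#include <assert.h>

/* Counters indexed by node id. Using new_nid on both sides (the bug)
 * nets to zero on the new node and leaks the count on the old node
 * forever; eventually the new node's counter underflows too. */
void replace_buggy(long file_count[], int old_nid, int new_nid)
{
    (void)old_nid;
    file_count[new_nid]--;      /* BUG: should use old_nid */
    file_count[new_nid]++;
}

/* Mirrors the corrected code: decrement the old object's home node,
 * increment the new object's. */
void replace_fixed(long file_count[], int old_nid, int new_nid)
{
    file_count[old_nid]--;
    file_count[new_nid]++;
}
```

The paired-update rule generalizes: whenever an accounting key (here, the NUMA node) can differ between the outgoing and incoming object, the decrement must be keyed by the outgoing one.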
* [patch 084/131] mm: memcontrol: fix stat-corrupting race in charge moving 2020-06-03 22:55 incoming Andrew Morton ` (82 preceding siblings ...) 2020-06-03 23:01 ` [patch 083/131] mm: fix NUMA node file count error in replace_page_cache() Andrew Morton @ 2020-06-03 23:01 ` Andrew Morton 2020-06-03 23:01 ` [patch 085/131] mm: memcontrol: drop @compound parameter from memcg charging API Andrew Morton ` (47 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:01 UTC (permalink / raw) To: akpm, alex.shi, bsingharora, guro, hannes, hughd, iamjoonsoo.kim, kirill, linux-mm, mhocko, mm-commits, shakeelb, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: memcontrol: fix stat-corrupting race in charge moving The move_lock is a per-memcg lock, but the VM accounting code that needs to acquire it comes from the page and follows page->mem_cgroup under RCU protection. That means that the page becomes unlocked not when we drop the move_lock, but when we update page->mem_cgroup. And that assignment doesn't imply any memory ordering. If that pointer write gets reordered against the reads of the page state - page_mapped, PageDirty etc. the state may change while we rely on it being stable and we can end up corrupting the counters. Place an SMP memory barrier to make sure we're done with all page state by the time the new page->mem_cgroup becomes visible. Also replace the open-coded move_lock with a lock_page_memcg() to make it more obvious what we're serializing against. Link: http://lkml.kernel.org/r/20200508183105.225460-3-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Reviewed-by: Shakeel Butt <shakeelb@google.com> Cc: Alex Shi <alex.shi@linux.alibaba.com> Cc: Hugh Dickins <hughd@google.com> Cc: "Kirill A. 
Shutemov" <kirill@shutemov.name> Cc: Michal Hocko <mhocko@suse.com> Cc: Roman Gushchin <guro@fb.com> Cc: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/memcontrol.c | 26 ++++++++++++++------------ 1 file changed, 14 insertions(+), 12 deletions(-) --- a/mm/memcontrol.c~mm-memcontrol-fix-stat-corrupting-race-in-charge-moving +++ a/mm/memcontrol.c @@ -5432,7 +5432,6 @@ static int mem_cgroup_move_account(struc { struct lruvec *from_vec, *to_vec; struct pglist_data *pgdat; - unsigned long flags; unsigned int nr_pages = compound ? hpage_nr_pages(page) : 1; int ret; bool anon; @@ -5459,18 +5458,13 @@ static int mem_cgroup_move_account(struc from_vec = mem_cgroup_lruvec(from, pgdat); to_vec = mem_cgroup_lruvec(to, pgdat); - spin_lock_irqsave(&from->move_lock, flags); + lock_page_memcg(page); if (!anon && page_mapped(page)) { __mod_lruvec_state(from_vec, NR_FILE_MAPPED, -nr_pages); __mod_lruvec_state(to_vec, NR_FILE_MAPPED, nr_pages); } - /* - * move_lock grabbed above and caller set from->moving_account, so - * mod_memcg_page_state will serialize updates to PageDirty. - * So mapping should be stable for dirty pages. - */ if (!anon && PageDirty(page)) { struct address_space *mapping = page_mapping(page); @@ -5486,15 +5480,23 @@ static int mem_cgroup_move_account(struc } /* + * All state has been migrated, let's switch to the new memcg. + * * It is safe to change page->mem_cgroup here because the page - * is referenced, charged, and isolated - we can't race with - * uncharging, charging, migration, or LRU putback. + * is referenced, charged, isolated, and locked: we can't race + * with (un)charging, migration, LRU putback, or anything else + * that would rely on a stable page->mem_cgroup. + * + * Note that lock_page_memcg is a memcg lock, not a page lock, + * to save space. As soon as we switch page->mem_cgroup to a + * new memcg that isn't locked, the above state can change + * concurrently again. 
Make sure we're truly done with it. */ + smp_mb(); - /* caller should have done css_get */ - page->mem_cgroup = to; + page->mem_cgroup = to; /* caller should have done css_get */ - spin_unlock_irqrestore(&from->move_lock, flags); + __unlock_page_memcg(from); ret = 0; _ ^ permalink raw reply [flat|nested] 349+ messages in thread
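The ordering requirement in the patch above generalizes beyond memcg: a lock-free reader that follows a pointer must never observe object state from before the pointer was switched. A hedged userspace analogue in C11 follows; the kernel patch uses smp_mb() before a plain store, while here a release store plays the same role, and all names (`item`, `group`, `move_account`) are invented for illustration:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct group { int id; };

struct item {
    long mapped;                    /* "page state" updated while locked */
    _Atomic(struct group *) owner;  /* readers chase this without a lock */
};

/* Finish every update to the item's state, then publish the new owner.
 * memory_order_release guarantees that a reader acquiring the new owner
 * pointer also sees the completed state updates; a relaxed store would
 * permit exactly the reordering the patch's smp_mb() prevents. */
void move_account(struct item *it, long delta, struct group *to)
{
    it->mapped += delta;
    atomic_store_explicit(&it->owner, to, memory_order_release);
}
```

As in the patch, the store to `owner` is the real "unlock": the moment it lands, the old group's statistics must already be consistent, because nothing stops a concurrent reader from using the new pointer immediately.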
* [patch 085/131] mm: memcontrol: drop @compound parameter from memcg charging API 2020-06-03 22:55 incoming Andrew Morton ` (83 preceding siblings ...) 2020-06-03 23:01 ` [patch 084/131] mm: memcontrol: fix stat-corrupting race in charge moving Andrew Morton @ 2020-06-03 23:01 ` Andrew Morton 2020-06-03 23:01 ` [patch 086/131] mm: shmem: remove rare optimization when swapin races with hole punching Andrew Morton ` (46 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:01 UTC (permalink / raw) To: akpm, alex.shi, bsingharora, guro, hannes, hughd, iamjoonsoo.kim, kirill, linux-mm, mhocko, mm-commits, shakeelb, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: memcontrol: drop @compound parameter from memcg charging API The memcg charging API carries a boolean @compound parameter that tells whether the page we're dealing with is a hugepage. mem_cgroup_commit_charge() has another boolean @lrucare that indicates whether the page needs LRU locking or not while charging. The majority of callsites know those parameters at compile time, which results in a lot of naked "false, false" argument lists. This makes for cryptic code and is a breeding ground for subtle mistakes. Thankfully, the huge page state can be inferred from the page itself and doesn't need to be passed along. This is safe because charging completes before the page is published and somebody may split it. Simplify the callsites by removing @compound, and let memcg infer the state by using hpage_nr_pages() unconditionally. That function does PageTransHuge() to identify huge pages, which also helpfully asserts that nobody passes in tail pages by accident. The following patches will introduce a new charging API, best not to carry over unnecessary weight. 
Link: http://lkml.kernel.org/r/20200508183105.225460-4-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com> Reviewed-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Reviewed-by: Shakeel Butt <shakeelb@google.com> Cc: Hugh Dickins <hughd@google.com> Cc: "Kirill A. Shutemov" <kirill@shutemov.name> Cc: Michal Hocko <mhocko@suse.com> Cc: Roman Gushchin <guro@fb.com> Cc: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/memcontrol.h | 22 +++++++------------- kernel/events/uprobes.c | 6 ++--- mm/filemap.c | 6 ++--- mm/huge_memory.c | 8 +++---- mm/khugepaged.c | 20 +++++++++--------- mm/memcontrol.c | 38 +++++++++++++---------------------- mm/memory.c | 32 +++++++++++++---------------- mm/migrate.c | 6 ++--- mm/shmem.c | 22 ++++++++------------ mm/swapfile.c | 9 +++----- mm/userfaultfd.c | 6 ++--- 11 files changed, 77 insertions(+), 98 deletions(-) --- a/include/linux/memcontrol.h~mm-memcontrol-drop-compound-parameter-from-memcg-charging-api +++ a/include/linux/memcontrol.h @@ -359,15 +359,12 @@ enum mem_cgroup_protection mem_cgroup_pr struct mem_cgroup *memcg); int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm, - gfp_t gfp_mask, struct mem_cgroup **memcgp, - bool compound); + gfp_t gfp_mask, struct mem_cgroup **memcgp); int mem_cgroup_try_charge_delay(struct page *page, struct mm_struct *mm, - gfp_t gfp_mask, struct mem_cgroup **memcgp, - bool compound); + gfp_t gfp_mask, struct mem_cgroup **memcgp); void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg, - bool lrucare, bool compound); -void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg, - bool compound); + bool lrucare); +void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg); void mem_cgroup_uncharge(struct page *page); void mem_cgroup_uncharge_list(struct list_head *page_list); @@ -849,8 +846,7 @@ static inline enum 
mem_cgroup_protection static inline int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask, - struct mem_cgroup **memcgp, - bool compound) + struct mem_cgroup **memcgp) { *memcgp = NULL; return 0; @@ -859,8 +855,7 @@ static inline int mem_cgroup_try_charge( static inline int mem_cgroup_try_charge_delay(struct page *page, struct mm_struct *mm, gfp_t gfp_mask, - struct mem_cgroup **memcgp, - bool compound) + struct mem_cgroup **memcgp) { *memcgp = NULL; return 0; @@ -868,13 +863,12 @@ static inline int mem_cgroup_try_charge_ static inline void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg, - bool lrucare, bool compound) + bool lrucare) { } static inline void mem_cgroup_cancel_charge(struct page *page, - struct mem_cgroup *memcg, - bool compound) + struct mem_cgroup *memcg) { } --- a/kernel/events/uprobes.c~mm-memcontrol-drop-compound-parameter-from-memcg-charging-api +++ a/kernel/events/uprobes.c @@ -169,7 +169,7 @@ static int __replace_page(struct vm_area if (new_page) { err = mem_cgroup_try_charge(new_page, vma->vm_mm, GFP_KERNEL, - &memcg, false); + &memcg); if (err) return err; } @@ -181,7 +181,7 @@ static int __replace_page(struct vm_area err = -EAGAIN; if (!page_vma_mapped_walk(&pvmw)) { if (new_page) - mem_cgroup_cancel_charge(new_page, memcg, false); + mem_cgroup_cancel_charge(new_page, memcg); goto unlock; } VM_BUG_ON_PAGE(addr != pvmw.address, old_page); @@ -189,7 +189,7 @@ static int __replace_page(struct vm_area if (new_page) { get_page(new_page); page_add_new_anon_rmap(new_page, vma, addr, false); - mem_cgroup_commit_charge(new_page, memcg, false, false); + mem_cgroup_commit_charge(new_page, memcg, false); lru_cache_add_active_or_unevictable(new_page, vma); } else /* no new page, just dec_mm_counter for old_page */ --- a/mm/filemap.c~mm-memcontrol-drop-compound-parameter-from-memcg-charging-api +++ a/mm/filemap.c @@ -842,7 +842,7 @@ static int __add_to_page_cache_locked(st if (!huge) { error = 
mem_cgroup_try_charge(page, current->mm, - gfp_mask, &memcg, false); + gfp_mask, &memcg); if (error) return error; } @@ -878,14 +878,14 @@ unlock: goto error; if (!huge) - mem_cgroup_commit_charge(page, memcg, false, false); + mem_cgroup_commit_charge(page, memcg, false); trace_mm_filemap_add_to_page_cache(page); return 0; error: page->mapping = NULL; /* Leave page->index set: truncation relies upon it */ if (!huge) - mem_cgroup_cancel_charge(page, memcg, false); + mem_cgroup_cancel_charge(page, memcg); put_page(page); return xas_error(&xas); } --- a/mm/huge_memory.c~mm-memcontrol-drop-compound-parameter-from-memcg-charging-api +++ a/mm/huge_memory.c @@ -594,7 +594,7 @@ static vm_fault_t __do_huge_pmd_anonymou VM_BUG_ON_PAGE(!PageCompound(page), page); - if (mem_cgroup_try_charge_delay(page, vma->vm_mm, gfp, &memcg, true)) { + if (mem_cgroup_try_charge_delay(page, vma->vm_mm, gfp, &memcg)) { put_page(page); count_vm_event(THP_FAULT_FALLBACK); count_vm_event(THP_FAULT_FALLBACK_CHARGE); @@ -630,7 +630,7 @@ static vm_fault_t __do_huge_pmd_anonymou vm_fault_t ret2; spin_unlock(vmf->ptl); - mem_cgroup_cancel_charge(page, memcg, true); + mem_cgroup_cancel_charge(page, memcg); put_page(page); pte_free(vma->vm_mm, pgtable); ret2 = handle_userfault(vmf, VM_UFFD_MISSING); @@ -641,7 +641,7 @@ static vm_fault_t __do_huge_pmd_anonymou entry = mk_huge_pmd(page, vma->vm_page_prot); entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); page_add_new_anon_rmap(page, vma, haddr, true); - mem_cgroup_commit_charge(page, memcg, false, true); + mem_cgroup_commit_charge(page, memcg, false); lru_cache_add_active_or_unevictable(page, vma); pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable); set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry); @@ -658,7 +658,7 @@ unlock_release: release: if (pgtable) pte_free(vma->vm_mm, pgtable); - mem_cgroup_cancel_charge(page, memcg, true); + mem_cgroup_cancel_charge(page, memcg); put_page(page); return ret; --- 
a/mm/khugepaged.c~mm-memcontrol-drop-compound-parameter-from-memcg-charging-api +++ a/mm/khugepaged.c @@ -1060,7 +1060,7 @@ static void collapse_huge_page(struct mm goto out_nolock; } - if (unlikely(mem_cgroup_try_charge(new_page, mm, gfp, &memcg, true))) { + if (unlikely(mem_cgroup_try_charge(new_page, mm, gfp, &memcg))) { result = SCAN_CGROUP_CHARGE_FAIL; goto out_nolock; } @@ -1068,7 +1068,7 @@ static void collapse_huge_page(struct mm down_read(&mm->mmap_sem); result = hugepage_vma_revalidate(mm, address, &vma); if (result) { - mem_cgroup_cancel_charge(new_page, memcg, true); + mem_cgroup_cancel_charge(new_page, memcg); up_read(&mm->mmap_sem); goto out_nolock; } @@ -1076,7 +1076,7 @@ static void collapse_huge_page(struct mm pmd = mm_find_pmd(mm, address); if (!pmd) { result = SCAN_PMD_NULL; - mem_cgroup_cancel_charge(new_page, memcg, true); + mem_cgroup_cancel_charge(new_page, memcg); up_read(&mm->mmap_sem); goto out_nolock; } @@ -1088,7 +1088,7 @@ static void collapse_huge_page(struct mm */ if (unmapped && !__collapse_huge_page_swapin(mm, vma, address, pmd, referenced)) { - mem_cgroup_cancel_charge(new_page, memcg, true); + mem_cgroup_cancel_charge(new_page, memcg); up_read(&mm->mmap_sem); goto out_nolock; } @@ -1176,7 +1176,7 @@ static void collapse_huge_page(struct mm spin_lock(pmd_ptl); BUG_ON(!pmd_none(*pmd)); page_add_new_anon_rmap(new_page, vma, address, true); - mem_cgroup_commit_charge(new_page, memcg, false, true); + mem_cgroup_commit_charge(new_page, memcg, false); count_memcg_events(memcg, THP_COLLAPSE_ALLOC, 1); lru_cache_add_active_or_unevictable(new_page, vma); pgtable_trans_huge_deposit(mm, pmd, pgtable); @@ -1194,7 +1194,7 @@ out_nolock: trace_mm_collapse_huge_page(mm, isolated, result); return; out: - mem_cgroup_cancel_charge(new_page, memcg, true); + mem_cgroup_cancel_charge(new_page, memcg); goto out_up_write; } @@ -1637,7 +1637,7 @@ static void collapse_file(struct mm_stru goto out; } - if (unlikely(mem_cgroup_try_charge(new_page, mm, gfp, 
&memcg, true))) { + if (unlikely(mem_cgroup_try_charge(new_page, mm, gfp, &memcg))) { result = SCAN_CGROUP_CHARGE_FAIL; goto out; } @@ -1650,7 +1650,7 @@ static void collapse_file(struct mm_stru break; xas_unlock_irq(&xas); if (!xas_nomem(&xas, GFP_KERNEL)) { - mem_cgroup_cancel_charge(new_page, memcg, true); + mem_cgroup_cancel_charge(new_page, memcg); result = SCAN_FAIL; goto out; } @@ -1887,7 +1887,7 @@ xa_unlocked: SetPageUptodate(new_page); page_ref_add(new_page, HPAGE_PMD_NR - 1); - mem_cgroup_commit_charge(new_page, memcg, false, true); + mem_cgroup_commit_charge(new_page, memcg, false); if (is_shmem) { set_page_dirty(new_page); @@ -1942,7 +1942,7 @@ xa_unlocked: VM_BUG_ON(nr_none); xas_unlock_irq(&xas); - mem_cgroup_cancel_charge(new_page, memcg, true); + mem_cgroup_cancel_charge(new_page, memcg); new_page->mapping = NULL; } --- a/mm/memcontrol.c~mm-memcontrol-drop-compound-parameter-from-memcg-charging-api +++ a/mm/memcontrol.c @@ -834,7 +834,7 @@ static unsigned long memcg_events_local( static void mem_cgroup_charge_statistics(struct mem_cgroup *memcg, struct page *page, - bool compound, int nr_pages) + int nr_pages) { /* * Here, RSS means 'mapped anon' and anon's SwapCache. 
Shmem/tmpfs is @@ -848,7 +848,7 @@ static void mem_cgroup_charge_statistics __mod_memcg_state(memcg, NR_SHMEM, nr_pages); } - if (compound) { + if (abs(nr_pages) > 1) { VM_BUG_ON_PAGE(!PageTransHuge(page), page); __mod_memcg_state(memcg, MEMCG_RSS_HUGE, nr_pages); } @@ -5501,9 +5501,9 @@ static int mem_cgroup_move_account(struc ret = 0; local_irq_disable(); - mem_cgroup_charge_statistics(to, page, compound, nr_pages); + mem_cgroup_charge_statistics(to, page, nr_pages); memcg_check_events(to, page); - mem_cgroup_charge_statistics(from, page, compound, -nr_pages); + mem_cgroup_charge_statistics(from, page, -nr_pages); memcg_check_events(from, page); local_irq_enable(); out_unlock: @@ -6494,7 +6494,6 @@ out: * @mm: mm context of the victim * @gfp_mask: reclaim mode * @memcgp: charged memcg return - * @compound: charge the page as compound or small page * * Try to charge @page to the memcg that @mm belongs to, reclaiming * pages according to @gfp_mask if necessary. @@ -6507,11 +6506,10 @@ out: * with mem_cgroup_cancel_charge() in case page instantiation fails. */ int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm, - gfp_t gfp_mask, struct mem_cgroup **memcgp, - bool compound) + gfp_t gfp_mask, struct mem_cgroup **memcgp) { + unsigned int nr_pages = hpage_nr_pages(page); struct mem_cgroup *memcg = NULL; - unsigned int nr_pages = compound ? 
hpage_nr_pages(page) : 1; int ret = 0; if (mem_cgroup_disabled()) @@ -6553,13 +6551,12 @@ out: } int mem_cgroup_try_charge_delay(struct page *page, struct mm_struct *mm, - gfp_t gfp_mask, struct mem_cgroup **memcgp, - bool compound) + gfp_t gfp_mask, struct mem_cgroup **memcgp) { struct mem_cgroup *memcg; int ret; - ret = mem_cgroup_try_charge(page, mm, gfp_mask, memcgp, compound); + ret = mem_cgroup_try_charge(page, mm, gfp_mask, memcgp); memcg = *memcgp; mem_cgroup_throttle_swaprate(memcg, page_to_nid(page), gfp_mask); return ret; @@ -6570,7 +6567,6 @@ int mem_cgroup_try_charge_delay(struct p * @page: page to charge * @memcg: memcg to charge the page to * @lrucare: page might be on LRU already - * @compound: charge the page as compound or small page * * Finalize a charge transaction started by mem_cgroup_try_charge(), * after page->mapping has been set up. This must happen atomically @@ -6583,9 +6579,9 @@ int mem_cgroup_try_charge_delay(struct p * Use mem_cgroup_cancel_charge() to cancel the transaction instead. */ void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg, - bool lrucare, bool compound) + bool lrucare) { - unsigned int nr_pages = compound ? hpage_nr_pages(page) : 1; + unsigned int nr_pages = hpage_nr_pages(page); VM_BUG_ON_PAGE(!page->mapping, page); VM_BUG_ON_PAGE(PageLRU(page) && !lrucare, page); @@ -6603,7 +6599,7 @@ void mem_cgroup_commit_charge(struct pag commit_charge(page, memcg, lrucare); local_irq_disable(); - mem_cgroup_charge_statistics(memcg, page, compound, nr_pages); + mem_cgroup_charge_statistics(memcg, page, nr_pages); memcg_check_events(memcg, page); local_irq_enable(); @@ -6622,14 +6618,12 @@ void mem_cgroup_commit_charge(struct pag * mem_cgroup_cancel_charge - cancel a page charge * @page: page to charge * @memcg: memcg to charge the page to - * @compound: charge the page as compound or small page * * Cancel a charge transaction started by mem_cgroup_try_charge(). 
*/ -void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg, - bool compound) +void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg) { - unsigned int nr_pages = compound ? hpage_nr_pages(page) : 1; + unsigned int nr_pages = hpage_nr_pages(page); if (mem_cgroup_disabled()) return; @@ -6844,8 +6838,7 @@ void mem_cgroup_migrate(struct page *old commit_charge(newpage, memcg, false); local_irq_save(flags); - mem_cgroup_charge_statistics(memcg, newpage, PageTransHuge(newpage), - nr_pages); + mem_cgroup_charge_statistics(memcg, newpage, nr_pages); memcg_check_events(memcg, newpage); local_irq_restore(flags); } @@ -7075,8 +7068,7 @@ void mem_cgroup_swapout(struct page *pag * only synchronisation we have for updating the per-CPU variables. */ VM_BUG_ON(!irqs_disabled()); - mem_cgroup_charge_statistics(memcg, page, PageTransHuge(page), - -nr_entries); + mem_cgroup_charge_statistics(memcg, page, -nr_entries); memcg_check_events(memcg, page); if (!mem_cgroup_is_root(memcg)) --- a/mm/memory.c~mm-memcontrol-drop-compound-parameter-from-memcg-charging-api +++ a/mm/memory.c @@ -2676,7 +2676,7 @@ static vm_fault_t wp_page_copy(struct vm } } - if (mem_cgroup_try_charge_delay(new_page, mm, GFP_KERNEL, &memcg, false)) + if (mem_cgroup_try_charge_delay(new_page, mm, GFP_KERNEL, &memcg)) goto oom_free_new; __SetPageUptodate(new_page); @@ -2711,7 +2711,7 @@ static vm_fault_t wp_page_copy(struct vm */ ptep_clear_flush_notify(vma, vmf->address, vmf->pte); page_add_new_anon_rmap(new_page, vma, vmf->address, false); - mem_cgroup_commit_charge(new_page, memcg, false, false); + mem_cgroup_commit_charge(new_page, memcg, false); lru_cache_add_active_or_unevictable(new_page, vma); /* * We call the notify macro here because, when using secondary @@ -2750,7 +2750,7 @@ static vm_fault_t wp_page_copy(struct vm new_page = old_page; page_copied = 1; } else { - mem_cgroup_cancel_charge(new_page, memcg, false); + mem_cgroup_cancel_charge(new_page, memcg); } if 
(new_page) @@ -3193,8 +3193,7 @@ vm_fault_t do_swap_page(struct vm_fault goto out_page; } - if (mem_cgroup_try_charge_delay(page, vma->vm_mm, GFP_KERNEL, - &memcg, false)) { + if (mem_cgroup_try_charge_delay(page, vma->vm_mm, GFP_KERNEL, &memcg)) { ret = VM_FAULT_OOM; goto out_page; } @@ -3245,11 +3244,11 @@ vm_fault_t do_swap_page(struct vm_fault /* ksm created a completely new copy */ if (unlikely(page != swapcache && swapcache)) { page_add_new_anon_rmap(page, vma, vmf->address, false); - mem_cgroup_commit_charge(page, memcg, false, false); + mem_cgroup_commit_charge(page, memcg, false); lru_cache_add_active_or_unevictable(page, vma); } else { do_page_add_anon_rmap(page, vma, vmf->address, exclusive); - mem_cgroup_commit_charge(page, memcg, true, false); + mem_cgroup_commit_charge(page, memcg, true); activate_page(page); } @@ -3285,7 +3284,7 @@ unlock: out: return ret; out_nomap: - mem_cgroup_cancel_charge(page, memcg, false); + mem_cgroup_cancel_charge(page, memcg); pte_unmap_unlock(vmf->pte, vmf->ptl); out_page: unlock_page(page); @@ -3359,8 +3358,7 @@ static vm_fault_t do_anonymous_page(stru if (!page) goto oom; - if (mem_cgroup_try_charge_delay(page, vma->vm_mm, GFP_KERNEL, &memcg, - false)) + if (mem_cgroup_try_charge_delay(page, vma->vm_mm, GFP_KERNEL, &memcg)) goto oom_free_page; /* @@ -3386,14 +3384,14 @@ static vm_fault_t do_anonymous_page(stru /* Deliver the page fault to userland, check inside PT lock */ if (userfaultfd_missing(vma)) { pte_unmap_unlock(vmf->pte, vmf->ptl); - mem_cgroup_cancel_charge(page, memcg, false); + mem_cgroup_cancel_charge(page, memcg); put_page(page); return handle_userfault(vmf, VM_UFFD_MISSING); } inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES); page_add_new_anon_rmap(page, vma, vmf->address, false); - mem_cgroup_commit_charge(page, memcg, false, false); + mem_cgroup_commit_charge(page, memcg, false); lru_cache_add_active_or_unevictable(page, vma); setpte: set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry); @@ -3404,7 
+3402,7 @@ unlock: pte_unmap_unlock(vmf->pte, vmf->ptl); return ret; release: - mem_cgroup_cancel_charge(page, memcg, false); + mem_cgroup_cancel_charge(page, memcg); put_page(page); goto unlock; oom_free_page: @@ -3655,7 +3653,7 @@ vm_fault_t alloc_set_pte(struct vm_fault if (write && !(vma->vm_flags & VM_SHARED)) { inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES); page_add_new_anon_rmap(page, vma, vmf->address, false); - mem_cgroup_commit_charge(page, memcg, false, false); + mem_cgroup_commit_charge(page, memcg, false); lru_cache_add_active_or_unevictable(page, vma); } else { inc_mm_counter_fast(vma->vm_mm, mm_counter_file(page)); @@ -3864,8 +3862,8 @@ static vm_fault_t do_cow_fault(struct vm if (!vmf->cow_page) return VM_FAULT_OOM; - if (mem_cgroup_try_charge_delay(vmf->cow_page, vma->vm_mm, GFP_KERNEL, - &vmf->memcg, false)) { + if (mem_cgroup_try_charge_delay(vmf->cow_page, vma->vm_mm, + GFP_KERNEL, &vmf->memcg)) { put_page(vmf->cow_page); return VM_FAULT_OOM; } @@ -3886,7 +3884,7 @@ static vm_fault_t do_cow_fault(struct vm goto uncharge_out; return ret; uncharge_out: - mem_cgroup_cancel_charge(vmf->cow_page, vmf->memcg, false); + mem_cgroup_cancel_charge(vmf->cow_page, vmf->memcg); put_page(vmf->cow_page); return ret; } --- a/mm/migrate.c~mm-memcontrol-drop-compound-parameter-from-memcg-charging-api +++ a/mm/migrate.c @@ -2780,7 +2780,7 @@ static void migrate_vma_insert_page(stru if (unlikely(anon_vma_prepare(vma))) goto abort; - if (mem_cgroup_try_charge(page, vma->vm_mm, GFP_KERNEL, &memcg, false)) + if (mem_cgroup_try_charge(page, vma->vm_mm, GFP_KERNEL, &memcg)) goto abort; /* @@ -2826,7 +2826,7 @@ static void migrate_vma_insert_page(stru inc_mm_counter(mm, MM_ANONPAGES); page_add_new_anon_rmap(page, vma, addr, false); - mem_cgroup_commit_charge(page, memcg, false, false); + mem_cgroup_commit_charge(page, memcg, false); if (!is_zone_device_page(page)) lru_cache_add_active_or_unevictable(page, vma); get_page(page); @@ -2848,7 +2848,7 @@ static void 
migrate_vma_insert_page(stru unlock_abort: pte_unmap_unlock(ptep, ptl); - mem_cgroup_cancel_charge(page, memcg, false); + mem_cgroup_cancel_charge(page, memcg); abort: *src &= ~MIGRATE_PFN_MIGRATE; } --- a/mm/shmem.c~mm-memcontrol-drop-compound-parameter-from-memcg-charging-api +++ a/mm/shmem.c @@ -1664,8 +1664,7 @@ static int shmem_swapin_page(struct inod goto failed; } - error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg, - false); + error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg); if (!error) { error = shmem_add_to_page_cache(page, mapping, index, swp_to_radix_entry(swap), gfp); @@ -1680,14 +1679,14 @@ static int shmem_swapin_page(struct inod * the rest. */ if (error) { - mem_cgroup_cancel_charge(page, memcg, false); + mem_cgroup_cancel_charge(page, memcg); delete_from_swap_cache(page); } } if (error) goto failed; - mem_cgroup_commit_charge(page, memcg, true, false); + mem_cgroup_commit_charge(page, memcg, true); spin_lock_irq(&info->lock); info->swapped--; @@ -1859,8 +1858,7 @@ alloc_nohuge: if (sgp == SGP_WRITE) __SetPageReferenced(page); - error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg, - PageTransHuge(page)); + error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg); if (error) { if (PageTransHuge(page)) { count_vm_event(THP_FILE_FALLBACK); @@ -1871,12 +1869,10 @@ alloc_nohuge: error = shmem_add_to_page_cache(page, mapping, hindex, NULL, gfp & GFP_RECLAIM_MASK); if (error) { - mem_cgroup_cancel_charge(page, memcg, - PageTransHuge(page)); + mem_cgroup_cancel_charge(page, memcg); goto unacct; } - mem_cgroup_commit_charge(page, memcg, false, - PageTransHuge(page)); + mem_cgroup_commit_charge(page, memcg, false); lru_cache_add_anon(page); spin_lock_irq(&info->lock); @@ -2364,7 +2360,7 @@ static int shmem_mfill_atomic_pte(struct if (unlikely(offset >= max_off)) goto out_release; - ret = mem_cgroup_try_charge_delay(page, dst_mm, gfp, &memcg, false); + ret = mem_cgroup_try_charge_delay(page, dst_mm, gfp, 
&memcg); if (ret) goto out_release; @@ -2373,7 +2369,7 @@ static int shmem_mfill_atomic_pte(struct if (ret) goto out_release_uncharge; - mem_cgroup_commit_charge(page, memcg, false, false); + mem_cgroup_commit_charge(page, memcg, false); _dst_pte = mk_pte(page, dst_vma->vm_page_prot); if (dst_vma->vm_flags & VM_WRITE) @@ -2424,7 +2420,7 @@ out_release_uncharge_unlock: ClearPageDirty(page); delete_from_page_cache(page); out_release_uncharge: - mem_cgroup_cancel_charge(page, memcg, false); + mem_cgroup_cancel_charge(page, memcg); out_release: unlock_page(page); put_page(page); --- a/mm/swapfile.c~mm-memcontrol-drop-compound-parameter-from-memcg-charging-api +++ a/mm/swapfile.c @@ -1902,15 +1902,14 @@ static int unuse_pte(struct vm_area_stru if (unlikely(!page)) return -ENOMEM; - if (mem_cgroup_try_charge(page, vma->vm_mm, GFP_KERNEL, - &memcg, false)) { + if (mem_cgroup_try_charge(page, vma->vm_mm, GFP_KERNEL, &memcg)) { ret = -ENOMEM; goto out_nolock; } pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl); if (unlikely(!pte_same_as_swp(*pte, swp_entry_to_pte(entry)))) { - mem_cgroup_cancel_charge(page, memcg, false); + mem_cgroup_cancel_charge(page, memcg); ret = 0; goto out; } @@ -1922,10 +1921,10 @@ static int unuse_pte(struct vm_area_stru pte_mkold(mk_pte(page, vma->vm_page_prot))); if (page == swapcache) { page_add_anon_rmap(page, vma, addr, false); - mem_cgroup_commit_charge(page, memcg, true, false); + mem_cgroup_commit_charge(page, memcg, true); } else { /* ksm created a completely new copy */ page_add_new_anon_rmap(page, vma, addr, false); - mem_cgroup_commit_charge(page, memcg, false, false); + mem_cgroup_commit_charge(page, memcg, false); lru_cache_add_active_or_unevictable(page, vma); } swap_free(entry); --- a/mm/userfaultfd.c~mm-memcontrol-drop-compound-parameter-from-memcg-charging-api +++ a/mm/userfaultfd.c @@ -97,7 +97,7 @@ static int mcopy_atomic_pte(struct mm_st __SetPageUptodate(page); ret = -ENOMEM; - if (mem_cgroup_try_charge(page, dst_mm, 
GFP_KERNEL, &memcg, false)) + if (mem_cgroup_try_charge(page, dst_mm, GFP_KERNEL, &memcg)) goto out_release; _dst_pte = pte_mkdirty(mk_pte(page, dst_vma->vm_page_prot)); @@ -124,7 +124,7 @@ static int mcopy_atomic_pte(struct mm_st inc_mm_counter(dst_mm, MM_ANONPAGES); page_add_new_anon_rmap(page, dst_vma, dst_addr, false); - mem_cgroup_commit_charge(page, memcg, false, false); + mem_cgroup_commit_charge(page, memcg, false); lru_cache_add_active_or_unevictable(page, dst_vma); set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte); @@ -138,7 +138,7 @@ out: return ret; out_release_uncharge_unlock: pte_unmap_unlock(dst_pte, ptl); - mem_cgroup_cancel_charge(page, memcg, false); + mem_cgroup_cancel_charge(page, memcg); out_release: put_page(page); goto out; _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 086/131] mm: shmem: remove rare optimization when swapin races with hole punching
  2020-06-03 22:55 incoming Andrew Morton
  ` (84 preceding siblings ...)
  2020-06-03 23:01 ` [patch 085/131] mm: memcontrol: drop @compound parameter from memcg charging API Andrew Morton
@ 2020-06-03 23:01 ` Andrew Morton
  2020-06-03 23:01 ` [patch 087/131] mm: memcontrol: move out cgroup swaprate throttling Andrew Morton
  ` (45 subsequent siblings)
  131 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2020-06-03 23:01 UTC (permalink / raw)
To: akpm, alex.shi, bsingharora, guro, hannes, hughd, iamjoonsoo.kim,
	kirill, linux-mm, mhocko, mm-commits, shakeelb, torvalds

From: Johannes Weiner <hannes@cmpxchg.org>
Subject: mm: shmem: remove rare optimization when swapin races with hole punching

Commit 215c02bc33bb ("tmpfs: fix shmem_getpage_gfp() VM_BUG_ON")
recognized that hole punching can race with swapin and removed the
BUG_ON() for a truncated entry from the swapin path.

The patch also added a swapcache deletion to optimize this rare case:
Since swapin has the page locked, and free_swap_and_cache() merely
trylocks, this situation can leave the page stranded in swapcache.
Usually, page reclaim picks up stale swapcache pages, and the race can
happen at any other time when the page is locked.  (The same happens for
non-shmem swapin racing with page table zapping.)  The thinking here was:
we already observed the race and we have the page locked, we may as well
do the cleanup instead of waiting for reclaim.

However, this optimization complicates the next patch which moves the
cgroup charging code around.  As this is just a minor speedup for a race
condition that is so rare that it required a fuzzer to trigger the
original BUG_ON(), it's no longer worth the complications.
Link: http://lkml.kernel.org/r/20200511181056.GA339505@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Suggested-by: Hugh Dickins <hughd@google.com> Acked-by: Hugh Dickins <hughd@google.com> Cc: Alex Shi <alex.shi@linux.alibaba.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: "Kirill A. Shutemov" <kirill@shutemov.name> Cc: Michal Hocko <mhocko@suse.com> Cc: Roman Gushchin <guro@fb.com> Cc: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/shmem.c | 25 +++++++------------------ 1 file changed, 7 insertions(+), 18 deletions(-) --- a/mm/shmem.c~mm-shmem-remove-rare-optimization-when-swapin-races-with-hole-punching +++ a/mm/shmem.c @@ -1665,27 +1665,16 @@ static int shmem_swapin_page(struct inod } error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg); - if (!error) { - error = shmem_add_to_page_cache(page, mapping, index, - swp_to_radix_entry(swap), gfp); - /* - * We already confirmed swap under page lock, and make - * no memory allocation here, so usually no possibility - * of error; but free_swap_and_cache() only trylocks a - * page, so it is just possible that the entry has been - * truncated or holepunched since swap was confirmed. - * shmem_undo_range() will have done some of the - * unaccounting, now delete_from_swap_cache() will do - * the rest. - */ - if (error) { - mem_cgroup_cancel_charge(page, memcg); - delete_from_swap_cache(page); - } - } if (error) goto failed; + error = shmem_add_to_page_cache(page, mapping, index, + swp_to_radix_entry(swap), gfp); + if (error) { + mem_cgroup_cancel_charge(page, memcg); + goto failed; + } + mem_cgroup_commit_charge(page, memcg, true); spin_lock_irq(&info->lock); _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 087/131] mm: memcontrol: move out cgroup swaprate throttling
  2020-06-03 22:55 incoming Andrew Morton
  ` (85 preceding siblings ...)
  2020-06-03 23:01 ` [patch 086/131] mm: shmem: remove rare optimization when swapin races with hole punching Andrew Morton
@ 2020-06-03 23:01 ` Andrew Morton
  2020-06-03 23:01 ` [patch 088/131] mm: memcontrol: convert page cache to a new mem_cgroup_charge() API Andrew Morton
  ` (44 subsequent siblings)
  131 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2020-06-03 23:01 UTC (permalink / raw)
To: akpm, alex.shi, bsingharora, guro, hannes, hughd, iamjoonsoo.kim,
	kirill, linux-mm, mhocko, mm-commits, shakeelb, torvalds

From: Johannes Weiner <hannes@cmpxchg.org>
Subject: mm: memcontrol: move out cgroup swaprate throttling

The cgroup swaprate throttling is about matching new anon allocations to
the rate of available IO when that is being throttled.  It's the io
controller hooking into the VM, rather than a memory controller thing.

Rename mem_cgroup_throttle_swaprate() to cgroup_throttle_swaprate(), and
drop the @memcg argument which is only used to check whether the
preceding page charge has succeeded and the fault is proceeding.

We could decouple the call from mem_cgroup_try_charge() here as well, but
that would cause unnecessary churn: the following patches convert all
callsites to a new charge API and we'll decouple as we go along.

Link: http://lkml.kernel.org/r/20200508183105.225460-5-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com>
Reviewed-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/swap.h |    6 ++----
 mm/memcontrol.c      |    5 ++---
 mm/swapfile.c        |   14 +++++++-------
 3 files changed, 11 insertions(+), 14 deletions(-)

--- a/include/linux/swap.h~mm-memcontrol-move-out-cgroup-swaprate-throttling
+++ a/include/linux/swap.h
@@ -651,11 +651,9 @@ static inline int mem_cgroup_swappiness(
 #endif

 #if defined(CONFIG_SWAP) && defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
-extern void mem_cgroup_throttle_swaprate(struct mem_cgroup *memcg, int node,
-					 gfp_t gfp_mask);
+extern void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask);
 #else
-static inline void mem_cgroup_throttle_swaprate(struct mem_cgroup *memcg,
-						int node, gfp_t gfp_mask)
+static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
 {
 }
 #endif
--- a/mm/memcontrol.c~mm-memcontrol-move-out-cgroup-swaprate-throttling
+++ a/mm/memcontrol.c
@@ -6553,12 +6553,11 @@ out:
 int mem_cgroup_try_charge_delay(struct page *page, struct mm_struct *mm,
 				gfp_t gfp_mask, struct mem_cgroup **memcgp)
 {
-	struct mem_cgroup *memcg;
 	int ret;

 	ret = mem_cgroup_try_charge(page, mm, gfp_mask, memcgp);
-	memcg = *memcgp;
-	mem_cgroup_throttle_swaprate(memcg, page_to_nid(page), gfp_mask);
+	if (*memcgp)
+		cgroup_throttle_swaprate(page, gfp_mask);
 	return ret;
 }
--- a/mm/swapfile.c~mm-memcontrol-move-out-cgroup-swaprate-throttling
+++ a/mm/swapfile.c
@@ -3798,11 +3798,12 @@ static void free_swap_count_continuation
 }

 #if defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
-void mem_cgroup_throttle_swaprate(struct mem_cgroup *memcg, int node,
-				  gfp_t gfp_mask)
+void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
 {
 	struct swap_info_struct *si, *next;
-	if (!(gfp_mask & __GFP_IO) || !memcg)
+	int nid = page_to_nid(page);
+
+	if (!(gfp_mask & __GFP_IO))
 		return;

 	if (!blk_cgroup_congested())
@@ -3816,11 +3817,10 @@ void mem_cgroup_throttle_swaprate(struct
 		return;

 	spin_lock(&swap_avail_lock);
-	plist_for_each_entry_safe(si, next, &swap_avail_heads[node],
-				  avail_lists[node]) {
+	plist_for_each_entry_safe(si, next, &swap_avail_heads[nid],
+				  avail_lists[nid]) {
 		if (si->bdev) {
-			blkcg_schedule_throttle(bdev_get_queue(si->bdev),
-						true);
+			blkcg_schedule_throttle(bdev_get_queue(si->bdev), true);
 			break;
 		}
 	}
_

^ permalink raw reply	[flat|nested] 349+ messages in thread
* [patch 088/131] mm: memcontrol: convert page cache to a new mem_cgroup_charge() API
  2020-06-03 22:55 incoming Andrew Morton
  ` (86 preceding siblings ...)
  2020-06-03 23:01 ` [patch 087/131] mm: memcontrol: move out cgroup swaprate throttling Andrew Morton
@ 2020-06-03 23:01 ` Andrew Morton
  2020-06-03 23:01 ` [patch 089/131] mm: memcontrol: prepare uncharging for removal of private page type counters Andrew Morton
  ` (43 subsequent siblings)
  131 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2020-06-03 23:01 UTC (permalink / raw)
To: akpm, alex.shi, bsingharora, guro, hannes, hughd, iamjoonsoo.kim,
	kirill, linux-mm, mhocko, mm-commits, shakeelb, torvalds

From: Johannes Weiner <hannes@cmpxchg.org>
Subject: mm: memcontrol: convert page cache to a new mem_cgroup_charge() API

The try/commit/cancel protocol that memcg uses dates back to when pages
used to be uncharged upon removal from the page cache, and thus couldn't
be committed before the insertion had succeeded.  Nowadays, pages are
uncharged when they are physically freed; it doesn't matter whether the
insertion was successful or not.  For the page cache, the transaction
dance has become unnecessary.

Introduce a mem_cgroup_charge() function that simply charges a newly
allocated page to a cgroup and sets up page->mem_cgroup in one single
step.  If the insertion fails, the caller doesn't have to do anything but
free/put the page.

Then switch the page cache over to this new API.

Subsequent patches will also convert anon pages, but it needs a bit more
prep work.  Right now, memcg depends on page->mapping being already set
up at the time of charging, so that it can maintain its own MEMCG_CACHE
and MEMCG_RSS counters.  For anon, page->mapping is set under the same
pte lock under which the page is published, so a single charge point that
can block doesn't work there just yet.
The following prep patches will replace the private memcg counters with
the generic vmstat counters, thus removing the page->mapping dependency,
then complete the transition to the new single-point charge API and
delete the old transactional scheme.

v2: leave shmem swapcache when charging fails to avoid double IO (Joonsoo)
v3: rebase on preceding shmem simplification patch

Link: http://lkml.kernel.org/r/20200508183105.225460-6-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/memcontrol.h |   10 ++++
 mm/filemap.c               |   24 ++++-------
 mm/memcontrol.c            |   29 ++++++++++++-
 mm/shmem.c                 |   73 ++++++++++++++---------------------
 4 files changed, 77 insertions(+), 59 deletions(-)

--- a/include/linux/memcontrol.h~mm-memcontrol-convert-page-cache-to-a-new-mem_cgroup_charge-api
+++ a/include/linux/memcontrol.h
@@ -365,6 +365,10 @@ int mem_cgroup_try_charge_delay(struct p
 void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg,
 			      bool lrucare);
 void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg);
+
+int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
+		      bool lrucare);
+
 void mem_cgroup_uncharge(struct page *page);
 void mem_cgroup_uncharge_list(struct list_head *page_list);
@@ -872,6 +876,12 @@ static inline void mem_cgroup_cancel_cha
 {
 }

+static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
+				    gfp_t gfp_mask, bool lrucare)
+{
+	return 0;
+}
+
 static inline void mem_cgroup_uncharge(struct page *page)
 {
 }
--- a/mm/filemap.c~mm-memcontrol-convert-page-cache-to-a-new-mem_cgroup_charge-api
+++ a/mm/filemap.c @@ -832,7 +832,6 @@ static int __add_to_page_cache_locked(st { XA_STATE(xas, &mapping->i_pages, offset); int huge = PageHuge(page); - struct mem_cgroup *memcg; int error; void *old; @@ -840,17 +839,16 @@ static int __add_to_page_cache_locked(st VM_BUG_ON_PAGE(PageSwapBacked(page), page); mapping_set_update(&xas, mapping); - if (!huge) { - error = mem_cgroup_try_charge(page, current->mm, - gfp_mask, &memcg); - if (error) - return error; - } - get_page(page); page->mapping = mapping; page->index = offset; + if (!huge) { + error = mem_cgroup_charge(page, current->mm, gfp_mask, false); + if (error) + goto error; + } + do { xas_lock_irq(&xas); old = xas_load(&xas); @@ -874,20 +872,18 @@ unlock: xas_unlock_irq(&xas); } while (xas_nomem(&xas, gfp_mask & GFP_RECLAIM_MASK)); - if (xas_error(&xas)) + if (xas_error(&xas)) { + error = xas_error(&xas); goto error; + } - if (!huge) - mem_cgroup_commit_charge(page, memcg, false); trace_mm_filemap_add_to_page_cache(page); return 0; error: page->mapping = NULL; /* Leave page->index set: truncation relies upon it */ - if (!huge) - mem_cgroup_cancel_charge(page, memcg); put_page(page); - return xas_error(&xas); + return error; } ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO); --- a/mm/memcontrol.c~mm-memcontrol-convert-page-cache-to-a-new-mem_cgroup_charge-api +++ a/mm/memcontrol.c @@ -6637,6 +6637,33 @@ void mem_cgroup_cancel_charge(struct pag cancel_charge(memcg, nr_pages); } +/** + * mem_cgroup_charge - charge a newly allocated page to a cgroup + * @page: page to charge + * @mm: mm context of the victim + * @gfp_mask: reclaim mode + * @lrucare: page might be on the LRU already + * + * Try to charge @page to the memcg that @mm belongs to, reclaiming + * pages according to @gfp_mask if necessary. + * + * Returns 0 on success. Otherwise, an error code is returned. 
+ */ +int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask, + bool lrucare) +{ + struct mem_cgroup *memcg; + int ret; + + VM_BUG_ON_PAGE(!page->mapping, page); + + ret = mem_cgroup_try_charge(page, mm, gfp_mask, &memcg); + if (ret) + return ret; + mem_cgroup_commit_charge(page, memcg, lrucare); + return 0; +} + struct uncharge_gather { struct mem_cgroup *memcg; unsigned long pgpgout; @@ -6684,8 +6711,6 @@ static void uncharge_batch(const struct static void uncharge_page(struct page *page, struct uncharge_gather *ug) { VM_BUG_ON_PAGE(PageLRU(page), page); - VM_BUG_ON_PAGE(page_count(page) && !is_zone_device_page(page) && - !PageHWPoison(page) , page); if (!page->mem_cgroup) return; --- a/mm/shmem.c~mm-memcontrol-convert-page-cache-to-a-new-mem_cgroup_charge-api +++ a/mm/shmem.c @@ -605,11 +605,13 @@ static inline bool is_huge_enabled(struc */ static int shmem_add_to_page_cache(struct page *page, struct address_space *mapping, - pgoff_t index, void *expected, gfp_t gfp) + pgoff_t index, void *expected, gfp_t gfp, + struct mm_struct *charge_mm) { XA_STATE_ORDER(xas, &mapping->i_pages, index, compound_order(page)); unsigned long i = 0; unsigned long nr = compound_nr(page); + int error; VM_BUG_ON_PAGE(PageTail(page), page); VM_BUG_ON_PAGE(index != round_down(index, nr), page); @@ -621,6 +623,16 @@ static int shmem_add_to_page_cache(struc page->mapping = mapping; page->index = index; + error = mem_cgroup_charge(page, charge_mm, gfp, PageSwapCache(page)); + if (error) { + if (!PageSwapCache(page) && PageTransHuge(page)) { + count_vm_event(THP_FILE_FALLBACK); + count_vm_event(THP_FILE_FALLBACK_CHARGE); + } + goto error; + } + cgroup_throttle_swaprate(page, gfp); + do { void *entry; xas_lock_irq(&xas); @@ -648,12 +660,15 @@ unlock: } while (xas_nomem(&xas, gfp)); if (xas_error(&xas)) { - page->mapping = NULL; - page_ref_sub(page, nr); - return xas_error(&xas); + error = xas_error(&xas); + goto error; } return 0; +error: + page->mapping = NULL; + 
page_ref_sub(page, nr); + return error; } /* @@ -1619,7 +1634,6 @@ static int shmem_swapin_page(struct inod struct address_space *mapping = inode->i_mapping; struct shmem_inode_info *info = SHMEM_I(inode); struct mm_struct *charge_mm = vma ? vma->vm_mm : current->mm; - struct mem_cgroup *memcg; struct page *page; swp_entry_t swap; int error; @@ -1664,18 +1678,11 @@ static int shmem_swapin_page(struct inod goto failed; } - error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg); - if (error) - goto failed; - error = shmem_add_to_page_cache(page, mapping, index, - swp_to_radix_entry(swap), gfp); - if (error) { - mem_cgroup_cancel_charge(page, memcg); + swp_to_radix_entry(swap), gfp, + charge_mm); + if (error) goto failed; - } - - mem_cgroup_commit_charge(page, memcg, true); spin_lock_irq(&info->lock); info->swapped--; @@ -1722,7 +1729,6 @@ static int shmem_getpage_gfp(struct inod struct shmem_inode_info *info = SHMEM_I(inode); struct shmem_sb_info *sbinfo; struct mm_struct *charge_mm; - struct mem_cgroup *memcg; struct page *page; enum sgp_type sgp_huge = sgp; pgoff_t hindex = index; @@ -1847,21 +1853,11 @@ alloc_nohuge: if (sgp == SGP_WRITE) __SetPageReferenced(page); - error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg); - if (error) { - if (PageTransHuge(page)) { - count_vm_event(THP_FILE_FALLBACK); - count_vm_event(THP_FILE_FALLBACK_CHARGE); - } - goto unacct; - } error = shmem_add_to_page_cache(page, mapping, hindex, - NULL, gfp & GFP_RECLAIM_MASK); - if (error) { - mem_cgroup_cancel_charge(page, memcg); + NULL, gfp & GFP_RECLAIM_MASK, + charge_mm); + if (error) goto unacct; - } - mem_cgroup_commit_charge(page, memcg, false); lru_cache_add_anon(page); spin_lock_irq(&info->lock); @@ -2299,7 +2295,6 @@ static int shmem_mfill_atomic_pte(struct struct address_space *mapping = inode->i_mapping; gfp_t gfp = mapping_gfp_mask(mapping); pgoff_t pgoff = linear_page_index(dst_vma, dst_addr); - struct mem_cgroup *memcg; spinlock_t *ptl; void 
*page_kaddr; struct page *page; @@ -2349,16 +2344,10 @@ static int shmem_mfill_atomic_pte(struct if (unlikely(offset >= max_off)) goto out_release; - ret = mem_cgroup_try_charge_delay(page, dst_mm, gfp, &memcg); - if (ret) - goto out_release; - ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL, - gfp & GFP_RECLAIM_MASK); + gfp & GFP_RECLAIM_MASK, dst_mm); if (ret) - goto out_release_uncharge; - - mem_cgroup_commit_charge(page, memcg, false); + goto out_release; _dst_pte = mk_pte(page, dst_vma->vm_page_prot); if (dst_vma->vm_flags & VM_WRITE) @@ -2379,11 +2368,11 @@ static int shmem_mfill_atomic_pte(struct ret = -EFAULT; max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE); if (unlikely(offset >= max_off)) - goto out_release_uncharge_unlock; + goto out_release_unlock; ret = -EEXIST; if (!pte_none(*dst_pte)) - goto out_release_uncharge_unlock; + goto out_release_unlock; lru_cache_add_anon(page); @@ -2404,12 +2393,10 @@ static int shmem_mfill_atomic_pte(struct ret = 0; out: return ret; -out_release_uncharge_unlock: +out_release_unlock: pte_unmap_unlock(dst_pte, ptl); ClearPageDirty(page); delete_from_page_cache(page); -out_release_uncharge: - mem_cgroup_cancel_charge(page, memcg); out_release: unlock_page(page); put_page(page); _ ^ permalink raw reply [flat|nested] 349+ messages in thread
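The API shape that patch 088 above converts the page cache to can be modeled in a few lines of userspace C. This is a toy sketch under loose assumptions (an invented two-field struct and page limit, with try_charge standing in for the real reclaim path), not kernel code:

```c
#include <assert.h>

/*
 * Toy model of the API change: the old transactional scheme needed
 * try_charge/commit/cancel, with every call site carrying the memcg
 * pointer between calls; the new mem_cgroup_charge() is one call
 * that commits on success and backs out internally on failure.
 * The struct, the limit, and the counter are illustrative stand-ins.
 */
struct memcg {
	long nr_charged;	/* pages currently charged */
	long limit;		/* hard limit, in pages */
};

static int try_charge(struct memcg *memcg, long nr_pages)
{
	if (memcg->nr_charged + nr_pages > memcg->limit)
		return -1;	/* -ENOMEM in the kernel */
	memcg->nr_charged += nr_pages;
	return 0;
}

/* New-style single entry point: no memcg handed back to the caller. */
static int mem_cgroup_charge_model(struct memcg *memcg, long nr_pages)
{
	if (try_charge(memcg, nr_pages))
		return -1;	/* nothing for the caller to unwind */
	/* the commit step (linking page->mem_cgroup) happens here,
	 * inside the one call, instead of at every call site */
	return 0;
}
```

The point of the conversion is visible even in the model: call sites no longer hold state between a try/commit pair, so there is no cancel step to forget on an error path.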
* [patch 089/131] mm: memcontrol: prepare uncharging for removal of private page type counters 2020-06-03 22:55 incoming Andrew Morton ` (87 preceding siblings ...) 2020-06-03 23:01 ` [patch 088/131] mm: memcontrol: convert page cache to a new mem_cgroup_charge() API Andrew Morton @ 2020-06-03 23:01 ` Andrew Morton 2020-06-03 23:01 ` [patch 090/131] mm: memcontrol: prepare move_account " Andrew Morton ` (42 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:01 UTC (permalink / raw) To: akpm, alex.shi, bsingharora, guro, hannes, hughd, iamjoonsoo.kim, kirill, linux-mm, mhocko, mm-commits, shakeelb, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: memcontrol: prepare uncharging for removal of private page type counters The uncharge batching code adds up the anon, file, kmem counts to determine the total number of pages to uncharge and references to drop. But the next patches will remove the anon and file counters. Maintain an aggregate nr_pages in the uncharge_gather struct. Link: http://lkml.kernel.org/r/20200508183105.225460-7-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com> Reviewed-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Hugh Dickins <hughd@google.com> Cc: "Kirill A. 
Shutemov" <kirill@shutemov.name> Cc: Michal Hocko <mhocko@suse.com> Cc: Roman Gushchin <guro@fb.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/memcontrol.c | 23 ++++++++++++----------- 1 file changed, 12 insertions(+), 11 deletions(-) --- a/mm/memcontrol.c~mm-memcontrol-prepare-uncharging-for-removal-of-private-page-type-counters +++ a/mm/memcontrol.c @@ -6666,6 +6666,7 @@ int mem_cgroup_charge(struct page *page, struct uncharge_gather { struct mem_cgroup *memcg; + unsigned long nr_pages; unsigned long pgpgout; unsigned long nr_anon; unsigned long nr_file; @@ -6682,13 +6683,12 @@ static inline void uncharge_gather_clear static void uncharge_batch(const struct uncharge_gather *ug) { - unsigned long nr_pages = ug->nr_anon + ug->nr_file + ug->nr_kmem; unsigned long flags; if (!mem_cgroup_is_root(ug->memcg)) { - page_counter_uncharge(&ug->memcg->memory, nr_pages); + page_counter_uncharge(&ug->memcg->memory, ug->nr_pages); if (do_memsw_account()) - page_counter_uncharge(&ug->memcg->memsw, nr_pages); + page_counter_uncharge(&ug->memcg->memsw, ug->nr_pages); if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && ug->nr_kmem) page_counter_uncharge(&ug->memcg->kmem, ug->nr_kmem); memcg_oom_recover(ug->memcg); @@ -6700,16 +6700,18 @@ static void uncharge_batch(const struct __mod_memcg_state(ug->memcg, MEMCG_RSS_HUGE, -ug->nr_huge); __mod_memcg_state(ug->memcg, NR_SHMEM, -ug->nr_shmem); __count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout); - __this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, nr_pages); + __this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_pages); memcg_check_events(ug->memcg, ug->dummy_page); local_irq_restore(flags); if (!mem_cgroup_is_root(ug->memcg)) - css_put_many(&ug->memcg->css, nr_pages); + css_put_many(&ug->memcg->css, ug->nr_pages); } static void uncharge_page(struct page *page, struct uncharge_gather *ug) { + unsigned long nr_pages; + 
VM_BUG_ON_PAGE(PageLRU(page), page); if (!page->mem_cgroup) @@ -6729,13 +6731,12 @@ static void uncharge_page(struct page *p ug->memcg = page->mem_cgroup; } - if (!PageKmemcg(page)) { - unsigned int nr_pages = 1; + nr_pages = compound_nr(page); + ug->nr_pages += nr_pages; - if (PageTransHuge(page)) { - nr_pages = compound_nr(page); + if (!PageKmemcg(page)) { + if (PageTransHuge(page)) ug->nr_huge += nr_pages; - } if (PageAnon(page)) ug->nr_anon += nr_pages; else { @@ -6745,7 +6746,7 @@ static void uncharge_page(struct page *p } ug->pgpgout++; } else { - ug->nr_kmem += compound_nr(page); + ug->nr_kmem += nr_pages; __ClearPageKmemcg(page); } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
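The nr_pages aggregation that patch 089 introduces can be sketched as a toy userspace model (the struct and helpers are illustrative stand-ins for the code in mm/memcontrol.c):

```c
#include <assert.h>

/*
 * Sketch of the uncharge_gather change: keep a running nr_pages
 * aggregate while pages are gathered, instead of recomputing
 * nr_anon + nr_file + nr_kmem at batch-flush time, so the per-type
 * fields can be deleted one at a time by the follow-up patches.
 * Field names mirror the kernel struct but this is a toy model.
 */
struct uncharge_gather_model {
	unsigned long nr_pages;	/* new aggregate, always maintained */
	unsigned long nr_kmem;	/* one surviving type-specific counter */
};

static void gather_page(struct uncharge_gather_model *ug,
			unsigned long nr_pages, int is_kmem)
{
	ug->nr_pages += nr_pages;	/* counted regardless of type */
	if (is_kmem)
		ug->nr_kmem += nr_pages;
}

/* Flush-time total no longer depends on which type counters exist. */
static unsigned long gather_total(const struct uncharge_gather_model *ug)
{
	return ug->nr_pages;
}
```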
* [patch 090/131] mm: memcontrol: prepare move_account for removal of private page type counters 2020-06-03 22:55 incoming Andrew Morton ` (88 preceding siblings ...) 2020-06-03 23:01 ` [patch 089/131] mm: memcontrol: prepare uncharging for removal of private page type counters Andrew Morton @ 2020-06-03 23:01 ` Andrew Morton 2020-06-03 23:01 ` [patch 091/131] mm: memcontrol: prepare cgroup vmstat infrastructure for native anon counters Andrew Morton ` (41 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:01 UTC (permalink / raw) To: akpm, alex.shi, bsingharora, guro, hannes, hughd, iamjoonsoo.kim, kirill, linux-mm, mhocko, mm-commits, shakeelb, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: memcontrol: prepare move_account for removal of private page type counters When memcg uses the generic vmstat counters, it doesn't need to do anything at charging and uncharging time. It does, however, need to migrate counts when pages move to a different cgroup in move_account. Prepare the move_account function for the arrival of NR_FILE_PAGES, NR_ANON_MAPPED, NR_ANON_THPS etc. by having a branch for files and a branch for anon, which can then be divided into sub-branches. Link: http://lkml.kernel.org/r/20200508183105.225460-8-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com> Reviewed-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Hugh Dickins <hughd@google.com> Cc: "Kirill A. 
Shutemov" <kirill@shutemov.name> Cc: Michal Hocko <mhocko@suse.com> Cc: Roman Gushchin <guro@fb.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/memcontrol.c | 25 +++++++++++++------------ 1 file changed, 13 insertions(+), 12 deletions(-) --- a/mm/memcontrol.c~mm-memcontrol-prepare-move_account-for-removal-of-private-page-type-counters +++ a/mm/memcontrol.c @@ -5434,7 +5434,6 @@ static int mem_cgroup_move_account(struc struct pglist_data *pgdat; unsigned int nr_pages = compound ? hpage_nr_pages(page) : 1; int ret; - bool anon; VM_BUG_ON(from == to); VM_BUG_ON_PAGE(PageLRU(page), page); @@ -5452,25 +5451,27 @@ static int mem_cgroup_move_account(struc if (page->mem_cgroup != from) goto out_unlock; - anon = PageAnon(page); - pgdat = page_pgdat(page); from_vec = mem_cgroup_lruvec(from, pgdat); to_vec = mem_cgroup_lruvec(to, pgdat); lock_page_memcg(page); - if (!anon && page_mapped(page)) { - __mod_lruvec_state(from_vec, NR_FILE_MAPPED, -nr_pages); - __mod_lruvec_state(to_vec, NR_FILE_MAPPED, nr_pages); - } + if (!PageAnon(page)) { + if (page_mapped(page)) { + __mod_lruvec_state(from_vec, NR_FILE_MAPPED, -nr_pages); + __mod_lruvec_state(to_vec, NR_FILE_MAPPED, nr_pages); + } - if (!anon && PageDirty(page)) { - struct address_space *mapping = page_mapping(page); + if (PageDirty(page)) { + struct address_space *mapping = page_mapping(page); - if (mapping_cap_account_dirty(mapping)) { - __mod_lruvec_state(from_vec, NR_FILE_DIRTY, -nr_pages); - __mod_lruvec_state(to_vec, NR_FILE_DIRTY, nr_pages); + if (mapping_cap_account_dirty(mapping)) { + __mod_lruvec_state(from_vec, NR_FILE_DIRTY, + -nr_pages); + __mod_lruvec_state(to_vec, NR_FILE_DIRTY, + nr_pages); + } } } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 091/131] mm: memcontrol: prepare cgroup vmstat infrastructure for native anon counters 2020-06-03 22:55 incoming Andrew Morton ` (89 preceding siblings ...) 2020-06-03 23:01 ` [patch 090/131] mm: memcontrol: prepare move_account " Andrew Morton @ 2020-06-03 23:01 ` Andrew Morton 2020-06-03 23:01 ` [patch 092/131] mm: memcontrol: switch to native NR_FILE_PAGES and NR_SHMEM counters Andrew Morton ` (40 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:01 UTC (permalink / raw) To: akpm, alex.shi, bsingharora, guro, hannes, hughd, iamjoonsoo.kim, kirill, linux-mm, mhocko, mm-commits, shakeelb, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: memcontrol: prepare cgroup vmstat infrastructure for native anon counters Anonymous compound pages can be mapped by ptes, which means that if we want to track NR_ANON_MAPPED, NR_ANON_THPS on a per-cgroup basis, we have to be prepared to see tail pages in our accounting functions. Make mod_lruvec_page_state() and lock_page_memcg() deal with tail pages correctly, namely by redirecting to the head page which has the page->mem_cgroup set up. Link: http://lkml.kernel.org/r/20200508183105.225460-9-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Alex Shi <alex.shi@linux.alibaba.com> Cc: Hugh Dickins <hughd@google.com> Cc: "Kirill A. 
Shutemov" <kirill@shutemov.name> Cc: Michal Hocko <mhocko@suse.com> Cc: Roman Gushchin <guro@fb.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/memcontrol.h | 5 +++-- mm/memcontrol.c | 9 ++++++--- 2 files changed, 9 insertions(+), 5 deletions(-) --- a/include/linux/memcontrol.h~mm-memcontrol-prepare-cgroup-vmstat-infrastructure-for-native-anon-counters +++ a/include/linux/memcontrol.h @@ -709,16 +709,17 @@ static inline void mod_lruvec_state(stru static inline void __mod_lruvec_page_state(struct page *page, enum node_stat_item idx, int val) { + struct page *head = compound_head(page); /* rmap on tail pages */ pg_data_t *pgdat = page_pgdat(page); struct lruvec *lruvec; /* Untracked pages have no memcg, no lruvec. Update only the node */ - if (!page->mem_cgroup) { + if (!head->mem_cgroup) { __mod_node_page_state(pgdat, idx, val); return; } - lruvec = mem_cgroup_lruvec(page->mem_cgroup, pgdat); + lruvec = mem_cgroup_lruvec(head->mem_cgroup, pgdat); __mod_lruvec_state(lruvec, idx, val); } --- a/mm/memcontrol.c~mm-memcontrol-prepare-cgroup-vmstat-infrastructure-for-native-anon-counters +++ a/mm/memcontrol.c @@ -1981,6 +1981,7 @@ void mem_cgroup_print_oom_group(struct m */ struct mem_cgroup *lock_page_memcg(struct page *page) { + struct page *head = compound_head(page); /* rmap on tail pages */ struct mem_cgroup *memcg; unsigned long flags; @@ -2000,7 +2001,7 @@ struct mem_cgroup *lock_page_memcg(struc if (mem_cgroup_disabled()) return NULL; again: - memcg = page->mem_cgroup; + memcg = head->mem_cgroup; if (unlikely(!memcg)) return NULL; @@ -2008,7 +2009,7 @@ again: return memcg; spin_lock_irqsave(&memcg->move_lock, flags); - if (memcg != page->mem_cgroup) { + if (memcg != head->mem_cgroup) { spin_unlock_irqrestore(&memcg->move_lock, flags); goto again; } @@ -2051,7 +2052,9 @@ void __unlock_page_memcg(struct mem_cgro */ void unlock_page_memcg(struct page *page) 
{ - __unlock_page_memcg(page->mem_cgroup); + struct page *head = compound_head(page); + + __unlock_page_memcg(head->mem_cgroup); } EXPORT_SYMBOL(unlock_page_memcg); _ ^ permalink raw reply [flat|nested] 349+ messages in thread
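The compound_head() redirection that patch 091 adds to mod_lruvec_page_state() and lock_page_memcg() can be illustrated with a minimal stand-in struct page (two fields only; everything here is a simplification of the real kernel definitions):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Toy model of the tail-page redirection this patch adds: only the
 * head page of a compound page has page->mem_cgroup set up, so the
 * accounting helpers must look through compound_head() before
 * dereferencing it.  struct page here is a two-field stand-in.
 */
struct page {
	struct page *head;	/* points to self for head/order-0 pages */
	void *mem_cgroup;	/* set up on the head page only */
};

static struct page *compound_head_model(struct page *page)
{
	return page->head;
}

/* Post-patch behaviour: rmap may hand us a tail page; redirect
 * to the head, whose mem_cgroup is valid, before accounting. */
static void *stat_memcg(struct page *page)
{
	return compound_head_model(page)->mem_cgroup;
}
```

Without the redirection, the pre-patch equivalent would have read page->mem_cgroup directly and seen NULL on a tail page, which is exactly the case the changelog says the helpers must now tolerate.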
* [patch 092/131] mm: memcontrol: switch to native NR_FILE_PAGES and NR_SHMEM counters 2020-06-03 22:55 incoming Andrew Morton ` (90 preceding siblings ...) 2020-06-03 23:01 ` [patch 091/131] mm: memcontrol: prepare cgroup vmstat infrastructure for native anon counters Andrew Morton @ 2020-06-03 23:01 ` Andrew Morton 2020-06-03 23:01 ` [patch 093/131] mm: memcontrol: switch to native NR_ANON_MAPPED counter Andrew Morton ` (39 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:01 UTC (permalink / raw) To: akpm, alex.shi, bsingharora, guro, hannes, hughd, iamjoonsoo.kim, kirill, linux-mm, mhocko, mm-commits, shakeelb, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: memcontrol: switch to native NR_FILE_PAGES and NR_SHMEM counters Memcg maintains private MEMCG_CACHE and NR_SHMEM counters. This divergence from the generic VM accounting means unnecessary code overhead, and creates a dependency for memcg that page->mapping is set up at the time of charging, so that page types can be told apart. Convert the generic accounting sites to mod_lruvec_page_state and friends to maintain the per-cgroup vmstat counters of NR_FILE_PAGES and NR_SHMEM. The page is already locked in these places, so page->mem_cgroup is stable; we only need minimal tweaks of two mem_cgroup_migrate() calls to ensure it's set up in time. Then replace MEMCG_CACHE with NR_FILE_PAGES and delete the private NR_SHMEM accounting sites. Link: http://lkml.kernel.org/r/20200508183105.225460-10-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Alex Shi <alex.shi@linux.alibaba.com> Cc: Hugh Dickins <hughd@google.com> Cc: "Kirill A. 
Shutemov" <kirill@shutemov.name> Cc: Michal Hocko <mhocko@suse.com> Cc: Roman Gushchin <guro@fb.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/memcontrol.h | 3 +-- mm/filemap.c | 17 +++++++++-------- mm/khugepaged.c | 16 +++++++++++----- mm/memcontrol.c | 28 +++++++++++----------------- mm/migrate.c | 15 +++++++++++---- mm/shmem.c | 14 +++++++------- 6 files changed, 50 insertions(+), 43 deletions(-) --- a/include/linux/memcontrol.h~mm-memcontrol-switch-to-native-nr_file_pages-and-nr_shmem-counters +++ a/include/linux/memcontrol.h @@ -29,8 +29,7 @@ struct kmem_cache; /* Cgroup-specific page state, on top of universal node page state */ enum memcg_stat_item { - MEMCG_CACHE = NR_VM_NODE_STAT_ITEMS, - MEMCG_RSS, + MEMCG_RSS = NR_VM_NODE_STAT_ITEMS, MEMCG_RSS_HUGE, MEMCG_SWAP, MEMCG_SOCK, --- a/mm/filemap.c~mm-memcontrol-switch-to-native-nr_file_pages-and-nr_shmem-counters +++ a/mm/filemap.c @@ -199,9 +199,9 @@ static void unaccount_page_cache_page(st nr = hpage_nr_pages(page); - __mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, -nr); + __mod_lruvec_page_state(page, NR_FILE_PAGES, -nr); if (PageSwapBacked(page)) { - __mod_node_page_state(page_pgdat(page), NR_SHMEM, -nr); + __mod_lruvec_page_state(page, NR_SHMEM, -nr); if (PageTransHuge(page)) __dec_node_page_state(page, NR_SHMEM_THPS); } else if (PageTransHuge(page)) { @@ -802,21 +802,22 @@ int replace_page_cache_page(struct page new->mapping = mapping; new->index = offset; + mem_cgroup_migrate(old, new); + xas_lock_irqsave(&xas, flags); xas_store(&xas, new); old->mapping = NULL; /* hugetlb pages do not participate in page cache accounting. 
*/ if (!PageHuge(old)) - __dec_node_page_state(old, NR_FILE_PAGES); + __dec_lruvec_page_state(old, NR_FILE_PAGES); if (!PageHuge(new)) - __inc_node_page_state(new, NR_FILE_PAGES); + __inc_lruvec_page_state(new, NR_FILE_PAGES); if (PageSwapBacked(old)) - __dec_node_page_state(old, NR_SHMEM); + __dec_lruvec_page_state(old, NR_SHMEM); if (PageSwapBacked(new)) - __inc_node_page_state(new, NR_SHMEM); + __inc_lruvec_page_state(new, NR_SHMEM); xas_unlock_irqrestore(&xas, flags); - mem_cgroup_migrate(old, new); if (freepage) freepage(old); put_page(old); @@ -867,7 +868,7 @@ static int __add_to_page_cache_locked(st /* hugetlb pages do not participate in page cache accounting */ if (!huge) - __inc_node_page_state(page, NR_FILE_PAGES); + __inc_lruvec_page_state(page, NR_FILE_PAGES); unlock: xas_unlock_irq(&xas); } while (xas_nomem(&xas, gfp_mask & GFP_RECLAIM_MASK)); --- a/mm/khugepaged.c~mm-memcontrol-switch-to-native-nr_file_pages-and-nr_shmem-counters +++ a/mm/khugepaged.c @@ -1844,12 +1844,18 @@ out_unlock: } if (nr_none) { - struct zone *zone = page_zone(new_page); - - __mod_node_page_state(zone->zone_pgdat, NR_FILE_PAGES, nr_none); + struct lruvec *lruvec; + /* + * XXX: We have started try_charge and pinned the + * memcg, but the page isn't committed yet so we + * cannot use mod_lruvec_page_state(). This hackery + * will be cleaned up when remove the page->mapping + * dependency from memcg and fully charge above. 
+ */ + lruvec = mem_cgroup_lruvec(memcg, page_pgdat(new_page)); + __mod_lruvec_state(lruvec, NR_FILE_PAGES, nr_none); if (is_shmem) - __mod_node_page_state(zone->zone_pgdat, - NR_SHMEM, nr_none); + __mod_lruvec_state(lruvec, NR_SHMEM, nr_none); } xa_locked: --- a/mm/memcontrol.c~mm-memcontrol-switch-to-native-nr_file_pages-and-nr_shmem-counters +++ a/mm/memcontrol.c @@ -842,11 +842,6 @@ static void mem_cgroup_charge_statistics */ if (PageAnon(page)) __mod_memcg_state(memcg, MEMCG_RSS, nr_pages); - else { - __mod_memcg_state(memcg, MEMCG_CACHE, nr_pages); - if (PageSwapBacked(page)) - __mod_memcg_state(memcg, NR_SHMEM, nr_pages); - } if (abs(nr_pages) > 1) { VM_BUG_ON_PAGE(!PageTransHuge(page), page); @@ -1392,7 +1387,7 @@ static char *memory_stat_format(struct m (u64)memcg_page_state(memcg, MEMCG_RSS) * PAGE_SIZE); seq_buf_printf(&s, "file %llu\n", - (u64)memcg_page_state(memcg, MEMCG_CACHE) * + (u64)memcg_page_state(memcg, NR_FILE_PAGES) * PAGE_SIZE); seq_buf_printf(&s, "kernel_stack %llu\n", (u64)memcg_page_state(memcg, MEMCG_KERNEL_STACK_KB) * @@ -3357,7 +3352,7 @@ static unsigned long mem_cgroup_usage(st unsigned long val; if (mem_cgroup_is_root(memcg)) { - val = memcg_page_state(memcg, MEMCG_CACHE) + + val = memcg_page_state(memcg, NR_FILE_PAGES) + memcg_page_state(memcg, MEMCG_RSS); if (swap) val += memcg_page_state(memcg, MEMCG_SWAP); @@ -3828,7 +3823,7 @@ static int memcg_numa_stat_show(struct s #endif /* CONFIG_NUMA */ static const unsigned int memcg1_stats[] = { - MEMCG_CACHE, + NR_FILE_PAGES, MEMCG_RSS, MEMCG_RSS_HUGE, NR_SHMEM, @@ -5461,6 +5456,14 @@ static int mem_cgroup_move_account(struc lock_page_memcg(page); if (!PageAnon(page)) { + __mod_lruvec_state(from_vec, NR_FILE_PAGES, -nr_pages); + __mod_lruvec_state(to_vec, NR_FILE_PAGES, nr_pages); + + if (PageSwapBacked(page)) { + __mod_lruvec_state(from_vec, NR_SHMEM, -nr_pages); + __mod_lruvec_state(to_vec, NR_SHMEM, nr_pages); + } + if (page_mapped(page)) { __mod_lruvec_state(from_vec, NR_FILE_MAPPED, 
-nr_pages); __mod_lruvec_state(to_vec, NR_FILE_MAPPED, nr_pages); @@ -6673,10 +6676,8 @@ struct uncharge_gather { unsigned long nr_pages; unsigned long pgpgout; unsigned long nr_anon; - unsigned long nr_file; unsigned long nr_kmem; unsigned long nr_huge; - unsigned long nr_shmem; struct page *dummy_page; }; @@ -6700,9 +6701,7 @@ static void uncharge_batch(const struct local_irq_save(flags); __mod_memcg_state(ug->memcg, MEMCG_RSS, -ug->nr_anon); - __mod_memcg_state(ug->memcg, MEMCG_CACHE, -ug->nr_file); __mod_memcg_state(ug->memcg, MEMCG_RSS_HUGE, -ug->nr_huge); - __mod_memcg_state(ug->memcg, NR_SHMEM, -ug->nr_shmem); __count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout); __this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_pages); memcg_check_events(ug->memcg, ug->dummy_page); @@ -6743,11 +6742,6 @@ static void uncharge_page(struct page *p ug->nr_huge += nr_pages; if (PageAnon(page)) ug->nr_anon += nr_pages; - else { - ug->nr_file += nr_pages; - if (PageSwapBacked(page)) - ug->nr_shmem += nr_pages; - } ug->pgpgout++; } else { ug->nr_kmem += nr_pages; --- a/mm/migrate.c~mm-memcontrol-switch-to-native-nr_file_pages-and-nr_shmem-counters +++ a/mm/migrate.c @@ -490,11 +490,18 @@ int migrate_page_move_mapping(struct add * are mapped to swap space. 
*/ if (newzone != oldzone) { - __dec_node_state(oldzone->zone_pgdat, NR_FILE_PAGES); - __inc_node_state(newzone->zone_pgdat, NR_FILE_PAGES); + struct lruvec *old_lruvec, *new_lruvec; + struct mem_cgroup *memcg; + + memcg = page_memcg(page); + old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat); + new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat); + + __dec_lruvec_state(old_lruvec, NR_FILE_PAGES); + __inc_lruvec_state(new_lruvec, NR_FILE_PAGES); if (PageSwapBacked(page) && !PageSwapCache(page)) { - __dec_node_state(oldzone->zone_pgdat, NR_SHMEM); - __inc_node_state(newzone->zone_pgdat, NR_SHMEM); + __dec_lruvec_state(old_lruvec, NR_SHMEM); + __inc_lruvec_state(new_lruvec, NR_SHMEM); } if (dirty && mapping_cap_account_dirty(mapping)) { __dec_node_state(oldzone->zone_pgdat, NR_FILE_DIRTY); --- a/mm/shmem.c~mm-memcontrol-switch-to-native-nr_file_pages-and-nr_shmem-counters +++ a/mm/shmem.c @@ -653,8 +653,8 @@ next: __inc_node_page_state(page, NR_SHMEM_THPS); } mapping->nrpages += nr; - __mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, nr); - __mod_node_page_state(page_pgdat(page), NR_SHMEM, nr); + __mod_lruvec_page_state(page, NR_FILE_PAGES, nr); + __mod_lruvec_page_state(page, NR_SHMEM, nr); unlock: xas_unlock_irq(&xas); } while (xas_nomem(&xas, gfp)); @@ -685,8 +685,8 @@ static void shmem_delete_from_page_cache error = shmem_replace_entry(mapping, page->index, page, radswap); page->mapping = NULL; mapping->nrpages--; - __dec_node_page_state(page, NR_FILE_PAGES); - __dec_node_page_state(page, NR_SHMEM); + __dec_lruvec_page_state(page, NR_FILE_PAGES); + __dec_lruvec_page_state(page, NR_SHMEM); xa_unlock_irq(&mapping->i_pages); put_page(page); BUG_ON(error); @@ -1593,8 +1593,9 @@ static int shmem_replace_page(struct pag xa_lock_irq(&swap_mapping->i_pages); error = shmem_replace_entry(swap_mapping, swap_index, oldpage, newpage); if (!error) { - __inc_node_page_state(newpage, NR_FILE_PAGES); - __dec_node_page_state(oldpage, NR_FILE_PAGES); + 
mem_cgroup_migrate(oldpage, newpage); + __inc_lruvec_page_state(newpage, NR_FILE_PAGES); + __dec_lruvec_page_state(oldpage, NR_FILE_PAGES); } xa_unlock_irq(&swap_mapping->i_pages); @@ -1606,7 +1607,6 @@ static int shmem_replace_page(struct pag */ oldpage = newpage; } else { - mem_cgroup_migrate(oldpage, newpage); lru_cache_add_anon(newpage); *pagep = newpage; } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
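The effect of converting __mod_node_page_state() call sites to __mod_lruvec_page_state(), as patch 092 does throughout, can be sketched with plain counters (a toy model; the real lruvec plumbing also handles per-cpu batching and hierarchy propagation):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of why the call-site swap carries the whole patch: the
 * lruvec variant updates the per-node counter and mirrors the same
 * delta into the owning cgroup, so NR_FILE_PAGES/NR_SHMEM no longer
 * need private memcg bookkeeping.  Plain longs stand in for vmstat.
 */
struct node_stat { long nr_file_pages; };
struct memcg_stat { long nr_file_pages; };
struct lruvec_model {
	struct node_stat *node;
	struct memcg_stat *memcg;	/* NULL when memcg is disabled */
};

static void mod_node_state(struct node_stat *node, long val)
{
	node->nr_file_pages += val;	/* old call: node level only */
}

static void mod_lruvec_state(struct lruvec_model *lruvec, long val)
{
	mod_node_state(lruvec->node, val);	/* keep node accounting */
	if (lruvec->memcg)			/* and the cgroup copy */
		lruvec->memcg->nr_file_pages += val;
}
```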
* [patch 093/131] mm: memcontrol: switch to native NR_ANON_MAPPED counter 2020-06-03 22:55 incoming Andrew Morton ` (91 preceding siblings ...) 2020-06-03 23:01 ` [patch 092/131] mm: memcontrol: switch to native NR_FILE_PAGES and NR_SHMEM counters Andrew Morton @ 2020-06-03 23:01 ` Andrew Morton 2020-06-03 23:02 ` [patch 094/131] mm: memcontrol: switch to native NR_ANON_THPS counter Andrew Morton ` (38 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:01 UTC (permalink / raw) To: akpm, alex.shi, bsingharora, guro, hannes, hughd, iamjoonsoo.kim, kirill, linux-mm, mhocko, mm-commits, shakeelb, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: memcontrol: switch to native NR_ANON_MAPPED counter Memcg maintains a private MEMCG_RSS counter. This divergence from the generic VM accounting means unnecessary code overhead, and creates a dependency for memcg that page->mapping is set up at the time of charging, so that page types can be told apart. Convert the generic accounting sites to mod_lruvec_page_state and friends to maintain the per-cgroup vmstat counter of NR_ANON_MAPPED. We use lock_page_memcg() to stabilize page->mem_cgroup during rmap changes, the same way we do for NR_FILE_MAPPED. With the previous patch removing MEMCG_CACHE and the private NR_SHMEM counter, this patch finally eliminates the need to have page->mapping set up at charge time. However, we need to have page->mem_cgroup set up by the time rmap runs and does the accounting, so switch the commit and the rmap callbacks around. v2: fix temporary accounting bug by switching rmap<->commit (Joonsoo) Link: http://lkml.kernel.org/r/20200508183105.225460-11-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Alex Shi <alex.shi@linux.alibaba.com> Cc: Hugh Dickins <hughd@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: "Kirill A. 
Shutemov" <kirill@shutemov.name> Cc: Michal Hocko <mhocko@suse.com> Cc: Roman Gushchin <guro@fb.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/memcontrol.h | 3 -- kernel/events/uprobes.c | 2 - mm/huge_memory.c | 2 - mm/khugepaged.c | 2 - mm/memcontrol.c | 27 ++++++-------------- mm/memory.c | 10 +++---- mm/migrate.c | 2 - mm/rmap.c | 47 +++++++++++++++++++++-------------- mm/swapfile.c | 4 +- mm/userfaultfd.c | 2 - 10 files changed, 51 insertions(+), 50 deletions(-) --- a/include/linux/memcontrol.h~mm-memcontrol-switch-to-native-nr_anon_mapped-counter +++ a/include/linux/memcontrol.h @@ -29,8 +29,7 @@ struct kmem_cache; /* Cgroup-specific page state, on top of universal node page state */ enum memcg_stat_item { - MEMCG_RSS = NR_VM_NODE_STAT_ITEMS, - MEMCG_RSS_HUGE, + MEMCG_RSS_HUGE = NR_VM_NODE_STAT_ITEMS, MEMCG_SWAP, MEMCG_SOCK, /* XXX: why are these zone and not node counters? 
*/ --- a/kernel/events/uprobes.c~mm-memcontrol-switch-to-native-nr_anon_mapped-counter +++ a/kernel/events/uprobes.c @@ -188,8 +188,8 @@ static int __replace_page(struct vm_area if (new_page) { get_page(new_page); - page_add_new_anon_rmap(new_page, vma, addr, false); mem_cgroup_commit_charge(new_page, memcg, false); + page_add_new_anon_rmap(new_page, vma, addr, false); lru_cache_add_active_or_unevictable(new_page, vma); } else /* no new page, just dec_mm_counter for old_page */ --- a/mm/huge_memory.c~mm-memcontrol-switch-to-native-nr_anon_mapped-counter +++ a/mm/huge_memory.c @@ -640,8 +640,8 @@ static vm_fault_t __do_huge_pmd_anonymou entry = mk_huge_pmd(page, vma->vm_page_prot); entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); - page_add_new_anon_rmap(page, vma, haddr, true); mem_cgroup_commit_charge(page, memcg, false); + page_add_new_anon_rmap(page, vma, haddr, true); lru_cache_add_active_or_unevictable(page, vma); pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable); set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry); --- a/mm/khugepaged.c~mm-memcontrol-switch-to-native-nr_anon_mapped-counter +++ a/mm/khugepaged.c @@ -1175,8 +1175,8 @@ static void collapse_huge_page(struct mm spin_lock(pmd_ptl); BUG_ON(!pmd_none(*pmd)); - page_add_new_anon_rmap(new_page, vma, address, true); mem_cgroup_commit_charge(new_page, memcg, false); + page_add_new_anon_rmap(new_page, vma, address, true); count_memcg_events(memcg, THP_COLLAPSE_ALLOC, 1); lru_cache_add_active_or_unevictable(new_page, vma); pgtable_trans_huge_deposit(mm, pmd, pgtable); --- a/mm/memcontrol.c~mm-memcontrol-switch-to-native-nr_anon_mapped-counter +++ a/mm/memcontrol.c @@ -836,13 +836,6 @@ static void mem_cgroup_charge_statistics struct page *page, int nr_pages) { - /* - * Here, RSS means 'mapped anon' and anon's SwapCache. Shmem/tmpfs is - * counted as CACHE even if it's on ANON LRU. 
- */ - if (PageAnon(page)) - __mod_memcg_state(memcg, MEMCG_RSS, nr_pages); - if (abs(nr_pages) > 1) { VM_BUG_ON_PAGE(!PageTransHuge(page), page); __mod_memcg_state(memcg, MEMCG_RSS_HUGE, nr_pages); @@ -1384,7 +1377,7 @@ static char *memory_stat_format(struct m */ seq_buf_printf(&s, "anon %llu\n", - (u64)memcg_page_state(memcg, MEMCG_RSS) * + (u64)memcg_page_state(memcg, NR_ANON_MAPPED) * PAGE_SIZE); seq_buf_printf(&s, "file %llu\n", (u64)memcg_page_state(memcg, NR_FILE_PAGES) * @@ -3353,7 +3346,7 @@ static unsigned long mem_cgroup_usage(st if (mem_cgroup_is_root(memcg)) { val = memcg_page_state(memcg, NR_FILE_PAGES) + - memcg_page_state(memcg, MEMCG_RSS); + memcg_page_state(memcg, NR_ANON_MAPPED); if (swap) val += memcg_page_state(memcg, MEMCG_SWAP); } else { @@ -3824,7 +3817,7 @@ static int memcg_numa_stat_show(struct s static const unsigned int memcg1_stats[] = { NR_FILE_PAGES, - MEMCG_RSS, + NR_ANON_MAPPED, MEMCG_RSS_HUGE, NR_SHMEM, NR_FILE_MAPPED, @@ -5455,7 +5448,12 @@ static int mem_cgroup_move_account(struc lock_page_memcg(page); - if (!PageAnon(page)) { + if (PageAnon(page)) { + if (page_mapped(page)) { + __mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages); + __mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages); + } + } else { __mod_lruvec_state(from_vec, NR_FILE_PAGES, -nr_pages); __mod_lruvec_state(to_vec, NR_FILE_PAGES, nr_pages); @@ -6589,7 +6587,6 @@ void mem_cgroup_commit_charge(struct pag { unsigned int nr_pages = hpage_nr_pages(page); - VM_BUG_ON_PAGE(!page->mapping, page); VM_BUG_ON_PAGE(PageLRU(page) && !lrucare, page); if (mem_cgroup_disabled()) @@ -6662,8 +6659,6 @@ int mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg; int ret; - VM_BUG_ON_PAGE(!page->mapping, page); - ret = mem_cgroup_try_charge(page, mm, gfp_mask, &memcg); if (ret) return ret; @@ -6675,7 +6670,6 @@ struct uncharge_gather { struct mem_cgroup *memcg; unsigned long nr_pages; unsigned long pgpgout; - unsigned long nr_anon; unsigned long nr_kmem; unsigned long 
nr_huge; struct page *dummy_page; @@ -6700,7 +6694,6 @@ static void uncharge_batch(const struct } local_irq_save(flags); - __mod_memcg_state(ug->memcg, MEMCG_RSS, -ug->nr_anon); __mod_memcg_state(ug->memcg, MEMCG_RSS_HUGE, -ug->nr_huge); __count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout); __this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_pages); @@ -6740,8 +6733,6 @@ static void uncharge_page(struct page *p if (!PageKmemcg(page)) { if (PageTransHuge(page)) ug->nr_huge += nr_pages; - if (PageAnon(page)) - ug->nr_anon += nr_pages; ug->pgpgout++; } else { ug->nr_kmem += nr_pages; --- a/mm/memory.c~mm-memcontrol-switch-to-native-nr_anon_mapped-counter +++ a/mm/memory.c @@ -2710,8 +2710,8 @@ static vm_fault_t wp_page_copy(struct vm * thread doing COW. */ ptep_clear_flush_notify(vma, vmf->address, vmf->pte); - page_add_new_anon_rmap(new_page, vma, vmf->address, false); mem_cgroup_commit_charge(new_page, memcg, false); + page_add_new_anon_rmap(new_page, vma, vmf->address, false); lru_cache_add_active_or_unevictable(new_page, vma); /* * We call the notify macro here because, when using secondary @@ -3243,12 +3243,12 @@ vm_fault_t do_swap_page(struct vm_fault /* ksm created a completely new copy */ if (unlikely(page != swapcache && swapcache)) { - page_add_new_anon_rmap(page, vma, vmf->address, false); mem_cgroup_commit_charge(page, memcg, false); + page_add_new_anon_rmap(page, vma, vmf->address, false); lru_cache_add_active_or_unevictable(page, vma); } else { - do_page_add_anon_rmap(page, vma, vmf->address, exclusive); mem_cgroup_commit_charge(page, memcg, true); + do_page_add_anon_rmap(page, vma, vmf->address, exclusive); activate_page(page); } @@ -3390,8 +3390,8 @@ static vm_fault_t do_anonymous_page(stru } inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES); - page_add_new_anon_rmap(page, vma, vmf->address, false); mem_cgroup_commit_charge(page, memcg, false); + page_add_new_anon_rmap(page, vma, vmf->address, false); lru_cache_add_active_or_unevictable(page, 
vma); setpte: set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry); @@ -3652,8 +3652,8 @@ vm_fault_t alloc_set_pte(struct vm_fault /* copy-on-write page */ if (write && !(vma->vm_flags & VM_SHARED)) { inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES); - page_add_new_anon_rmap(page, vma, vmf->address, false); mem_cgroup_commit_charge(page, memcg, false); + page_add_new_anon_rmap(page, vma, vmf->address, false); lru_cache_add_active_or_unevictable(page, vma); } else { inc_mm_counter_fast(vma->vm_mm, mm_counter_file(page)); --- a/mm/migrate.c~mm-memcontrol-switch-to-native-nr_anon_mapped-counter +++ a/mm/migrate.c @@ -2832,8 +2832,8 @@ static void migrate_vma_insert_page(stru goto unlock_abort; inc_mm_counter(mm, MM_ANONPAGES); - page_add_new_anon_rmap(page, vma, addr, false); mem_cgroup_commit_charge(page, memcg, false); + page_add_new_anon_rmap(page, vma, addr, false); if (!is_zone_device_page(page)) lru_cache_add_active_or_unevictable(page, vma); get_page(page); --- a/mm/rmap.c~mm-memcontrol-switch-to-native-nr_anon_mapped-counter +++ a/mm/rmap.c @@ -1114,6 +1114,11 @@ void do_page_add_anon_rmap(struct page * bool compound = flags & RMAP_COMPOUND; bool first; + if (unlikely(PageKsm(page))) + lock_page_memcg(page); + else + VM_BUG_ON_PAGE(!PageLocked(page), page); + if (compound) { atomic_t *mapcount; VM_BUG_ON_PAGE(!PageLocked(page), page); @@ -1134,12 +1139,13 @@ void do_page_add_anon_rmap(struct page * */ if (compound) __inc_node_page_state(page, NR_ANON_THPS); - __mod_node_page_state(page_pgdat(page), NR_ANON_MAPPED, nr); + __mod_lruvec_page_state(page, NR_ANON_MAPPED, nr); } - if (unlikely(PageKsm(page))) - return; - VM_BUG_ON_PAGE(!PageLocked(page), page); + if (unlikely(PageKsm(page))) { + unlock_page_memcg(page); + return; + } /* address might be in next vma when migration races vma_adjust */ if (first) @@ -1181,7 +1187,7 @@ void page_add_new_anon_rmap(struct page /* increment count (starts at -1) */ atomic_set(&page->_mapcount, 0); } - 
__mod_node_page_state(page_pgdat(page), NR_ANON_MAPPED, nr); + __mod_lruvec_page_state(page, NR_ANON_MAPPED, nr); __page_set_anon_rmap(page, vma, address, 1); } @@ -1230,13 +1236,12 @@ static void page_remove_file_rmap(struct int i, nr = 1; VM_BUG_ON_PAGE(compound && !PageHead(page), page); - lock_page_memcg(page); /* Hugepages are not counted in NR_FILE_MAPPED for now. */ if (unlikely(PageHuge(page))) { /* hugetlb pages are always mapped with pmds */ atomic_dec(compound_mapcount_ptr(page)); - goto out; + return; } /* page still mapped by someone else? */ @@ -1246,14 +1251,14 @@ static void page_remove_file_rmap(struct nr++; } if (!atomic_add_negative(-1, compound_mapcount_ptr(page))) - goto out; + return; if (PageSwapBacked(page)) __dec_node_page_state(page, NR_SHMEM_PMDMAPPED); else __dec_node_page_state(page, NR_FILE_PMDMAPPED); } else { if (!atomic_add_negative(-1, &page->_mapcount)) - goto out; + return; } /* @@ -1265,8 +1270,6 @@ static void page_remove_file_rmap(struct if (unlikely(PageMlocked(page))) clear_page_mlock(page); -out: - unlock_page_memcg(page); } static void page_remove_anon_compound_rmap(struct page *page) @@ -1310,7 +1313,7 @@ static void page_remove_anon_compound_rm clear_page_mlock(page); if (nr) - __mod_node_page_state(page_pgdat(page), NR_ANON_MAPPED, -nr); + __mod_lruvec_page_state(page, NR_ANON_MAPPED, -nr); } /** @@ -1322,22 +1325,28 @@ static void page_remove_anon_compound_rm */ void page_remove_rmap(struct page *page, bool compound) { - if (!PageAnon(page)) - return page_remove_file_rmap(page, compound); + lock_page_memcg(page); - if (compound) - return page_remove_anon_compound_rmap(page); + if (!PageAnon(page)) { + page_remove_file_rmap(page, compound); + goto out; + } + + if (compound) { + page_remove_anon_compound_rmap(page); + goto out; + } /* page still mapped by someone else? 
*/ if (!atomic_add_negative(-1, &page->_mapcount)) - return; + goto out; /* * We use the irq-unsafe __{inc|mod}_zone_page_stat because * these counters are not modified in interrupt context, and * pte lock(a spinlock) is held, which implies preemption disabled. */ - __dec_node_page_state(page, NR_ANON_MAPPED); + __dec_lruvec_page_state(page, NR_ANON_MAPPED); if (unlikely(PageMlocked(page))) clear_page_mlock(page); @@ -1354,6 +1363,8 @@ void page_remove_rmap(struct page *page, * Leaving it set also helps swapoff to reinstate ptes * faster for those pages still in swapcache. */ +out: + unlock_page_memcg(page); } /* --- a/mm/swapfile.c~mm-memcontrol-switch-to-native-nr_anon_mapped-counter +++ a/mm/swapfile.c @@ -1920,11 +1920,11 @@ static int unuse_pte(struct vm_area_stru set_pte_at(vma->vm_mm, addr, pte, pte_mkold(mk_pte(page, vma->vm_page_prot))); if (page == swapcache) { - page_add_anon_rmap(page, vma, addr, false); mem_cgroup_commit_charge(page, memcg, true); + page_add_anon_rmap(page, vma, addr, false); } else { /* ksm created a completely new copy */ - page_add_new_anon_rmap(page, vma, addr, false); mem_cgroup_commit_charge(page, memcg, false); + page_add_new_anon_rmap(page, vma, addr, false); lru_cache_add_active_or_unevictable(page, vma); } swap_free(entry); --- a/mm/userfaultfd.c~mm-memcontrol-switch-to-native-nr_anon_mapped-counter +++ a/mm/userfaultfd.c @@ -123,8 +123,8 @@ static int mcopy_atomic_pte(struct mm_st goto out_release_uncharge_unlock; inc_mm_counter(dst_mm, MM_ANONPAGES); - page_add_new_anon_rmap(page, dst_vma, dst_addr, false); mem_cgroup_commit_charge(page, memcg, false); + page_add_new_anon_rmap(page, dst_vma, dst_addr, false); lru_cache_add_active_or_unevictable(page, dst_vma); set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte); _
* [patch 094/131] mm: memcontrol: switch to native NR_ANON_THPS counter 2020-06-03 22:55 incoming Andrew Morton ` (92 preceding siblings ...) 2020-06-03 23:01 ` [patch 093/131] mm: memcontrol: switch to native NR_ANON_MAPPED counter Andrew Morton @ 2020-06-03 23:02 ` Andrew Morton 2020-06-03 23:02 ` [patch 095/131] mm: memcontrol: convert anon and file-thp to new mem_cgroup_charge() API Andrew Morton ` (37 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:02 UTC (permalink / raw) To: akpm, alex.shi, bsingharora, guro, hannes, hughd, iamjoonsoo.kim, kirill, linux-mm, mhocko, mm-commits, naresh.kamboju, rdunlap, shakeelb, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: memcontrol: switch to native NR_ANON_THPS counter With rmap memcg locking already in place for NR_ANON_MAPPED, it's just a small step to remove the MEMCG_RSS_HUGE wart and switch memcg to the native NR_ANON_THPS accounting sites. [hannes@cmpxchg.org: fixes] Link: http://lkml.kernel.org/r/20200512121750.GA397968@cmpxchg.org Link: http://lkml.kernel.org/r/20200508183105.225460-12-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Tested-by: Naresh Kamboju <naresh.kamboju@linaro.org> Acked-by: Randy Dunlap <rdunlap@infradead.org> [build-tested] Cc: Alex Shi <alex.shi@linux.alibaba.com> Cc: Hugh Dickins <hughd@google.com> Cc: "Kirill A. 
Shutemov" <kirill@shutemov.name> Cc: Michal Hocko <mhocko@suse.com> Cc: Roman Gushchin <guro@fb.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/memcontrol.h | 3 -- mm/huge_memory.c | 4 ++ mm/memcontrol.c | 47 +++++++++++++++++------------------ mm/rmap.c | 6 ++-- 4 files changed, 31 insertions(+), 29 deletions(-) --- a/include/linux/memcontrol.h~mm-memcontrol-switch-to-native-nr_anon_thps-counter +++ a/include/linux/memcontrol.h @@ -29,8 +29,7 @@ struct kmem_cache; /* Cgroup-specific page state, on top of universal node page state */ enum memcg_stat_item { - MEMCG_RSS_HUGE = NR_VM_NODE_STAT_ITEMS, - MEMCG_SWAP, + MEMCG_SWAP = NR_VM_NODE_STAT_ITEMS, MEMCG_SOCK, /* XXX: why are these zone and not node counters? */ MEMCG_KERNEL_STACK_KB, --- a/mm/huge_memory.c~mm-memcontrol-switch-to-native-nr_anon_thps-counter +++ a/mm/huge_memory.c @@ -2159,15 +2159,17 @@ static void __split_huge_pmd_locked(stru atomic_inc(&page[i]._mapcount); } + lock_page_memcg(page); if (atomic_add_negative(-1, compound_mapcount_ptr(page))) { /* Last compound_mapcount is gone. */ - __dec_node_page_state(page, NR_ANON_THPS); + __dec_lruvec_page_state(page, NR_ANON_THPS); if (TestClearPageDoubleMap(page)) { /* No need in mapcount reference anymore */ for (i = 0; i < HPAGE_PMD_NR; i++) atomic_dec(&page[i]._mapcount); } } + unlock_page_memcg(page); smp_wmb(); /* make pte visible before pmd */ pmd_populate(mm, pmd, pgtable); --- a/mm/memcontrol.c~mm-memcontrol-switch-to-native-nr_anon_thps-counter +++ a/mm/memcontrol.c @@ -836,11 +836,6 @@ static void mem_cgroup_charge_statistics struct page *page, int nr_pages) { - if (abs(nr_pages) > 1) { - VM_BUG_ON_PAGE(!PageTransHuge(page), page); - __mod_memcg_state(memcg, MEMCG_RSS_HUGE, nr_pages); - } - /* pagein of a big page is an event. 
So, ignore page size */ if (nr_pages > 0) __count_memcg_events(memcg, PGPGIN, 1); @@ -1406,15 +1401,11 @@ static char *memory_stat_format(struct m (u64)memcg_page_state(memcg, NR_WRITEBACK) * PAGE_SIZE); - /* - * TODO: We should eventually replace our own MEMCG_RSS_HUGE counter - * with the NR_ANON_THP vm counter, but right now it's a pain in the - * arse because it requires migrating the work out of rmap to a place - * where the page->mem_cgroup is set up and stable. - */ +#ifdef CONFIG_TRANSPARENT_HUGEPAGE seq_buf_printf(&s, "anon_thp %llu\n", - (u64)memcg_page_state(memcg, MEMCG_RSS_HUGE) * - PAGE_SIZE); + (u64)memcg_page_state(memcg, NR_ANON_THPS) * + HPAGE_PMD_SIZE); +#endif for (i = 0; i < NR_LRU_LISTS; i++) seq_buf_printf(&s, "%s %llu\n", lru_list_name(i), @@ -3061,8 +3052,6 @@ void mem_cgroup_split_huge_fixup(struct for (i = 1; i < HPAGE_PMD_NR; i++) head[i].mem_cgroup = head->mem_cgroup; - - __mod_memcg_state(head->mem_cgroup, MEMCG_RSS_HUGE, -HPAGE_PMD_NR); } #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ @@ -3818,7 +3807,9 @@ static int memcg_numa_stat_show(struct s static const unsigned int memcg1_stats[] = { NR_FILE_PAGES, NR_ANON_MAPPED, - MEMCG_RSS_HUGE, +#ifdef CONFIG_TRANSPARENT_HUGEPAGE + NR_ANON_THPS, +#endif NR_SHMEM, NR_FILE_MAPPED, NR_FILE_DIRTY, @@ -3829,7 +3820,9 @@ static const unsigned int memcg1_stats[] static const char *const memcg1_stat_names[] = { "cache", "rss", +#ifdef CONFIG_TRANSPARENT_HUGEPAGE "rss_huge", +#endif "shmem", "mapped_file", "dirty", @@ -3855,11 +3848,16 @@ static int memcg_stat_show(struct seq_fi BUILD_BUG_ON(ARRAY_SIZE(memcg1_stat_names) != ARRAY_SIZE(memcg1_stats)); for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) { + unsigned long nr; + if (memcg1_stats[i] == MEMCG_SWAP && !do_memsw_account()) continue; - seq_printf(m, "%s %lu\n", memcg1_stat_names[i], - memcg_page_state_local(memcg, memcg1_stats[i]) * - PAGE_SIZE); + nr = memcg_page_state_local(memcg, memcg1_stats[i]); +#ifdef CONFIG_TRANSPARENT_HUGEPAGE + if 
(memcg1_stats[i] == NR_ANON_THPS) + nr *= HPAGE_PMD_NR; +#endif + seq_printf(m, "%s %lu\n", memcg1_stat_names[i], nr * PAGE_SIZE); } for (i = 0; i < ARRAY_SIZE(memcg1_events); i++) @@ -5452,6 +5450,13 @@ static int mem_cgroup_move_account(struc if (page_mapped(page)) { __mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages); __mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages); + if (PageTransHuge(page)) { + __mod_lruvec_state(from_vec, NR_ANON_THPS, + -nr_pages); + __mod_lruvec_state(to_vec, NR_ANON_THPS, + nr_pages); + } + } } else { __mod_lruvec_state(from_vec, NR_FILE_PAGES, -nr_pages); @@ -6671,7 +6676,6 @@ struct uncharge_gather { unsigned long nr_pages; unsigned long pgpgout; unsigned long nr_kmem; - unsigned long nr_huge; struct page *dummy_page; }; @@ -6694,7 +6698,6 @@ static void uncharge_batch(const struct } local_irq_save(flags); - __mod_memcg_state(ug->memcg, MEMCG_RSS_HUGE, -ug->nr_huge); __count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout); __this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_pages); memcg_check_events(ug->memcg, ug->dummy_page); @@ -6731,8 +6734,6 @@ static void uncharge_page(struct page *p ug->nr_pages += nr_pages; if (!PageKmemcg(page)) { - if (PageTransHuge(page)) - ug->nr_huge += nr_pages; ug->pgpgout++; } else { ug->nr_kmem += nr_pages; --- a/mm/rmap.c~mm-memcontrol-switch-to-native-nr_anon_thps-counter +++ a/mm/rmap.c @@ -1138,7 +1138,7 @@ void do_page_add_anon_rmap(struct page * * disabled. 
*/ if (compound) - __inc_node_page_state(page, NR_ANON_THPS); + __inc_lruvec_page_state(page, NR_ANON_THPS); __mod_lruvec_page_state(page, NR_ANON_MAPPED, nr); } @@ -1180,7 +1180,7 @@ void page_add_new_anon_rmap(struct page if (hpage_pincount_available(page)) atomic_set(compound_pincount_ptr(page), 0); - __inc_node_page_state(page, NR_ANON_THPS); + __inc_lruvec_page_state(page, NR_ANON_THPS); } else { /* Anon THP always mapped first with PMD */ VM_BUG_ON_PAGE(PageTransCompound(page), page); @@ -1286,7 +1286,7 @@ static void page_remove_anon_compound_rm if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) return; - __dec_node_page_state(page, NR_ANON_THPS); + __dec_lruvec_page_state(page, NR_ANON_THPS); if (TestClearPageDoubleMap(page)) { /* _
* [patch 095/131] mm: memcontrol: convert anon and file-thp to new mem_cgroup_charge() API 2020-06-03 22:55 incoming Andrew Morton ` (93 preceding siblings ...) 2020-06-03 23:02 ` [patch 094/131] mm: memcontrol: switch to native NR_ANON_THPS counter Andrew Morton @ 2020-06-03 23:02 ` Andrew Morton 2020-06-03 23:02 ` [patch 096/131] mm: memcontrol: drop unused try/commit/cancel charge API Andrew Morton ` (36 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:02 UTC (permalink / raw) To: akpm, alex.shi, bsingharora, cai, guro, hannes, hughd, iamjoonsoo.kim, kirill, linux-mm, mhocko, mm-commits, shakeelb, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: memcontrol: convert anon and file-thp to new mem_cgroup_charge() API With the page->mapping requirement gone from memcg, we can charge anon and file-thp pages in one single step, right after they're allocated. This removes two out of three API calls - especially the tricky commit step that needed to happen at just the right time between when the page is "set up" and when it's "published" - somewhat vague and fluid concepts that varied by page type. All we need is a freshly allocated page and a memcg context to charge. v2: prevent double charges on pre-allocated hugepages in khugepaged [hannes@cmpxchg.org: Fix crash - *hpage could be ERR_PTR instead of NULL] Link: http://lkml.kernel.org/r/20200512215813.GA487759@cmpxchg.org Link: http://lkml.kernel.org/r/20200508183105.225460-13-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Alex Shi <alex.shi@linux.alibaba.com> Cc: Hugh Dickins <hughd@google.com> Cc: "Kirill A. 
Shutemov" <kirill@shutemov.name> Cc: Michal Hocko <mhocko@suse.com> Cc: Roman Gushchin <guro@fb.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Balbir Singh <bsingharora@gmail.com> Cc: Qian Cai <cai@lca.pw> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/mm.h | 4 +--- kernel/events/uprobes.c | 11 +++-------- mm/filemap.c | 2 +- mm/huge_memory.c | 9 +++------ mm/khugepaged.c | 35 ++++++++++------------------------- mm/memory.c | 36 ++++++++++-------------------------- mm/migrate.c | 5 +---- mm/swapfile.c | 6 +----- mm/userfaultfd.c | 5 +---- 9 files changed, 31 insertions(+), 82 deletions(-) --- a/include/linux/mm.h~mm-memcontrol-convert-anon-and-file-thp-to-new-mem_cgroup_charge-api +++ a/include/linux/mm.h @@ -501,7 +501,6 @@ struct vm_fault { pte_t orig_pte; /* Value of PTE at the time of fault */ struct page *cow_page; /* Page handler may use for COW fault */ - struct mem_cgroup *memcg; /* Cgroup cow_page belongs to */ struct page *page; /* ->fault handlers should return a * page here, unless VM_FAULT_NOPAGE * is set (which is also implied by @@ -946,8 +945,7 @@ static inline pte_t maybe_mkwrite(pte_t return pte; } -vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg, - struct page *page); +vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct page *page); vm_fault_t finish_fault(struct vm_fault *vmf); vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf); #endif --- a/kernel/events/uprobes.c~mm-memcontrol-convert-anon-and-file-thp-to-new-mem_cgroup_charge-api +++ a/kernel/events/uprobes.c @@ -162,14 +162,13 @@ static int __replace_page(struct vm_area }; int err; struct mmu_notifier_range range; - struct mem_cgroup *memcg; mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm, addr, addr + PAGE_SIZE); if (new_page) { - err = mem_cgroup_try_charge(new_page, vma->vm_mm, GFP_KERNEL, - &memcg); + err = mem_cgroup_charge(new_page, vma->vm_mm, GFP_KERNEL, + false); if (err) return err; } @@ -179,16 +178,12 @@ 
static int __replace_page(struct vm_area mmu_notifier_invalidate_range_start(&range); err = -EAGAIN; - if (!page_vma_mapped_walk(&pvmw)) { - if (new_page) - mem_cgroup_cancel_charge(new_page, memcg); + if (!page_vma_mapped_walk(&pvmw)) goto unlock; - } VM_BUG_ON_PAGE(addr != pvmw.address, old_page); if (new_page) { get_page(new_page); - mem_cgroup_commit_charge(new_page, memcg, false); page_add_new_anon_rmap(new_page, vma, addr, false); lru_cache_add_active_or_unevictable(new_page, vma); } else --- a/mm/filemap.c~mm-memcontrol-convert-anon-and-file-thp-to-new-mem_cgroup_charge-api +++ a/mm/filemap.c @@ -2633,7 +2633,7 @@ void filemap_map_pages(struct vm_fault * if (vmf->pte) vmf->pte += xas.xa_index - last_pgoff; last_pgoff = xas.xa_index; - if (alloc_set_pte(vmf, NULL, page)) + if (alloc_set_pte(vmf, page)) goto unlock; unlock_page(page); goto next; --- a/mm/huge_memory.c~mm-memcontrol-convert-anon-and-file-thp-to-new-mem_cgroup_charge-api +++ a/mm/huge_memory.c @@ -587,19 +587,19 @@ static vm_fault_t __do_huge_pmd_anonymou struct page *page, gfp_t gfp) { struct vm_area_struct *vma = vmf->vma; - struct mem_cgroup *memcg; pgtable_t pgtable; unsigned long haddr = vmf->address & HPAGE_PMD_MASK; vm_fault_t ret = 0; VM_BUG_ON_PAGE(!PageCompound(page), page); - if (mem_cgroup_try_charge_delay(page, vma->vm_mm, gfp, &memcg)) { + if (mem_cgroup_charge(page, vma->vm_mm, gfp, false)) { put_page(page); count_vm_event(THP_FAULT_FALLBACK); count_vm_event(THP_FAULT_FALLBACK_CHARGE); return VM_FAULT_FALLBACK; } + cgroup_throttle_swaprate(page, gfp); pgtable = pte_alloc_one(vma->vm_mm); if (unlikely(!pgtable)) { @@ -630,7 +630,6 @@ static vm_fault_t __do_huge_pmd_anonymou vm_fault_t ret2; spin_unlock(vmf->ptl); - mem_cgroup_cancel_charge(page, memcg); put_page(page); pte_free(vma->vm_mm, pgtable); ret2 = handle_userfault(vmf, VM_UFFD_MISSING); @@ -640,7 +639,6 @@ static vm_fault_t __do_huge_pmd_anonymou entry = mk_huge_pmd(page, vma->vm_page_prot); entry = 
maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); - mem_cgroup_commit_charge(page, memcg, false); page_add_new_anon_rmap(page, vma, haddr, true); lru_cache_add_active_or_unevictable(page, vma); pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable); @@ -649,7 +647,7 @@ static vm_fault_t __do_huge_pmd_anonymou mm_inc_nr_ptes(vma->vm_mm); spin_unlock(vmf->ptl); count_vm_event(THP_FAULT_ALLOC); - count_memcg_events(memcg, THP_FAULT_ALLOC, 1); + count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC); } return 0; @@ -658,7 +656,6 @@ unlock_release: release: if (pgtable) pte_free(vma->vm_mm, pgtable); - mem_cgroup_cancel_charge(page, memcg); put_page(page); return ret; --- a/mm/khugepaged.c~mm-memcontrol-convert-anon-and-file-thp-to-new-mem_cgroup_charge-api +++ a/mm/khugepaged.c @@ -1037,7 +1037,6 @@ static void collapse_huge_page(struct mm struct page *new_page; spinlock_t *pmd_ptl, *pte_ptl; int isolated = 0, result = 0; - struct mem_cgroup *memcg; struct vm_area_struct *vma; struct mmu_notifier_range range; gfp_t gfp; @@ -1060,15 +1059,15 @@ static void collapse_huge_page(struct mm goto out_nolock; } - if (unlikely(mem_cgroup_try_charge(new_page, mm, gfp, &memcg))) { + if (unlikely(mem_cgroup_charge(new_page, mm, gfp, false))) { result = SCAN_CGROUP_CHARGE_FAIL; goto out_nolock; } + count_memcg_page_event(new_page, THP_COLLAPSE_ALLOC); down_read(&mm->mmap_sem); result = hugepage_vma_revalidate(mm, address, &vma); if (result) { - mem_cgroup_cancel_charge(new_page, memcg); up_read(&mm->mmap_sem); goto out_nolock; } @@ -1076,7 +1075,6 @@ static void collapse_huge_page(struct mm pmd = mm_find_pmd(mm, address); if (!pmd) { result = SCAN_PMD_NULL; - mem_cgroup_cancel_charge(new_page, memcg); up_read(&mm->mmap_sem); goto out_nolock; } @@ -1088,7 +1086,6 @@ static void collapse_huge_page(struct mm */ if (unmapped && !__collapse_huge_page_swapin(mm, vma, address, pmd, referenced)) { - mem_cgroup_cancel_charge(new_page, memcg); up_read(&mm->mmap_sem); goto out_nolock; } @@ -1175,9 
+1172,7 @@ static void collapse_huge_page(struct mm spin_lock(pmd_ptl); BUG_ON(!pmd_none(*pmd)); - mem_cgroup_commit_charge(new_page, memcg, false); page_add_new_anon_rmap(new_page, vma, address, true); - count_memcg_events(memcg, THP_COLLAPSE_ALLOC, 1); lru_cache_add_active_or_unevictable(new_page, vma); pgtable_trans_huge_deposit(mm, pmd, pgtable); set_pmd_at(mm, address, pmd, _pmd); @@ -1191,10 +1186,11 @@ static void collapse_huge_page(struct mm out_up_write: up_write(&mm->mmap_sem); out_nolock: + if (!IS_ERR_OR_NULL(*hpage)) + mem_cgroup_uncharge(*hpage); trace_mm_collapse_huge_page(mm, isolated, result); return; out: - mem_cgroup_cancel_charge(new_page, memcg); goto out_up_write; } @@ -1618,7 +1614,6 @@ static void collapse_file(struct mm_stru struct address_space *mapping = file->f_mapping; gfp_t gfp; struct page *new_page; - struct mem_cgroup *memcg; pgoff_t index, end = start + HPAGE_PMD_NR; LIST_HEAD(pagelist); XA_STATE_ORDER(xas, &mapping->i_pages, start, HPAGE_PMD_ORDER); @@ -1637,10 +1632,11 @@ static void collapse_file(struct mm_stru goto out; } - if (unlikely(mem_cgroup_try_charge(new_page, mm, gfp, &memcg))) { + if (unlikely(mem_cgroup_charge(new_page, mm, gfp, false))) { result = SCAN_CGROUP_CHARGE_FAIL; goto out; } + count_memcg_page_event(new_page, THP_COLLAPSE_ALLOC); /* This will be less messy when we use multi-index entries */ do { @@ -1650,7 +1646,6 @@ static void collapse_file(struct mm_stru break; xas_unlock_irq(&xas); if (!xas_nomem(&xas, GFP_KERNEL)) { - mem_cgroup_cancel_charge(new_page, memcg); result = SCAN_FAIL; goto out; } @@ -1844,18 +1839,9 @@ out_unlock: } if (nr_none) { - struct lruvec *lruvec; - /* - * XXX: We have started try_charge and pinned the - * memcg, but the page isn't committed yet so we - * cannot use mod_lruvec_page_state(). This hackery - * will be cleaned up when remove the page->mapping - * dependency from memcg and fully charge above. 
- */ - lruvec = mem_cgroup_lruvec(memcg, page_pgdat(new_page)); - __mod_lruvec_state(lruvec, NR_FILE_PAGES, nr_none); + __mod_lruvec_page_state(new_page, NR_FILE_PAGES, nr_none); if (is_shmem) - __mod_lruvec_state(lruvec, NR_SHMEM, nr_none); + __mod_lruvec_page_state(new_page, NR_SHMEM, nr_none); } xa_locked: @@ -1893,7 +1879,6 @@ xa_unlocked: SetPageUptodate(new_page); page_ref_add(new_page, HPAGE_PMD_NR - 1); - mem_cgroup_commit_charge(new_page, memcg, false); if (is_shmem) { set_page_dirty(new_page); @@ -1901,7 +1886,6 @@ xa_unlocked: } else { lru_cache_add_file(new_page); } - count_memcg_events(memcg, THP_COLLAPSE_ALLOC, 1); /* * Remove pte page tables, so we can re-fault the page as huge. @@ -1948,13 +1932,14 @@ xa_unlocked: VM_BUG_ON(nr_none); xas_unlock_irq(&xas); - mem_cgroup_cancel_charge(new_page, memcg); new_page->mapping = NULL; } unlock_page(new_page); out: VM_BUG_ON(!list_empty(&pagelist)); + if (!IS_ERR_OR_NULL(*hpage)) + mem_cgroup_uncharge(*hpage); /* TODO: tracepoints */ } --- a/mm/memory.c~mm-memcontrol-convert-anon-and-file-thp-to-new-mem_cgroup_charge-api +++ a/mm/memory.c @@ -2645,7 +2645,6 @@ static vm_fault_t wp_page_copy(struct vm struct page *new_page = NULL; pte_t entry; int page_copied = 0; - struct mem_cgroup *memcg; struct mmu_notifier_range range; if (unlikely(anon_vma_prepare(vma))) @@ -2676,8 +2675,9 @@ static vm_fault_t wp_page_copy(struct vm } } - if (mem_cgroup_try_charge_delay(new_page, mm, GFP_KERNEL, &memcg)) + if (mem_cgroup_charge(new_page, mm, GFP_KERNEL, false)) goto oom_free_new; + cgroup_throttle_swaprate(new_page, GFP_KERNEL); __SetPageUptodate(new_page); @@ -2710,7 +2710,6 @@ static vm_fault_t wp_page_copy(struct vm * thread doing COW. 
*/ ptep_clear_flush_notify(vma, vmf->address, vmf->pte); - mem_cgroup_commit_charge(new_page, memcg, false); page_add_new_anon_rmap(new_page, vma, vmf->address, false); lru_cache_add_active_or_unevictable(new_page, vma); /* @@ -2749,8 +2748,6 @@ static vm_fault_t wp_page_copy(struct vm /* Free the old page.. */ new_page = old_page; page_copied = 1; - } else { - mem_cgroup_cancel_charge(new_page, memcg); } if (new_page) @@ -3088,7 +3085,6 @@ vm_fault_t do_swap_page(struct vm_fault { struct vm_area_struct *vma = vmf->vma; struct page *page = NULL, *swapcache; - struct mem_cgroup *memcg; swp_entry_t entry; pte_t pte; int locked; @@ -3193,10 +3189,11 @@ vm_fault_t do_swap_page(struct vm_fault goto out_page; } - if (mem_cgroup_try_charge_delay(page, vma->vm_mm, GFP_KERNEL, &memcg)) { + if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL, true)) { ret = VM_FAULT_OOM; goto out_page; } + cgroup_throttle_swaprate(page, GFP_KERNEL); /* * Back out if somebody else already faulted in this pte. @@ -3243,11 +3240,9 @@ vm_fault_t do_swap_page(struct vm_fault /* ksm created a completely new copy */ if (unlikely(page != swapcache && swapcache)) { - mem_cgroup_commit_charge(page, memcg, false); page_add_new_anon_rmap(page, vma, vmf->address, false); lru_cache_add_active_or_unevictable(page, vma); } else { - mem_cgroup_commit_charge(page, memcg, true); do_page_add_anon_rmap(page, vma, vmf->address, exclusive); activate_page(page); } @@ -3284,7 +3279,6 @@ unlock: out: return ret; out_nomap: - mem_cgroup_cancel_charge(page, memcg); pte_unmap_unlock(vmf->pte, vmf->ptl); out_page: unlock_page(page); @@ -3305,7 +3299,6 @@ out_release: static vm_fault_t do_anonymous_page(struct vm_fault *vmf) { struct vm_area_struct *vma = vmf->vma; - struct mem_cgroup *memcg; struct page *page; vm_fault_t ret = 0; pte_t entry; @@ -3358,8 +3351,9 @@ static vm_fault_t do_anonymous_page(stru if (!page) goto oom; - if (mem_cgroup_try_charge_delay(page, vma->vm_mm, GFP_KERNEL, &memcg)) + if 
(mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL, false)) goto oom_free_page; + cgroup_throttle_swaprate(page, GFP_KERNEL); /* * The memory barrier inside __SetPageUptodate makes sure that @@ -3384,13 +3378,11 @@ static vm_fault_t do_anonymous_page(stru /* Deliver the page fault to userland, check inside PT lock */ if (userfaultfd_missing(vma)) { pte_unmap_unlock(vmf->pte, vmf->ptl); - mem_cgroup_cancel_charge(page, memcg); put_page(page); return handle_userfault(vmf, VM_UFFD_MISSING); } inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES); - mem_cgroup_commit_charge(page, memcg, false); page_add_new_anon_rmap(page, vma, vmf->address, false); lru_cache_add_active_or_unevictable(page, vma); setpte: @@ -3402,7 +3394,6 @@ unlock: pte_unmap_unlock(vmf->pte, vmf->ptl); return ret; release: - mem_cgroup_cancel_charge(page, memcg); put_page(page); goto unlock; oom_free_page: @@ -3607,7 +3598,6 @@ static vm_fault_t do_set_pmd(struct vm_f * mapping. If needed, the fucntion allocates page table or use pre-allocated. * * @vmf: fault environment - * @memcg: memcg to charge page (only for private mappings) * @page: page to map * * Caller must take care of unlocking vmf->ptl, if vmf->pte is non-NULL on @@ -3618,8 +3608,7 @@ static vm_fault_t do_set_pmd(struct vm_f * * Return: %0 on success, %VM_FAULT_ code in case of error. */ -vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg, - struct page *page) +vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct page *page) { struct vm_area_struct *vma = vmf->vma; bool write = vmf->flags & FAULT_FLAG_WRITE; @@ -3627,9 +3616,6 @@ vm_fault_t alloc_set_pte(struct vm_fault vm_fault_t ret; if (pmd_none(*vmf->pmd) && PageTransCompound(page)) { - /* THP on COW? 
*/ - VM_BUG_ON_PAGE(memcg, page); - ret = do_set_pmd(vmf, page); if (ret != VM_FAULT_FALLBACK) return ret; @@ -3652,7 +3638,6 @@ vm_fault_t alloc_set_pte(struct vm_fault /* copy-on-write page */ if (write && !(vma->vm_flags & VM_SHARED)) { inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES); - mem_cgroup_commit_charge(page, memcg, false); page_add_new_anon_rmap(page, vma, vmf->address, false); lru_cache_add_active_or_unevictable(page, vma); } else { @@ -3702,7 +3687,7 @@ vm_fault_t finish_fault(struct vm_fault if (!(vmf->vma->vm_flags & VM_SHARED)) ret = check_stable_address_space(vmf->vma->vm_mm); if (!ret) - ret = alloc_set_pte(vmf, vmf->memcg, page); + ret = alloc_set_pte(vmf, page); if (vmf->pte) pte_unmap_unlock(vmf->pte, vmf->ptl); return ret; @@ -3862,11 +3847,11 @@ static vm_fault_t do_cow_fault(struct vm if (!vmf->cow_page) return VM_FAULT_OOM; - if (mem_cgroup_try_charge_delay(vmf->cow_page, vma->vm_mm, - GFP_KERNEL, &vmf->memcg)) { + if (mem_cgroup_charge(vmf->cow_page, vma->vm_mm, GFP_KERNEL, false)) { put_page(vmf->cow_page); return VM_FAULT_OOM; } + cgroup_throttle_swaprate(vmf->cow_page, GFP_KERNEL); ret = __do_fault(vmf); if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY))) @@ -3884,7 +3869,6 @@ static vm_fault_t do_cow_fault(struct vm goto uncharge_out; return ret; uncharge_out: - mem_cgroup_cancel_charge(vmf->cow_page, vmf->memcg); put_page(vmf->cow_page); return ret; } --- a/mm/migrate.c~mm-memcontrol-convert-anon-and-file-thp-to-new-mem_cgroup_charge-api +++ a/mm/migrate.c @@ -2740,7 +2740,6 @@ static void migrate_vma_insert_page(stru { struct vm_area_struct *vma = migrate->vma; struct mm_struct *mm = vma->vm_mm; - struct mem_cgroup *memcg; bool flush = false; spinlock_t *ptl; pte_t entry; @@ -2787,7 +2786,7 @@ static void migrate_vma_insert_page(stru if (unlikely(anon_vma_prepare(vma))) goto abort; - if (mem_cgroup_try_charge(page, vma->vm_mm, GFP_KERNEL, &memcg)) + if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL, false)) goto 
abort; /* @@ -2832,7 +2831,6 @@ static void migrate_vma_insert_page(stru goto unlock_abort; inc_mm_counter(mm, MM_ANONPAGES); - mem_cgroup_commit_charge(page, memcg, false); page_add_new_anon_rmap(page, vma, addr, false); if (!is_zone_device_page(page)) lru_cache_add_active_or_unevictable(page, vma); @@ -2855,7 +2853,6 @@ static void migrate_vma_insert_page(stru unlock_abort: pte_unmap_unlock(ptep, ptl); - mem_cgroup_cancel_charge(page, memcg); abort: *src &= ~MIGRATE_PFN_MIGRATE; } --- a/mm/swapfile.c~mm-memcontrol-convert-anon-and-file-thp-to-new-mem_cgroup_charge-api +++ a/mm/swapfile.c @@ -1892,7 +1892,6 @@ static int unuse_pte(struct vm_area_stru unsigned long addr, swp_entry_t entry, struct page *page) { struct page *swapcache; - struct mem_cgroup *memcg; spinlock_t *ptl; pte_t *pte; int ret = 1; @@ -1902,14 +1901,13 @@ static int unuse_pte(struct vm_area_stru if (unlikely(!page)) return -ENOMEM; - if (mem_cgroup_try_charge(page, vma->vm_mm, GFP_KERNEL, &memcg)) { + if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL, true)) { ret = -ENOMEM; goto out_nolock; } pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl); if (unlikely(!pte_same_as_swp(*pte, swp_entry_to_pte(entry)))) { - mem_cgroup_cancel_charge(page, memcg); ret = 0; goto out; } @@ -1920,10 +1918,8 @@ static int unuse_pte(struct vm_area_stru set_pte_at(vma->vm_mm, addr, pte, pte_mkold(mk_pte(page, vma->vm_page_prot))); if (page == swapcache) { - mem_cgroup_commit_charge(page, memcg, true); page_add_anon_rmap(page, vma, addr, false); } else { /* ksm created a completely new copy */ - mem_cgroup_commit_charge(page, memcg, false); page_add_new_anon_rmap(page, vma, addr, false); lru_cache_add_active_or_unevictable(page, vma); } --- a/mm/userfaultfd.c~mm-memcontrol-convert-anon-and-file-thp-to-new-mem_cgroup_charge-api +++ a/mm/userfaultfd.c @@ -56,7 +56,6 @@ static int mcopy_atomic_pte(struct mm_st struct page **pagep, bool wp_copy) { - struct mem_cgroup *memcg; pte_t _dst_pte, *dst_pte; spinlock_t 
*ptl; void *page_kaddr; @@ -97,7 +96,7 @@ static int mcopy_atomic_pte(struct mm_st __SetPageUptodate(page); ret = -ENOMEM; - if (mem_cgroup_try_charge(page, dst_mm, GFP_KERNEL, &memcg)) + if (mem_cgroup_charge(page, dst_mm, GFP_KERNEL, false)) goto out_release; _dst_pte = pte_mkdirty(mk_pte(page, dst_vma->vm_page_prot)); @@ -123,7 +122,6 @@ static int mcopy_atomic_pte(struct mm_st goto out_release_uncharge_unlock; inc_mm_counter(dst_mm, MM_ANONPAGES); - mem_cgroup_commit_charge(page, memcg, false); page_add_new_anon_rmap(page, dst_vma, dst_addr, false); lru_cache_add_active_or_unevictable(page, dst_vma); @@ -138,7 +136,6 @@ out: return ret; out_release_uncharge_unlock: pte_unmap_unlock(dst_pte, ptl); - mem_cgroup_cancel_charge(page, memcg); out_release: put_page(page); goto out; _ ^ permalink raw reply [flat|nested] 349+ messages in thread
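The conversion in patch 095 above is mechanical at every call site: the old three-step transaction (try_charge into a local memcg pointer, then commit or cancel) becomes a single mem_cgroup_charge() call, after which error paths only need put_page(). A standalone userspace sketch of the two calling conventions, using hypothetical stub types rather than the real kernel structures:

```c
#include <assert.h>
#include <stddef.h>

/* Stub types standing in for the kernel's structures; illustration only. */
struct page { int charged; };
struct mem_cgroup { int reserved; };

static struct mem_cgroup the_memcg;

/* Old API shape: reserve a charge, then either commit or cancel it. */
static int try_charge_old(struct page *page, struct mem_cgroup **memcgp)
{
	the_memcg.reserved++;		/* charge reserved against the memcg */
	*memcgp = &the_memcg;
	return 0;
}
static void commit_charge_old(struct page *page, struct mem_cgroup *memcg)
{
	memcg->reserved--;
	page->charged = 1;		/* page->mem_cgroup linkage finalized */
}
static void cancel_charge_old(struct page *page, struct mem_cgroup *memcg)
{
	memcg->reserved--;		/* reservation released, page untouched */
}

/* New API shape: one call that tries and commits in one step; a caller
 * that fails later just frees the page, no explicit unwinding. */
static int charge_new(struct page *page)
{
	page->charged = 1;
	return 0;
}
```

The payoff visible throughout the diff is that the `struct mem_cgroup *memcg` locals and the uncharge/cancel labels disappear from every fault and migration path.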
* [patch 096/131] mm: memcontrol: drop unused try/commit/cancel charge API 2020-06-03 22:55 incoming Andrew Morton ` (94 preceding siblings ...) 2020-06-03 23:02 ` [patch 095/131] mm: memcontrol: convert anon and file-thp to new mem_cgroup_charge() API Andrew Morton @ 2020-06-03 23:02 ` Andrew Morton 2020-06-03 23:02 ` [patch 097/131] mm: memcontrol: prepare swap controller setup for integration Andrew Morton ` (35 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:02 UTC (permalink / raw) To: akpm, alex.shi, arnd, bsingharora, guro, hannes, hughd, iamjoonsoo.kim, kirill, linux-mm, mhocko, mm-commits, shakeelb, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: memcontrol: drop unused try/commit/cancel charge API There are no more users. RIP in peace. [arnd@arndb.de: fix an unused-function warning] Link: http://lkml.kernel.org/r/20200528095640.151454-1-arnd@arndb.de Link: http://lkml.kernel.org/r/20200508183105.225460-14-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Alex Shi <alex.shi@linux.alibaba.com> Cc: Hugh Dickins <hughd@google.com> Cc: "Kirill A. 
Shutemov" <kirill@shutemov.name> Cc: Michal Hocko <mhocko@suse.com> Cc: Roman Gushchin <guro@fb.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/memcontrol.h | 36 --------- mm/memcontrol.c | 128 ++++------------------------------- 2 files changed, 17 insertions(+), 147 deletions(-) --- a/include/linux/memcontrol.h~mm-memcontrol-drop-unused-try-commit-cancel-charge-api +++ a/include/linux/memcontrol.h @@ -355,14 +355,6 @@ static inline unsigned long mem_cgroup_p enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root, struct mem_cgroup *memcg); -int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm, - gfp_t gfp_mask, struct mem_cgroup **memcgp); -int mem_cgroup_try_charge_delay(struct page *page, struct mm_struct *mm, - gfp_t gfp_mask, struct mem_cgroup **memcgp); -void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg, - bool lrucare); -void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg); - int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask, bool lrucare); @@ -846,34 +838,6 @@ static inline enum mem_cgroup_protection return MEMCG_PROT_NONE; } -static inline int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm, - gfp_t gfp_mask, - struct mem_cgroup **memcgp) -{ - *memcgp = NULL; - return 0; -} - -static inline int mem_cgroup_try_charge_delay(struct page *page, - struct mm_struct *mm, - gfp_t gfp_mask, - struct mem_cgroup **memcgp) -{ - *memcgp = NULL; - return 0; -} - -static inline void mem_cgroup_commit_charge(struct page *page, - struct mem_cgroup *memcg, - bool lrucare) -{ -} - -static inline void mem_cgroup_cancel_charge(struct page *page, - struct mem_cgroup *memcg) -{ -} - static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask, bool lrucare) { --- 
a/mm/memcontrol.c~mm-memcontrol-drop-unused-try-commit-cancel-charge-api +++ a/mm/memcontrol.c @@ -2641,6 +2641,7 @@ done_restock: return 0; } +#if defined(CONFIG_MEMCG_KMEM) || defined(CONFIG_MMU) static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages) { if (mem_cgroup_is_root(memcg)) @@ -2652,6 +2653,7 @@ static void cancel_charge(struct mem_cgr css_put_many(&memcg->css, nr_pages); } +#endif static void lock_page_lru(struct page *page, int *isolated) { @@ -6499,29 +6501,26 @@ out: } /** - * mem_cgroup_try_charge - try charging a page + * mem_cgroup_charge - charge a newly allocated page to a cgroup * @page: page to charge * @mm: mm context of the victim * @gfp_mask: reclaim mode - * @memcgp: charged memcg return + * @lrucare: page might be on the LRU already * * Try to charge @page to the memcg that @mm belongs to, reclaiming * pages according to @gfp_mask if necessary. * - * Returns 0 on success, with *@memcgp pointing to the charged memcg. - * Otherwise, an error code is returned. - * - * After page->mapping has been set up, the caller must finalize the - * charge with mem_cgroup_commit_charge(). Or abort the transaction - * with mem_cgroup_cancel_charge() in case page instantiation fails. + * Returns 0 on success. Otherwise, an error code is returned. 
*/ -int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm, - gfp_t gfp_mask, struct mem_cgroup **memcgp) +int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask, + bool lrucare) { unsigned int nr_pages = hpage_nr_pages(page); struct mem_cgroup *memcg = NULL; int ret = 0; + VM_BUG_ON_PAGE(PageLRU(page) && !lrucare, page); + if (mem_cgroup_disabled()) goto out; @@ -6553,56 +6552,8 @@ int mem_cgroup_try_charge(struct page *p memcg = get_mem_cgroup_from_mm(mm); ret = try_charge(memcg, gfp_mask, nr_pages); - - css_put(&memcg->css); -out: - *memcgp = memcg; - return ret; -} - -int mem_cgroup_try_charge_delay(struct page *page, struct mm_struct *mm, - gfp_t gfp_mask, struct mem_cgroup **memcgp) -{ - int ret; - - ret = mem_cgroup_try_charge(page, mm, gfp_mask, memcgp); - if (*memcgp) - cgroup_throttle_swaprate(page, gfp_mask); - return ret; -} - -/** - * mem_cgroup_commit_charge - commit a page charge - * @page: page to charge - * @memcg: memcg to charge the page to - * @lrucare: page might be on LRU already - * - * Finalize a charge transaction started by mem_cgroup_try_charge(), - * after page->mapping has been set up. This must happen atomically - * as part of the page instantiation, i.e. under the page table lock - * for anonymous pages, under the page lock for page and swap cache. - * - * In addition, the page must not be on the LRU during the commit, to - * prevent racing with task migration. If it might be, use @lrucare. - * - * Use mem_cgroup_cancel_charge() to cancel the transaction instead. - */ -void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg, - bool lrucare) -{ - unsigned int nr_pages = hpage_nr_pages(page); - - VM_BUG_ON_PAGE(PageLRU(page) && !lrucare, page); - - if (mem_cgroup_disabled()) - return; - /* - * Swap faults will attempt to charge the same page multiple - * times. But reuse_swap_page() might have removed the page - * from swapcache already, so we can't check PageSwapCache(). 
- */ - if (!memcg) - return; + if (ret) + goto out_put; commit_charge(page, memcg, lrucare); @@ -6620,55 +6571,11 @@ void mem_cgroup_commit_charge(struct pag */ mem_cgroup_uncharge_swap(entry, nr_pages); } -} -/** - * mem_cgroup_cancel_charge - cancel a page charge - * @page: page to charge - * @memcg: memcg to charge the page to - * - * Cancel a charge transaction started by mem_cgroup_try_charge(). - */ -void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg) -{ - unsigned int nr_pages = hpage_nr_pages(page); - - if (mem_cgroup_disabled()) - return; - /* - * Swap faults will attempt to charge the same page multiple - * times. But reuse_swap_page() might have removed the page - * from swapcache already, so we can't check PageSwapCache(). - */ - if (!memcg) - return; - - cancel_charge(memcg, nr_pages); -} - -/** - * mem_cgroup_charge - charge a newly allocated page to a cgroup - * @page: page to charge - * @mm: mm context of the victim - * @gfp_mask: reclaim mode - * @lrucare: page might be on the LRU already - * - * Try to charge @page to the memcg that @mm belongs to, reclaiming - * pages according to @gfp_mask if necessary. - * - * Returns 0 on success. Otherwise, an error code is returned. - */ -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask, - bool lrucare) -{ - struct mem_cgroup *memcg; - int ret; - - ret = mem_cgroup_try_charge(page, mm, gfp_mask, &memcg); - if (ret) - return ret; - mem_cgroup_commit_charge(page, memcg, lrucare); - return 0; +out_put: + css_put(&memcg->css); +out: + return ret; } struct uncharge_gather { @@ -6773,8 +6680,7 @@ static void uncharge_list(struct list_he * mem_cgroup_uncharge - uncharge a page * @page: page to uncharge * - * Uncharge a page previously charged with mem_cgroup_try_charge() and - * mem_cgroup_commit_charge(). + * Uncharge a page previously charged with mem_cgroup_charge(). 
*/ void mem_cgroup_uncharge(struct page *page) { @@ -6797,7 +6703,7 @@ void mem_cgroup_uncharge(struct page *pa * @page_list: list of pages to uncharge * * Uncharge a list of pages previously charged with - * mem_cgroup_try_charge() and mem_cgroup_commit_charge(). + * mem_cgroup_charge(). */ void mem_cgroup_uncharge_list(struct list_head *page_list) { _ ^ permalink raw reply [flat|nested] 349+ messages in thread
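With the transactional API gone, the folded mem_cgroup_charge() in the diff above pairs try_charge() with commit_charge() internally and unwinds the css reference on failure through a goto label. A minimal userspace model of that error-unwind shape, with a toy counter standing in for the real css refcounting:

```c
#include <assert.h>
#include <errno.h>

/* Toy reference counter standing in for css_get()/css_put(). */
static int css_refs;

static int try_charge_model(int should_fail)
{
	return should_fail ? -ENOMEM : 0;
}

/* Mirrors the folded mem_cgroup_charge(): the css reference taken up
 * front is dropped on both the success and the failure path, because
 * the success path falls through into the out_put label. */
static int charge_model(int should_fail)
{
	int ret;

	css_refs++;			/* get_mem_cgroup_from_mm() */
	ret = try_charge_model(should_fail);
	if (ret)
		goto out_put;

	/* commit_charge() and statistics updates would run here */
out_put:
	css_refs--;			/* css_put(&memcg->css) */
	return ret;
}
```

Either way the caller sees only 0 or an errno-style code, never a half-finished transaction.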
* [patch 097/131] mm: memcontrol: prepare swap controller setup for integration 2020-06-03 22:55 incoming Andrew Morton ` (95 preceding siblings ...) 2020-06-03 23:02 ` [patch 096/131] mm: memcontrol: drop unused try/commit/cancel charge API Andrew Morton @ 2020-06-03 23:02 ` Andrew Morton 2020-06-03 23:02 ` [patch 098/131] mm: memcontrol: make swap tracking an integral part of memory control Andrew Morton ` (34 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:02 UTC (permalink / raw) To: akpm, alex.shi, bsingharora, guro, hannes, hughd, iamjoonsoo.kim, kirill, linux-mm, mhocko, mm-commits, shakeelb, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: memcontrol: prepare swap controller setup for integration A few cleanups to streamline the swap controller setup: - Replace the do_swap_account flag with cgroup_memory_noswap. This brings it in line with other functionality that is usually available unless explicitly opted out of - nosocket, nokmem. - Remove the really_do_swap_account flag that stores the boot option and is later used to switch the do_swap_account. It's not clear why this indirection is/was necessary. Use do_swap_account directly. - Minor coding style polishing Link: http://lkml.kernel.org/r/20200508183105.225460-15-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Alex Shi <alex.shi@linux.alibaba.com> Cc: Hugh Dickins <hughd@google.com> Cc: "Kirill A. 
Shutemov" <kirill@shutemov.name> Cc: Michal Hocko <mhocko@suse.com> Cc: Roman Gushchin <guro@fb.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/memcontrol.h | 2 - mm/memcontrol.c | 59 ++++++++++++++++------------------- mm/swap_cgroup.c | 4 +- 3 files changed, 31 insertions(+), 34 deletions(-) --- a/include/linux/memcontrol.h~mm-memcontrol-prepare-swap-controller-setup-for-integration +++ a/include/linux/memcontrol.h @@ -558,7 +558,7 @@ struct mem_cgroup *mem_cgroup_get_oom_gr void mem_cgroup_print_oom_group(struct mem_cgroup *memcg); #ifdef CONFIG_MEMCG_SWAP -extern int do_swap_account; +extern bool cgroup_memory_noswap; #endif struct mem_cgroup *lock_page_memcg(struct page *page); --- a/mm/memcontrol.c~mm-memcontrol-prepare-swap-controller-setup-for-integration +++ a/mm/memcontrol.c @@ -83,10 +83,14 @@ static bool cgroup_memory_nokmem; /* Whether the swap controller is active */ #ifdef CONFIG_MEMCG_SWAP -int do_swap_account __read_mostly; +#ifdef CONFIG_MEMCG_SWAP_ENABLED +bool cgroup_memory_noswap __read_mostly; #else -#define do_swap_account 0 -#endif +bool cgroup_memory_noswap __read_mostly = 1; +#endif /* CONFIG_MEMCG_SWAP_ENABLED */ +#else +#define cgroup_memory_noswap 1 +#endif /* CONFIG_MEMCG_SWAP */ #ifdef CONFIG_CGROUP_WRITEBACK static DECLARE_WAIT_QUEUE_HEAD(memcg_cgwb_frn_waitq); @@ -95,7 +99,7 @@ static DECLARE_WAIT_QUEUE_HEAD(memcg_cgw /* Whether legacy memory+swap accounting is active */ static bool do_memsw_account(void) { - return !cgroup_subsys_on_dfl(memory_cgrp_subsys) && do_swap_account; + return !cgroup_subsys_on_dfl(memory_cgrp_subsys) && !cgroup_memory_noswap; } #define THRESHOLDS_EVENTS_TARGET 128 @@ -6528,18 +6532,19 @@ int mem_cgroup_charge(struct page *page, /* * Every swap fault against a single page tries to charge the * page, bail as early as possible. shmem_unuse() encounters - * already charged pages, too. 
The USED bit is protected by - * the page lock, which serializes swap cache removal, which + * already charged pages, too. page->mem_cgroup is protected + * by the page lock, which serializes swap cache removal, which * in turn serializes uncharging. */ VM_BUG_ON_PAGE(!PageLocked(page), page); if (compound_head(page)->mem_cgroup) goto out; - if (do_swap_account) { + if (!cgroup_memory_noswap) { swp_entry_t ent = { .val = page_private(page), }; - unsigned short id = lookup_swap_cgroup_id(ent); + unsigned short id; + id = lookup_swap_cgroup_id(ent); rcu_read_lock(); memcg = mem_cgroup_from_id(id); if (memcg && !css_tryget_online(&memcg->css)) @@ -7012,7 +7017,7 @@ int mem_cgroup_try_charge_swap(struct pa struct mem_cgroup *memcg; unsigned short oldid; - if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) || !do_swap_account) + if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) || cgroup_memory_noswap) return 0; memcg = page->mem_cgroup; @@ -7056,7 +7061,7 @@ void mem_cgroup_uncharge_swap(swp_entry_ struct mem_cgroup *memcg; unsigned short id; - if (!do_swap_account) + if (cgroup_memory_noswap) return; id = swap_cgroup_record(entry, 0, nr_pages); @@ -7079,7 +7084,7 @@ long mem_cgroup_get_nr_swap_pages(struct { long nr_swap_pages = get_nr_swap_pages(); - if (!do_swap_account || !cgroup_subsys_on_dfl(memory_cgrp_subsys)) + if (cgroup_memory_noswap || !cgroup_subsys_on_dfl(memory_cgrp_subsys)) return nr_swap_pages; for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) nr_swap_pages = min_t(long, nr_swap_pages, @@ -7096,7 +7101,7 @@ bool mem_cgroup_swap_full(struct page *p if (vm_swap_full()) return true; - if (!do_swap_account || !cgroup_subsys_on_dfl(memory_cgrp_subsys)) + if (cgroup_memory_noswap || !cgroup_subsys_on_dfl(memory_cgrp_subsys)) return false; memcg = page->mem_cgroup; @@ -7114,22 +7119,15 @@ bool mem_cgroup_swap_full(struct page *p return false; } -/* for remember boot option*/ -#ifdef CONFIG_MEMCG_SWAP_ENABLED -static int really_do_swap_account 
__initdata = 1; -#else -static int really_do_swap_account __initdata; -#endif - -static int __init enable_swap_account(char *s) +static int __init setup_swap_account(char *s) { if (!strcmp(s, "1")) - really_do_swap_account = 1; + cgroup_memory_noswap = 0; else if (!strcmp(s, "0")) - really_do_swap_account = 0; + cgroup_memory_noswap = 1; return 1; } -__setup("swapaccount=", enable_swap_account); +__setup("swapaccount=", setup_swap_account); static u64 swap_current_read(struct cgroup_subsys_state *css, struct cftype *cft) @@ -7226,7 +7224,7 @@ static struct cftype swap_files[] = { { } /* terminate */ }; -static struct cftype memsw_cgroup_files[] = { +static struct cftype memsw_files[] = { { .name = "memsw.usage_in_bytes", .private = MEMFILE_PRIVATE(_MEMSWAP, RES_USAGE), @@ -7255,13 +7253,12 @@ static struct cftype memsw_cgroup_files[ static int __init mem_cgroup_swap_init(void) { - if (!mem_cgroup_disabled() && really_do_swap_account) { - do_swap_account = 1; - WARN_ON(cgroup_add_dfl_cftypes(&memory_cgrp_subsys, - swap_files)); - WARN_ON(cgroup_add_legacy_cftypes(&memory_cgrp_subsys, - memsw_cgroup_files)); - } + if (mem_cgroup_disabled() || cgroup_memory_noswap) + return 0; + + WARN_ON(cgroup_add_dfl_cftypes(&memory_cgrp_subsys, swap_files)); + WARN_ON(cgroup_add_legacy_cftypes(&memory_cgrp_subsys, memsw_files)); + return 0; } subsys_initcall(mem_cgroup_swap_init); --- a/mm/swap_cgroup.c~mm-memcontrol-prepare-swap-controller-setup-for-integration +++ a/mm/swap_cgroup.c @@ -171,7 +171,7 @@ int swap_cgroup_swapon(int type, unsigne unsigned long length; struct swap_cgroup_ctrl *ctrl; - if (!do_swap_account) + if (cgroup_memory_noswap) return 0; length = DIV_ROUND_UP(max_pages, SC_PER_PAGE); @@ -209,7 +209,7 @@ void swap_cgroup_swapoff(int type) unsigned long i, length; struct swap_cgroup_ctrl *ctrl; - if (!do_swap_account) + if (cgroup_memory_noswap) return; mutex_lock(&swap_cgroup_mutex); _ ^ permalink raw reply [flat|nested] 349+ messages in thread
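The swapaccount= handling in patch 097 above inverts the flag's sense: the boot option still accepts 0/1, but the handler now sets or clears cgroup_memory_noswap directly instead of going through the really_do_swap_account indirection. A userspace model of the parser (names suffixed _model are hypothetical; the real handler is registered with __setup()):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

static bool noswap_model;	/* stands in for cgroup_memory_noswap */

/* Model of setup_swap_account(): "1" enables accounting (clears the
 * negative flag), "0" disables it; anything else is ignored. */
static int setup_swap_account_model(const char *s)
{
	if (!strcmp(s, "1"))
		noswap_model = false;
	else if (!strcmp(s, "0"))
		noswap_model = true;
	return 1;	/* __setup() handlers return 1 to consume the option */
}
```

The inverted polarity is what lets the flag live in Kconfig-independent code: a build without CONFIG_MEMCG_SWAP simply defines cgroup_memory_noswap to 1.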
* [patch 098/131] mm: memcontrol: make swap tracking an integral part of memory control 2020-06-03 22:55 incoming Andrew Morton ` (96 preceding siblings ...) 2020-06-03 23:02 ` [patch 097/131] mm: memcontrol: prepare swap controller setup for integration Andrew Morton @ 2020-06-03 23:02 ` Andrew Morton 2020-06-03 23:02 ` [patch 099/131] mm: memcontrol: charge swapin pages on instantiation Andrew Morton ` (33 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:02 UTC (permalink / raw) To: akpm, alex.shi, bsingharora, guro, hannes, hughd, iamjoonsoo.kim, kirill, linux-mm, mhocko, mm-commits, naresh.kamboju, shakeelb, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: memcontrol: make swap tracking an integral part of memory control Without swap page tracking, users that are otherwise memory controlled can easily escape their containment and allocate significant amounts of memory that they're not being charged for. That's because swap does readahead, but without the cgroup records of who owned the page at swapout, readahead pages don't get charged until somebody actually faults them into their page table and we can identify an owner task. This can be maliciously exploited with MADV_WILLNEED, which triggers arbitrary readahead allocations without charging the pages. Make swap page tracking an integral part of memcg and remove the Kconfig options. In the first place, it was only made configurable to allow users to save some memory. But the overhead of tracking cgroup ownership per swap page is minimal - 2 bytes per page, or 512k per 1G of swap, or 0.04%. Saving that at the expense of broken containment semantics is not something we should present as a coequal option. The swapaccount=0 boot option will continue to exist, and it will eliminate the page_counter overhead and hide the swap control files, but it won't disable swap slot ownership tracking. 
This patch makes sure we always have the cgroup records at swapin time; the next patch will fix the actual bug by charging readahead swap pages at swapin time rather than at fault time. v2: fix double swap charge bug in cgroup1/cgroup2 code gating [hannes@cmpxchg.org: fix crash with cgroup_disable=memory] Link: http://lkml.kernel.org/r/20200521215855.GB815153@cmpxchg.org Link: http://lkml.kernel.org/r/20200508183105.225460-16-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Debugged-by: Hugh Dickins <hughd@google.com> Debugged-by: Michal Hocko <mhocko@kernel.org> Cc: Alex Shi <alex.shi@linux.alibaba.com> Cc: "Kirill A. Shutemov" <kirill@shutemov.name> Cc: Roman Gushchin <guro@fb.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Balbir Singh <bsingharora@gmail.com> Cc: Naresh Kamboju <naresh.kamboju@linaro.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- init/Kconfig | 17 -------------- mm/memcontrol.c | 53 +++++++++++++++++++-------------------------- mm/swap_cgroup.c | 6 ----- 3 files changed, 24 insertions(+), 52 deletions(-) --- a/init/Kconfig~mm-memcontrol-make-swap-tracking-an-integral-part-of-memory-control +++ a/init/Kconfig @@ -819,24 +819,9 @@ config MEMCG Provides control over the memory footprint of tasks in a cgroup. config MEMCG_SWAP - bool "Swap controller" + bool depends on MEMCG && SWAP - help - Provides control over the swap space consumed by tasks in a cgroup. - -config MEMCG_SWAP_ENABLED - bool "Swap controller enabled by default" - depends on MEMCG_SWAP default y - help - Memory Resource Controller Swap Extension comes with its price in - a bigger memory consumption. General purpose distribution kernels - which want to enable the feature but keep it disabled by default - and let the user enable it by swapaccount=1 boot command line - parameter should have this option unselected. 
- For those who want to have the feature enabled by default should - select this option (if, for some reason, they need to disable it - then swapaccount=0 does the trick). config MEMCG_KMEM bool --- a/mm/memcontrol.c~mm-memcontrol-make-swap-tracking-an-integral-part-of-memory-control +++ a/mm/memcontrol.c @@ -83,14 +83,10 @@ static bool cgroup_memory_nokmem; /* Whether the swap controller is active */ #ifdef CONFIG_MEMCG_SWAP -#ifdef CONFIG_MEMCG_SWAP_ENABLED bool cgroup_memory_noswap __read_mostly; #else -bool cgroup_memory_noswap __read_mostly = 1; -#endif /* CONFIG_MEMCG_SWAP_ENABLED */ -#else #define cgroup_memory_noswap 1 -#endif /* CONFIG_MEMCG_SWAP */ +#endif #ifdef CONFIG_CGROUP_WRITEBACK static DECLARE_WAIT_QUEUE_HEAD(memcg_cgwb_frn_waitq); @@ -5360,8 +5356,7 @@ static struct page *mc_handle_swap_pte(s * we call find_get_page() with swapper_space directly. */ page = find_get_page(swap_address_space(ent), swp_offset(ent)); - if (do_memsw_account()) - entry->val = ent.val; + entry->val = ent.val; return page; } @@ -5395,8 +5390,7 @@ static struct page *mc_handle_file_pte(s page = find_get_entry(mapping, pgoff); if (xa_is_value(page)) { swp_entry_t swp = radix_to_swp_entry(page); - if (do_memsw_account()) - *entry = swp; + *entry = swp; page = find_get_page(swap_address_space(swp), swp_offset(swp)); } @@ -6529,6 +6523,9 @@ int mem_cgroup_charge(struct page *page, goto out; if (PageSwapCache(page)) { + swp_entry_t ent = { .val = page_private(page), }; + unsigned short id; + /* * Every swap fault against a single page tries to charge the * page, bail as early as possible. 
shmem_unuse() encounters @@ -6540,17 +6537,12 @@ int mem_cgroup_charge(struct page *page, if (compound_head(page)->mem_cgroup) goto out; - if (!cgroup_memory_noswap) { - swp_entry_t ent = { .val = page_private(page), }; - unsigned short id; - - id = lookup_swap_cgroup_id(ent); - rcu_read_lock(); - memcg = mem_cgroup_from_id(id); - if (memcg && !css_tryget_online(&memcg->css)) - memcg = NULL; - rcu_read_unlock(); - } + id = lookup_swap_cgroup_id(ent); + rcu_read_lock(); + memcg = mem_cgroup_from_id(id); + if (memcg && !css_tryget_online(&memcg->css)) + memcg = NULL; + rcu_read_unlock(); } if (!memcg) @@ -6567,7 +6559,7 @@ int mem_cgroup_charge(struct page *page, memcg_check_events(memcg, page); local_irq_enable(); - if (do_memsw_account() && PageSwapCache(page)) { + if (PageSwapCache(page)) { swp_entry_t entry = { .val = page_private(page) }; /* * The swap entry might not get freed for a long time, @@ -6952,7 +6944,7 @@ void mem_cgroup_swapout(struct page *pag VM_BUG_ON_PAGE(PageLRU(page), page); VM_BUG_ON_PAGE(page_count(page), page); - if (!do_memsw_account()) + if (cgroup_subsys_on_dfl(memory_cgrp_subsys)) return; memcg = page->mem_cgroup; @@ -6981,7 +6973,7 @@ void mem_cgroup_swapout(struct page *pag if (!mem_cgroup_is_root(memcg)) page_counter_uncharge(&memcg->memory, nr_entries); - if (memcg != swap_memcg) { + if (!cgroup_memory_noswap && memcg != swap_memcg) { if (!mem_cgroup_is_root(swap_memcg)) page_counter_charge(&swap_memcg->memsw, nr_entries); page_counter_uncharge(&memcg->memsw, nr_entries); @@ -7017,7 +7009,7 @@ int mem_cgroup_try_charge_swap(struct pa struct mem_cgroup *memcg; unsigned short oldid; - if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) || cgroup_memory_noswap) + if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) return 0; memcg = page->mem_cgroup; @@ -7033,7 +7025,7 @@ int mem_cgroup_try_charge_swap(struct pa memcg = mem_cgroup_id_get_online(memcg); - if (!mem_cgroup_is_root(memcg) && + if (!cgroup_memory_noswap && !mem_cgroup_is_root(memcg) && 
!page_counter_try_charge(&memcg->swap, nr_pages, &counter)) { memcg_memory_event(memcg, MEMCG_SWAP_MAX); memcg_memory_event(memcg, MEMCG_SWAP_FAIL); @@ -7061,14 +7053,11 @@ void mem_cgroup_uncharge_swap(swp_entry_ struct mem_cgroup *memcg; unsigned short id; - if (cgroup_memory_noswap) - return; - id = swap_cgroup_record(entry, 0, nr_pages); rcu_read_lock(); memcg = mem_cgroup_from_id(id); if (memcg) { - if (!mem_cgroup_is_root(memcg)) { + if (!cgroup_memory_noswap && !mem_cgroup_is_root(memcg)) { if (cgroup_subsys_on_dfl(memory_cgrp_subsys)) page_counter_uncharge(&memcg->swap, nr_pages); else @@ -7253,7 +7242,11 @@ static struct cftype memsw_files[] = { static int __init mem_cgroup_swap_init(void) { - if (mem_cgroup_disabled() || cgroup_memory_noswap) + /* No memory control -> no swap control */ + if (mem_cgroup_disabled()) + cgroup_memory_noswap = true; + + if (cgroup_memory_noswap) return 0; WARN_ON(cgroup_add_dfl_cftypes(&memory_cgrp_subsys, swap_files)); --- a/mm/swap_cgroup.c~mm-memcontrol-make-swap-tracking-an-integral-part-of-memory-control +++ a/mm/swap_cgroup.c @@ -171,9 +171,6 @@ int swap_cgroup_swapon(int type, unsigne unsigned long length; struct swap_cgroup_ctrl *ctrl; - if (cgroup_memory_noswap) - return 0; - length = DIV_ROUND_UP(max_pages, SC_PER_PAGE); array_size = length * sizeof(void *); @@ -209,9 +206,6 @@ void swap_cgroup_swapoff(int type) unsigned long i, length; struct swap_cgroup_ctrl *ctrl; - if (cgroup_memory_noswap) - return; - mutex_lock(&swap_cgroup_mutex); ctrl = &swap_cgroup_ctrl[type]; map = ctrl->map; _ ^ permalink raw reply [flat|nested] 349+ messages in thread
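The overhead estimate in the patch 098 changelog can be reproduced: the swap cgroup map stores one unsigned short id (2 bytes) of ownership record per swap slot, so 1G of swap costs 512k of map. A small check of that arithmetic, assuming 4K pages:

```c
#include <assert.h>

/* Reproduce the changelog arithmetic: one unsigned short (2 bytes) of
 * cgroup ownership record per swap slot.  Assumes 4K pages. */
static unsigned long swap_map_bytes(unsigned long swap_bytes)
{
	const unsigned long page_size = 4096;
	const unsigned long id_size = 2;	/* sizeof(unsigned short) */

	return (swap_bytes / page_size) * id_size;
}
```

At 262144 slots per gigabyte of swap, the map is a fraction of a percent of the swap space it describes, which is the changelog's argument for making the tracking unconditional.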
* [patch 099/131] mm: memcontrol: charge swapin pages on instantiation 2020-06-03 22:55 incoming Andrew Morton ` (97 preceding siblings ...) 2020-06-03 23:02 ` [patch 098/131] mm: memcontrol: make swap tracking an integral part of memory control Andrew Morton @ 2020-06-03 23:02 ` Andrew Morton 2020-06-03 23:02 ` [patch 100/131] mm: memcontrol: document the new swap control behavior Andrew Morton ` (32 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:02 UTC (permalink / raw) To: akpm, alex.shi, aquini, bsingharora, guro, hannes, hughd, iamjoonsoo.kim, kirill, linux-mm, mhocko, mm-commits, shakeelb, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: memcontrol: charge swapin pages on instantiation Right now, users that are otherwise memory controlled can easily escape their containment and allocate significant amounts of memory that they're not being charged for. That's because swap readahead pages are not being charged until somebody actually faults them into their page table. This can be exploited with MADV_WILLNEED, which triggers arbitrary readahead allocations without charging the pages. There are additional problems with the delayed charging of swap pages: 1. To implement refault/workingset detection for anonymous pages, we need to have a target LRU available at swapin time, but the LRU is not determinable until the page has been charged. 2. To implement per-cgroup LRU locking, we need page->mem_cgroup to be stable when the page is isolated from the LRU; otherwise, the locks change under us. But swapcache gets charged after it's already on the LRU, and even then we cannot isolate it ourselves (since charging is not exactly optional). The previous patch ensured we always maintain cgroup ownership records for swap pages. This patch moves the swapcache charging point from the fault handler to swapin time to fix all of the above problems. 
v2: simplify swapin error checking (Joonsoo) [hughd@google.com: fix livelock in __read_swap_cache_async()] Link: http://lkml.kernel.org/r/alpine.LSU.2.11.2005212246080.8458@eggly.anvils Link: http://lkml.kernel.org/r/20200508183105.225460-17-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Signed-off-by: Hugh Dickins <hughd@google.com> Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com> Cc: Hugh Dickins <hughd@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: "Kirill A. Shutemov" <kirill@shutemov.name> Cc: Michal Hocko <mhocko@suse.com> Cc: Roman Gushchin <guro@fb.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Balbir Singh <bsingharora@gmail.com> Cc: Rafael Aquini <aquini@redhat.com> Cc: Alex Shi <alex.shi@linux.alibaba.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/memory.c | 15 +++++- mm/shmem.c | 14 +++--- mm/swap_state.c | 99 ++++++++++++++++++++++++++-------------------- mm/swapfile.c | 6 -- 4 files changed, 75 insertions(+), 59 deletions(-) --- a/mm/memory.c~mm-memcontrol-charge-swapin-pages-on-instantiation +++ a/mm/memory.c @@ -3125,9 +3125,20 @@ vm_fault_t do_swap_page(struct vm_fault page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vmf->address); if (page) { + int err; + __SetPageLocked(page); __SetPageSwapBacked(page); set_page_private(page, entry.val); + + /* Tell memcg to use swap ownership records */ + SetPageSwapCache(page); + err = mem_cgroup_charge(page, vma->vm_mm, + GFP_KERNEL, false); + ClearPageSwapCache(page); + if (err) + goto out_page; + lru_cache_add_anon(page); swap_readpage(page, true); } @@ -3189,10 +3200,6 @@ vm_fault_t do_swap_page(struct vm_fault goto out_page; } - if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL, true)) { - ret = VM_FAULT_OOM; - goto out_page; - } cgroup_throttle_swaprate(page, GFP_KERNEL); /* --- a/mm/shmem.c~mm-memcontrol-charge-swapin-pages-on-instantiation +++ a/mm/shmem.c @@ -623,13 +623,15 @@ static int shmem_add_to_page_cache(struc page->mapping = mapping; 
page->index = index; - error = mem_cgroup_charge(page, charge_mm, gfp, PageSwapCache(page)); - if (error) { - if (!PageSwapCache(page) && PageTransHuge(page)) { - count_vm_event(THP_FILE_FALLBACK); - count_vm_event(THP_FILE_FALLBACK_CHARGE); + if (!PageSwapCache(page)) { + error = mem_cgroup_charge(page, charge_mm, gfp, false); + if (error) { + if (PageTransHuge(page)) { + count_vm_event(THP_FILE_FALLBACK); + count_vm_event(THP_FILE_FALLBACK_CHARGE); + } + goto error; } - goto error; } cgroup_throttle_swaprate(page, gfp); --- a/mm/swapfile.c~mm-memcontrol-charge-swapin-pages-on-instantiation +++ a/mm/swapfile.c @@ -1901,11 +1901,6 @@ static int unuse_pte(struct vm_area_stru if (unlikely(!page)) return -ENOMEM; - if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL, true)) { - ret = -ENOMEM; - goto out_nolock; - } - pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl); if (unlikely(!pte_same_as_swp(*pte, swp_entry_to_pte(entry)))) { ret = 0; @@ -1931,7 +1926,6 @@ static int unuse_pte(struct vm_area_stru activate_page(page); out: pte_unmap_unlock(pte, ptl); -out_nolock: if (page != swapcache) { unlock_page(page); put_page(page); --- a/mm/swap_state.c~mm-memcontrol-charge-swapin-pages-on-instantiation +++ a/mm/swap_state.c @@ -360,12 +360,13 @@ struct page *__read_swap_cache_async(swp struct vm_area_struct *vma, unsigned long addr, bool *new_page_allocated) { - struct page *found_page = NULL, *new_page = NULL; struct swap_info_struct *si; - int err; + struct page *page; + *new_page_allocated = false; - do { + for (;;) { + int err; /* * First check the swap cache. 
Since this is normally * called after lookup_swap_cache() failed, re-calling @@ -373,12 +374,12 @@ struct page *__read_swap_cache_async(swp */ si = get_swap_device(entry); if (!si) - break; - found_page = find_get_page(swap_address_space(entry), - swp_offset(entry)); + return NULL; + page = find_get_page(swap_address_space(entry), + swp_offset(entry)); put_swap_device(si); - if (found_page) - break; + if (page) + return page; /* * Just skip read ahead for unused swap slot. @@ -389,54 +390,66 @@ struct page *__read_swap_cache_async(swp * else swap_off will be aborted if we return NULL. */ if (!__swp_swapcount(entry) && swap_slot_cache_enabled) - break; + return NULL; /* - * Get a new page to read into from swap. + * Get a new page to read into from swap. Allocate it now, + * before marking swap_map SWAP_HAS_CACHE, when -EEXIST will + * cause any racers to loop around until we add it to cache. */ - if (!new_page) { - new_page = alloc_page_vma(gfp_mask, vma, addr); - if (!new_page) - break; /* Out of memory */ - } + page = alloc_page_vma(gfp_mask, vma, addr); + if (!page) + return NULL; /* * Swap entry may have been freed since our caller observed it. */ err = swapcache_prepare(entry); - if (err == -EEXIST) { - /* - * We might race against get_swap_page() and stumble - * across a SWAP_HAS_CACHE swap_map entry whose page - * has not been brought into the swapcache yet. - */ - cond_resched(); - continue; - } else if (err) /* swp entry is obsolete ? */ + if (!err) break; - /* May fail (-ENOMEM) if XArray node allocation failed. 
*/ - __SetPageLocked(new_page); - __SetPageSwapBacked(new_page); - err = add_to_swap_cache(new_page, entry, gfp_mask & GFP_KERNEL); - if (likely(!err)) { - /* Initiate read into locked page */ - SetPageWorkingset(new_page); - lru_cache_add_anon(new_page); - *new_page_allocated = true; - return new_page; - } - __ClearPageLocked(new_page); + put_page(page); + if (err != -EEXIST) + return NULL; + /* - * add_to_swap_cache() doesn't return -EEXIST, so we can safely - * clear SWAP_HAS_CACHE flag. + * We might race against __delete_from_swap_cache(), and + * stumble across a swap_map entry whose SWAP_HAS_CACHE + * has not yet been cleared. Or race against another + * __read_swap_cache_async(), which has set SWAP_HAS_CACHE + * in swap_map, but not yet added its page to swap cache. */ - put_swap_page(new_page, entry); - } while (err != -ENOMEM); + cond_resched(); + } - if (new_page) - put_page(new_page); - return found_page; + /* + * The swap entry is ours to swap in. Prepare the new page. + */ + + __SetPageLocked(page); + __SetPageSwapBacked(page); + + /* May fail (-ENOMEM) if XArray node allocation failed. */ + if (add_to_swap_cache(page, entry, gfp_mask & GFP_KERNEL)) { + put_swap_page(page, entry); + goto fail_unlock; + } + + if (mem_cgroup_charge(page, NULL, gfp_mask, false)) { + delete_from_swap_cache(page); + goto fail_unlock; + } + + /* Caller will initiate read into locked page */ + SetPageWorkingset(page); + lru_cache_add_anon(page); + *new_page_allocated = true; + return page; + +fail_unlock: + unlock_page(page); + put_page(page); + return NULL; } /* _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 100/131] mm: memcontrol: document the new swap control behavior 2020-06-03 22:55 incoming Andrew Morton ` (98 preceding siblings ...) 2020-06-03 23:02 ` [patch 099/131] mm: memcontrol: charge swapin pages on instantiation Andrew Morton @ 2020-06-03 23:02 ` Andrew Morton 2020-06-03 23:02 ` [patch 101/131] mm: memcontrol: delete unused lrucare handling Andrew Morton ` (31 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:02 UTC (permalink / raw) To: akpm, alex.shi, bsingharora, guro, hannes, hughd, iamjoonsoo.kim, kirill, linux-mm, mhocko, mm-commits, shakeelb, torvalds From: Alex Shi <alex.shi@linux.alibaba.com> Subject: mm: memcontrol: document the new swap control behavior Link: http://lkml.kernel.org/r/20200508183105.225460-18-hannes@cmpxchg.org Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Hugh Dickins <hughd@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: "Kirill A. Shutemov" <kirill@shutemov.name> Cc: Michal Hocko <mhocko@suse.com> Cc: Roman Gushchin <guro@fb.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- Documentation/admin-guide/cgroup-v1/memory.rst | 19 +++++---------- 1 file changed, 7 insertions(+), 12 deletions(-) --- a/Documentation/admin-guide/cgroup-v1/memory.rst~mm-memcontrol-document-the-new-swap-control-behavior +++ a/Documentation/admin-guide/cgroup-v1/memory.rst @@ -199,11 +199,11 @@ An RSS page is unaccounted when it's ful unaccounted when it's removed from radix-tree. Even if RSS pages are fully unmapped (by kswapd), they may exist as SwapCache in the system until they are really freed. Such SwapCaches are also accounted. -A swapped-in page is not accounted until it's mapped. +A swapped-in page is accounted after adding into swapcache. Note: The kernel does swapin-readahead and reads multiple swaps at once. 
-This means swapped-in pages may contain pages for other tasks than a task -causing page fault. So, we avoid accounting at swap-in I/O. +Since page's memcg recorded into swap whatever memsw enabled, the page will +be accounted after swapin. At page migration, accounting information is kept. @@ -222,18 +222,13 @@ the cgroup that brought it in -- this wi But see section 8.2: when moving a task to another cgroup, its pages may be recharged to the new cgroup, if move_charge_at_immigrate has been chosen. -Exception: If CONFIG_MEMCG_SWAP is not used. -When you do swapoff and make swapped-out pages of shmem(tmpfs) to -be backed into memory in force, charges for pages are accounted against the -caller of swapoff rather than the users of shmem. - -2.4 Swap Extension (CONFIG_MEMCG_SWAP) +2.4 Swap Extension -------------------------------------- -Swap Extension allows you to record charge for swap. A swapped-in page is -charged back to original page allocator if possible. +Swap usage is always recorded for each of cgroup. Swap Extension allows you to +read and limit it. -When swap is accounted, following files are added. +When CONFIG_SWAP is enabled, following files are added. - memory.memsw.usage_in_bytes. - memory.memsw.limit_in_bytes. _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 101/131] mm: memcontrol: delete unused lrucare handling 2020-06-03 22:55 incoming Andrew Morton ` (99 preceding siblings ...) 2020-06-03 23:02 ` [patch 100/131] mm: memcontrol: document the new swap control behavior Andrew Morton @ 2020-06-03 23:02 ` Andrew Morton 2020-06-03 23:02 ` [patch 102/131] mm: memcontrol: update page->mem_cgroup stability rules Andrew Morton ` (30 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:02 UTC (permalink / raw) To: akpm, alex.shi, bsingharora, guro, hannes, hughd, iamjoonsoo.kim, kirill, linux-mm, mhocko, mm-commits, shakeelb, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: memcontrol: delete unused lrucare handling Swapin faults were the last event to charge pages after they had already been put on the LRU list. Now that we charge directly on swapin, the lrucare portion of the charge code is unused. Link: http://lkml.kernel.org/r/20200508183105.225460-19-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Alex Shi <alex.shi@linux.alibaba.com> Cc: Hugh Dickins <hughd@google.com> Cc: "Kirill A. 
Shutemov" <kirill@shutemov.name> Cc: Michal Hocko <mhocko@suse.com> Cc: Roman Gushchin <guro@fb.com> Cc: Balbir Singh <bsingharora@gmail.com> Cc: Shakeel Butt <shakeelb@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/memcontrol.h | 5 +-- kernel/events/uprobes.c | 3 - mm/filemap.c | 2 - mm/huge_memory.c | 2 - mm/khugepaged.c | 4 +- mm/memcontrol.c | 57 ++--------------------------------- mm/memory.c | 8 ++-- mm/migrate.c | 2 - mm/shmem.c | 2 - mm/swap_state.c | 2 - mm/userfaultfd.c | 2 - 11 files changed, 19 insertions(+), 70 deletions(-) --- a/include/linux/memcontrol.h~mm-memcontrol-delete-unused-lrucare-handling +++ a/include/linux/memcontrol.h @@ -355,8 +355,7 @@ static inline unsigned long mem_cgroup_p enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root, struct mem_cgroup *memcg); -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask, - bool lrucare); +int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask); void mem_cgroup_uncharge(struct page *page); void mem_cgroup_uncharge_list(struct list_head *page_list); @@ -839,7 +838,7 @@ static inline enum mem_cgroup_protection } static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm, - gfp_t gfp_mask, bool lrucare) + gfp_t gfp_mask) { return 0; } --- a/kernel/events/uprobes.c~mm-memcontrol-delete-unused-lrucare-handling +++ a/kernel/events/uprobes.c @@ -167,8 +167,7 @@ static int __replace_page(struct vm_area addr + PAGE_SIZE); if (new_page) { - err = mem_cgroup_charge(new_page, vma->vm_mm, GFP_KERNEL, - false); + err = mem_cgroup_charge(new_page, vma->vm_mm, GFP_KERNEL); if (err) return err; } --- a/mm/filemap.c~mm-memcontrol-delete-unused-lrucare-handling +++ a/mm/filemap.c @@ -845,7 +845,7 @@ static int __add_to_page_cache_locked(st page->index = offset; if (!huge) { - error = mem_cgroup_charge(page, current->mm, gfp_mask, false); + error = mem_cgroup_charge(page, current->mm, gfp_mask); 
if (error) goto error; } --- a/mm/huge_memory.c~mm-memcontrol-delete-unused-lrucare-handling +++ a/mm/huge_memory.c @@ -593,7 +593,7 @@ static vm_fault_t __do_huge_pmd_anonymou VM_BUG_ON_PAGE(!PageCompound(page), page); - if (mem_cgroup_charge(page, vma->vm_mm, gfp, false)) { + if (mem_cgroup_charge(page, vma->vm_mm, gfp)) { put_page(page); count_vm_event(THP_FAULT_FALLBACK); count_vm_event(THP_FAULT_FALLBACK_CHARGE); --- a/mm/khugepaged.c~mm-memcontrol-delete-unused-lrucare-handling +++ a/mm/khugepaged.c @@ -1059,7 +1059,7 @@ static void collapse_huge_page(struct mm goto out_nolock; } - if (unlikely(mem_cgroup_charge(new_page, mm, gfp, false))) { + if (unlikely(mem_cgroup_charge(new_page, mm, gfp))) { result = SCAN_CGROUP_CHARGE_FAIL; goto out_nolock; } @@ -1632,7 +1632,7 @@ static void collapse_file(struct mm_stru goto out; } - if (unlikely(mem_cgroup_charge(new_page, mm, gfp, false))) { + if (unlikely(mem_cgroup_charge(new_page, mm, gfp))) { result = SCAN_CGROUP_CHARGE_FAIL; goto out; } --- a/mm/memcontrol.c~mm-memcontrol-delete-unused-lrucare-handling +++ a/mm/memcontrol.c @@ -2655,51 +2655,9 @@ static void cancel_charge(struct mem_cgr } #endif -static void lock_page_lru(struct page *page, int *isolated) +static void commit_charge(struct page *page, struct mem_cgroup *memcg) { - pg_data_t *pgdat = page_pgdat(page); - - spin_lock_irq(&pgdat->lru_lock); - if (PageLRU(page)) { - struct lruvec *lruvec; - - lruvec = mem_cgroup_page_lruvec(page, pgdat); - ClearPageLRU(page); - del_page_from_lru_list(page, lruvec, page_lru(page)); - *isolated = 1; - } else - *isolated = 0; -} - -static void unlock_page_lru(struct page *page, int isolated) -{ - pg_data_t *pgdat = page_pgdat(page); - - if (isolated) { - struct lruvec *lruvec; - - lruvec = mem_cgroup_page_lruvec(page, pgdat); - VM_BUG_ON_PAGE(PageLRU(page), page); - SetPageLRU(page); - add_page_to_lru_list(page, lruvec, page_lru(page)); - } - spin_unlock_irq(&pgdat->lru_lock); -} - -static void commit_charge(struct page 
*page, struct mem_cgroup *memcg, - bool lrucare) -{ - int isolated; - VM_BUG_ON_PAGE(page->mem_cgroup, page); - - /* - * In some cases, SwapCache and FUSE(splice_buf->radixtree), the page - * may already be on some other mem_cgroup's LRU. Take care of it. - */ - if (lrucare) - lock_page_lru(page, &isolated); - /* * Nobody should be changing or seriously looking at * page->mem_cgroup at this point: @@ -2715,9 +2673,6 @@ static void commit_charge(struct page *p * have the page locked */ page->mem_cgroup = memcg; - - if (lrucare) - unlock_page_lru(page, isolated); } #ifdef CONFIG_MEMCG_KMEM @@ -6503,22 +6458,18 @@ out: * @page: page to charge * @mm: mm context of the victim * @gfp_mask: reclaim mode - * @lrucare: page might be on the LRU already * * Try to charge @page to the memcg that @mm belongs to, reclaiming * pages according to @gfp_mask if necessary. * * Returns 0 on success. Otherwise, an error code is returned. */ -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask, - bool lrucare) +int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask) { unsigned int nr_pages = hpage_nr_pages(page); struct mem_cgroup *memcg = NULL; int ret = 0; - VM_BUG_ON_PAGE(PageLRU(page) && !lrucare, page); - if (mem_cgroup_disabled()) goto out; @@ -6552,7 +6503,7 @@ int mem_cgroup_charge(struct page *page, if (ret) goto out_put; - commit_charge(page, memcg, lrucare); + commit_charge(page, memcg); local_irq_disable(); mem_cgroup_charge_statistics(memcg, page, nr_pages); @@ -6753,7 +6704,7 @@ void mem_cgroup_migrate(struct page *old page_counter_charge(&memcg->memsw, nr_pages); css_get_many(&memcg->css, nr_pages); - commit_charge(newpage, memcg, false); + commit_charge(newpage, memcg); local_irq_save(flags); mem_cgroup_charge_statistics(memcg, newpage, nr_pages); --- a/mm/memory.c~mm-memcontrol-delete-unused-lrucare-handling +++ a/mm/memory.c @@ -2675,7 +2675,7 @@ static vm_fault_t wp_page_copy(struct vm } } - if 
(mem_cgroup_charge(new_page, mm, GFP_KERNEL, false)) + if (mem_cgroup_charge(new_page, mm, GFP_KERNEL)) goto oom_free_new; cgroup_throttle_swaprate(new_page, GFP_KERNEL); @@ -3134,7 +3134,7 @@ vm_fault_t do_swap_page(struct vm_fault /* Tell memcg to use swap ownership records */ SetPageSwapCache(page); err = mem_cgroup_charge(page, vma->vm_mm, - GFP_KERNEL, false); + GFP_KERNEL); ClearPageSwapCache(page); if (err) goto out_page; @@ -3358,7 +3358,7 @@ static vm_fault_t do_anonymous_page(stru if (!page) goto oom; - if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL, false)) + if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL)) goto oom_free_page; cgroup_throttle_swaprate(page, GFP_KERNEL); @@ -3854,7 +3854,7 @@ static vm_fault_t do_cow_fault(struct vm if (!vmf->cow_page) return VM_FAULT_OOM; - if (mem_cgroup_charge(vmf->cow_page, vma->vm_mm, GFP_KERNEL, false)) { + if (mem_cgroup_charge(vmf->cow_page, vma->vm_mm, GFP_KERNEL)) { put_page(vmf->cow_page); return VM_FAULT_OOM; } --- a/mm/migrate.c~mm-memcontrol-delete-unused-lrucare-handling +++ a/mm/migrate.c @@ -2786,7 +2786,7 @@ static void migrate_vma_insert_page(stru if (unlikely(anon_vma_prepare(vma))) goto abort; - if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL, false)) + if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL)) goto abort; /* --- a/mm/shmem.c~mm-memcontrol-delete-unused-lrucare-handling +++ a/mm/shmem.c @@ -624,7 +624,7 @@ static int shmem_add_to_page_cache(struc page->index = index; if (!PageSwapCache(page)) { - error = mem_cgroup_charge(page, charge_mm, gfp, false); + error = mem_cgroup_charge(page, charge_mm, gfp); if (error) { if (PageTransHuge(page)) { count_vm_event(THP_FILE_FALLBACK); --- a/mm/swap_state.c~mm-memcontrol-delete-unused-lrucare-handling +++ a/mm/swap_state.c @@ -435,7 +435,7 @@ struct page *__read_swap_cache_async(swp goto fail_unlock; } - if (mem_cgroup_charge(page, NULL, gfp_mask, false)) { + if (mem_cgroup_charge(page, NULL, gfp_mask)) { delete_from_swap_cache(page); goto 
fail_unlock; } --- a/mm/userfaultfd.c~mm-memcontrol-delete-unused-lrucare-handling +++ a/mm/userfaultfd.c @@ -96,7 +96,7 @@ static int mcopy_atomic_pte(struct mm_st __SetPageUptodate(page); ret = -ENOMEM; - if (mem_cgroup_charge(page, dst_mm, GFP_KERNEL, false)) + if (mem_cgroup_charge(page, dst_mm, GFP_KERNEL)) goto out_release; _dst_pte = pte_mkdirty(mk_pte(page, dst_vma->vm_page_prot)); _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 102/131] mm: memcontrol: update page->mem_cgroup stability rules 2020-06-03 22:55 incoming Andrew Morton ` (100 preceding siblings ...) 2020-06-03 23:02 ` [patch 101/131] mm: memcontrol: delete unused lrucare handling Andrew Morton @ 2020-06-03 23:02 ` Andrew Morton 2020-06-03 23:02 ` [patch 103/131] mm: fix LRU balancing effect of new transparent huge pages Andrew Morton ` (29 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:02 UTC (permalink / raw) To: akpm, alex.shi, bsingharora, guro, hannes, hughd, iamjoonsoo.kim, kirill, linux-mm, mhocko, mm-commits, shakeelb, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: memcontrol: update page->mem_cgroup stability rules The previous patches have simplified the access rules around page->mem_cgroup somewhat: 1. We never change page->mem_cgroup while the page is isolated by somebody else. This was by far the biggest exception to our rules and it didn't stop at lock_page() or lock_page_memcg(). 2. We charge pages before they get put into page tables now, so the somewhat fishy rule about "can be in page table as long as it's still locked" is now gone and boiled down to having an exclusive reference to the page. Document the new rules. Any of the following will stabilize the page->mem_cgroup association: - the page lock - LRU isolation - lock_page_memcg() - exclusive access to the page Link: http://lkml.kernel.org/r/20200508183105.225460-20-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com> Reviewed-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Hugh Dickins <hughd@google.com> Cc: "Kirill A. 
Shutemov" <kirill@shutemov.name> Cc: Michal Hocko <mhocko@suse.com> Cc: Roman Gushchin <guro@fb.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/memcontrol.c | 21 +++++++-------------- 1 file changed, 7 insertions(+), 14 deletions(-) --- a/mm/memcontrol.c~mm-memcontrol-update-page-mem_cgroup-stability-rules +++ a/mm/memcontrol.c @@ -1201,9 +1201,8 @@ int mem_cgroup_scan_tasks(struct mem_cgr * @page: the page * @pgdat: pgdat of the page * - * This function is only safe when following the LRU page isolation - * and putback protocol: the LRU lock must be held, and the page must - * either be PageLRU() or the caller must have isolated/allocated it. + * This function relies on page->mem_cgroup being stable - see the + * access rules in commit_charge(). */ struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgdat) { @@ -2659,18 +2658,12 @@ static void commit_charge(struct page *p { VM_BUG_ON_PAGE(page->mem_cgroup, page); /* - * Nobody should be changing or seriously looking at - * page->mem_cgroup at this point: + * Any of the following ensures page->mem_cgroup stability: * - * - the page is uncharged - * - * - the page is off-LRU - * - * - an anonymous fault has exclusive page access, except for - * a locked page table - * - * - a page cache insertion, a swapin fault, or a migration - * have the page locked + * - the page lock + * - LRU isolation + * - lock_page_memcg() + * - exclusive reference */ page->mem_cgroup = memcg; } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 103/131] mm: fix LRU balancing effect of new transparent huge pages 2020-06-03 22:55 incoming Andrew Morton ` (101 preceding siblings ...) 2020-06-03 23:02 ` [patch 102/131] mm: memcontrol: update page->mem_cgroup stability rules Andrew Morton @ 2020-06-03 23:02 ` Andrew Morton 2020-06-03 23:02 ` [patch 104/131] mm: keep separate anon and file statistics on page reclaim activity Andrew Morton ` (28 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:02 UTC (permalink / raw) To: akpm, hannes, iamjoonsoo.kim, linux-mm, mhocko, minchan, mm-commits, riel, shakeelb, torvalds From: Johannes Weiner <hannes@cmpxchg.org>
Subject: mm: fix LRU balancing effect of new transparent huge pages

The reclaim code that balances between swapping and cache reclaim tries to predict likely reuse based on in-memory reference patterns alone. This works in many cases, but when it fails it cannot detect when the cache is thrashing pathologically, or when we're in the middle of a swap storm. The high seek cost of rotational drives under which the algorithm evolved also meant that mistakes could quickly result in lockups from too aggressive swapping (which is predominantly random IO). As a result, the balancing code has been tuned over time to a point where it mostly goes for page cache and defers swapping until the VM is under significant memory pressure.

The resulting strategy doesn't make optimal caching decisions - where optimal is the least amount of IO required to execute the workload. The proliferation of fast random IO devices such as SSDs, in-memory compression such as zswap, and persistent memory technologies on the horizon has made this undesirable behavior very noticeable: Even in the presence of large amounts of cold anonymous memory and a capable swap device, the VM refuses to even seriously scan these pages, and can leave the page cache thrashing needlessly. This series sets out to address this.
Since commit a528910e12ec ("mm: thrash detection-based file cache sizing") we have exact tracking of refault IO - the ultimate cost of reclaiming the wrong pages. This allows us to use an IO cost based balancing model that is more aggressive about scanning anonymous memory when the cache is thrashing, while being able to avoid unnecessary swap storms. These patches base the LRU balance on the rate of refaults on each list, times the relative IO cost between swap device and filesystem (swappiness), in order to optimize reclaim for least IO cost incurred.

History

I floated these changes in 2016. At the time they were incomplete and full of workarounds due to a lack of infrastructure in the reclaim code: We didn't have PageWorkingset, we didn't have hierarchical cgroup statistics, and we had problems with the cgroup swap controller. As swapping wasn't too high a priority then, the patches stalled out. With all dependencies in place now, here we are again with much cleaner, feature-complete patches. I kept the acks for patches that stayed materially the same :-)

Below is a series of test results that demonstrate certain problematic behavior of the current code, as well as showcase the new code's more predictable and appropriate balancing decisions.

Test #1: No convergence

This test shows an edge case where the VM currently doesn't converge at all on a new file workingset with a stale anon/tmpfs set. The test sets up a cold anon set the size of 3/4 RAM, then tries to establish a new file set half the size of RAM (flat access pattern). The vanilla kernel refuses to even scan anon pages and never converges. The file set is perpetually served from the filesystem. The first test kernel is with the series up to the workingset patch applied. This allows thrashing page cache to challenge the anonymous workingset. The VM then scans the lists based on the current scanned/rotated balancing algorithm.
It converges on a stable state where all cold anon pages are pushed out and the fileset is served entirely from cache:

                       noconverge/5.7-rc5-mm     noconverge/5.7-rc5-mm-workingset
Scanned               417719308.00 ( +0.00%)      64091155.00 ( -84.66%)
Reclaimed             417711094.00 ( +0.00%)      61640308.00 ( -85.24%)
Reclaim efficiency %        100.00 ( +0.00%)           96.18 (  -3.78%)
Scanned file          417719308.00 ( +0.00%)      59211118.00 ( -85.83%)
Scanned anon                  0.00 ( +0.00%)       4880037.00 (        )
Swapouts                      0.00 ( +0.00%)       2439957.00 (        )
Swapins                       0.00 ( +0.00%)           257.00 (        )
Refaults              415246605.00 ( +0.00%)      59183722.00 ( -85.75%)
Restore refaults              0.00 ( +0.00%)      54988252.00 (        )

The second test kernel is with the full patch series applied, which replaces the scanned/rotated ratios with refault/swapin rate-based balancing. It evicts the cold anon pages more aggressively in the presence of a thrashing cache and the absence of swapins, and so converges with about 60% of the IO and reclaim activity:

              noconverge/5.7-rc5-mm-workingset   noconverge/5.7-rc5-mm-lrubalance
Scanned                64091155.00 ( +0.00%)      37579741.00 ( -41.37%)
Reclaimed              61640308.00 ( +0.00%)      35129293.00 ( -43.01%)
Reclaim efficiency %         96.18 ( +0.00%)            93.48 (  -2.78%)
Scanned file           59211118.00 ( +0.00%)      32708385.00 ( -44.76%)
Scanned anon            4880037.00 ( +0.00%)       4871356.00 (  -0.18%)
Swapouts                2439957.00 ( +0.00%)       2435565.00 (  -0.18%)
Swapins                     257.00 ( +0.00%)           262.00 (  +1.94%)
Refaults               59183722.00 ( +0.00%)      32675667.00 ( -44.79%)
Restore refaults       54988252.00 ( +0.00%)      28480430.00 ( -48.21%)

We're triggering this case in host sideloading scenarios: When a host's primary workload is not saturating the machine (primary load is usually driven by user activity), we can optimistically sideload a batch job; if user activity picks up and the primary workload needs the whole host during this time, we freeze the sideload and rely on it getting pushed to swap.
Frequently that swapping doesn't happen and the completely inactive sideload simply stays resident while the expanding primary workload is struggling to gain ground.

Test #2: Kernel build

This test is a kernel build that is slightly memory-restricted (make -j4 inside a 400M cgroup). Despite the very aggressive swapping of cold anon pages in test #1, this test shows that the new kernel carefully balances swap against cache refaults when both the file and the cache set are pressured. It shows the patched kernel to be slightly better at finding the coldest memory from the combined anon and file set to evict under pressure. The result is lower aggregate reclaim and paging activity:

                        5.7-rc5-mm     5.7-rc5-mm-lrubalance
Real time                210.60 ( +0.00%)      210.97 (  +0.18%)
User time                745.42 ( +0.00%)      746.48 (  +0.14%)
System time               69.78 ( +0.00%)       69.79 (  +0.02%)
Scanned file          354682.00 ( +0.00%)   293661.00 ( -17.20%)
Scanned anon          465381.00 ( +0.00%)   378144.00 ( -18.75%)
Swapouts              185920.00 ( +0.00%)   147801.00 ( -20.50%)
Swapins                34583.00 ( +0.00%)    32491.00 (  -6.05%)
Refaults              212664.00 ( +0.00%)   172409.00 ( -18.93%)
Restore refaults       48861.00 ( +0.00%)    80091.00 ( +63.91%)
Total paging IO       433167.00 ( +0.00%)   352701.00 ( -18.58%)

Test #3: Overload

This next test is not about performance, but rather about the predictability of the algorithm. The current balancing behavior doesn't always lead to comprehensible results, which makes performance analysis and parameter tuning (e.g. swappiness) very difficult. The test shows the balancing behavior under equivalent anon and file input. Anon and file sets are created of equal size (3/4 RAM), have the same access patterns (a hot-cold gradient), and synchronized access rates. Swappiness is raised from the default of 60 to 100 to indicate equal IO cost between swap and cache. With the vanilla balancing code, anon scans make up around 9% of the total pages scanned, or a ~1:10 ratio.
This is a surprisingly skewed ratio, and it's an outcome that is hard to explain given the input parameters to the VM. The new balancing model targets a 1:2 balance: All else being equal, reclaiming a file page costs one page IO - the refault; reclaiming an anon page costs two IOs - the swapout and the swapin. In the test we observe a ~1:3 balance. The scanned and paging IO numbers indicate that the anon LRU algorithm we have in place right now does a slightly worse job at picking the coldest pages compared to the file algorithm. There is ongoing work to improve this, like Joonsoo's anon workingset patches; however, it's difficult to compare the two aging strategies when the balancing between them is behaving unintuitively. The slightly less efficient anon reclaim results in a deviation from the optimal 1:2 scan ratio we would like to see here - however, 1:3 is much closer to what we'd want to see in this test than the vanilla kernel's aging of 10+ cache pages for every anonymous one:

          overload-100/5.7-rc5-mm-workingset     overload-100/5.7-rc5-mm-lrubalance-realfile
Scanned               533633725.00 ( +0.00%)     595687785.00 (  +11.63%)
Reclaimed             494325440.00 ( +0.00%)     518154380.00 (   +4.82%)
Reclaim efficiency %        92.63 ( +0.00%)            86.98 (   -6.03%)
Scanned file          484532894.00 ( +0.00%)     456937722.00 (   -5.70%)
Scanned anon           49100831.00 ( +0.00%)     138750063.00 ( +182.58%)
Swapouts                8096423.00 ( +0.00%)      48982142.00 ( +504.98%)
Swapins                10027384.00 ( +0.00%)      62325044.00 ( +521.55%)
Refaults              479819973.00 ( +0.00%)     451309483.00 (   -5.94%)
Restore refaults      426422087.00 ( +0.00%)     399914067.00 (   -6.22%)
Total paging IO       497943780.00 ( +0.00%)     562616669.00 (  +12.99%)

Test #4: Parallel IO

It's important to note that these patches only affect the situation where the kernel has to reclaim workingset memory, which is usually a transitional period. The vast majority of page reclaim occurring in a system is from trimming the ever-expanding page cache. These patches don't affect cache trimming behavior.
We never swap as long as we only have use-once cache moving through the file LRU; we only consider swapping when the cache is actively thrashing. The following test demonstrates this. It has an anon workingset that takes up half of RAM and then writes a file that is twice the size of RAM out to disk. As the cache is funneled through the inactive file list, no anon pages are scanned (aside from apparently some background noise of 10 pages):

                        5.7-rc5-mm     5.7-rc5-mm-lrubalance
Scanned              10714722.00 ( +0.00%)   10723445.00 (  +0.08%)
Reclaimed            10703596.00 ( +0.00%)   10712166.00 (  +0.08%)
Reclaim efficiency %       99.90 ( +0.00%)         99.89 (  -0.00%)
Scanned file         10714722.00 ( +0.00%)   10723435.00 (  +0.08%)
Scanned anon                0.00 ( +0.00%)         10.00 (        )
Swapouts                    0.00 ( +0.00%)          7.00 (        )
Swapins                     0.00 ( +0.00%)          0.00 (  +0.00%)
Refaults                   92.00 ( +0.00%)         41.00 ( -54.84%)
Restore refaults            0.00 ( +0.00%)          0.00 (  +0.00%)
Total paging IO            92.00 ( +0.00%)         48.00 ( -47.31%)

This patch (of 14):

Currently, THPs are counted as single pages until they are split right before being swapped out. However, at that point the VM is already in the middle of reclaim, and adjusting the LRU balance then is useless. Always account THP by the number of basepages, and remove the fixup from the splitting path.
Link: http://lkml.kernel.org/r/20200520232525.798933-1-hannes@cmpxchg.org Link: http://lkml.kernel.org/r/20200520232525.798933-2-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Signed-off-by: Shakeel Butt <shakeelb@google.com> Reviewed-by: Rik van Riel <riel@surriel.com> Reviewed-by: Shakeel Butt <shakeelb@google.com> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Minchan Kim <minchan@kernel.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/swap.c | 25 +++++++++++-------------- 1 file changed, 11 insertions(+), 14 deletions(-) --- a/mm/swap.c~mm-fix-lru-balancing-effect-of-new-transparent-huge-pages +++ a/mm/swap.c @@ -279,13 +279,14 @@ void rotate_reclaimable_page(struct page } static void update_page_reclaim_stat(struct lruvec *lruvec, - int file, int rotated) + int file, int rotated, + unsigned int nr_pages) { struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat; - reclaim_stat->recent_scanned[file]++; + reclaim_stat->recent_scanned[file] += nr_pages; if (rotated) - reclaim_stat->recent_rotated[file]++; + reclaim_stat->recent_rotated[file] += nr_pages; } static void __activate_page(struct page *page, struct lruvec *lruvec, @@ -302,7 +303,7 @@ static void __activate_page(struct page trace_mm_lru_activate(page); __count_vm_event(PGACTIVATE); - update_page_reclaim_stat(lruvec, file, 1); + update_page_reclaim_stat(lruvec, file, 1, hpage_nr_pages(page)); } } @@ -564,7 +565,7 @@ static void lru_deactivate_file_fn(struc if (active) __count_vm_event(PGDEACTIVATE); - update_page_reclaim_stat(lruvec, file, 0); + update_page_reclaim_stat(lruvec, file, 0, hpage_nr_pages(page)); } static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec, @@ -580,7 +581,7 @@ static void lru_deactivate_fn(struct pag add_page_to_lru_list(page, lruvec, lru); __count_vm_events(PGDEACTIVATE, hpage_nr_pages(page)); - update_page_reclaim_stat(lruvec, file, 0); + 
update_page_reclaim_stat(lruvec, file, 0, hpage_nr_pages(page)); } } @@ -605,7 +606,7 @@ static void lru_lazyfree_fn(struct page __count_vm_events(PGLAZYFREE, hpage_nr_pages(page)); count_memcg_page_event(page, PGLAZYFREE); - update_page_reclaim_stat(lruvec, 1, 0); + update_page_reclaim_stat(lruvec, 1, 0, hpage_nr_pages(page)); } } @@ -929,8 +930,6 @@ EXPORT_SYMBOL(__pagevec_release); void lru_add_page_tail(struct page *page, struct page *page_tail, struct lruvec *lruvec, struct list_head *list) { - const int file = 0; - VM_BUG_ON_PAGE(!PageHead(page), page); VM_BUG_ON_PAGE(PageCompound(page_tail), page); VM_BUG_ON_PAGE(PageLRU(page_tail), page); @@ -956,9 +955,6 @@ void lru_add_page_tail(struct page *page add_page_to_lru_list_tail(page_tail, lruvec, page_lru(page_tail)); } - - if (!PageUnevictable(page)) - update_page_reclaim_stat(lruvec, file, PageActive(page_tail)); } #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ @@ -1001,8 +997,9 @@ static void __pagevec_lru_add_fn(struct if (page_evictable(page)) { lru = page_lru(page); - update_page_reclaim_stat(lruvec, page_is_file_lru(page), - PageActive(page)); + update_page_reclaim_stat(lruvec, is_file_lru(lru), + PageActive(page), + hpage_nr_pages(page)); if (was_unevictable) count_vm_event(UNEVICTABLE_PGRESCUED); } else { _ ^ permalink raw reply [flat|nested] 349+ messages in thread
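The bookkeeping change in this patch can be illustrated outside the kernel: counting a THP as one unit until split time under-weights it in the reclaim-stat ratio, while accounting by base pages keeps the balance proportional to the memory actually involved. A minimal Python sketch of the counter update (hypothetical userspace model, not kernel code; `HPAGE_NR_PAGES` assumes a 2MB THP of 4kB base pages on x86-64):

```python
HPAGE_NR_PAGES = 512  # base pages per 2MB THP on x86-64


class ReclaimStat:
    """Toy model of the per-lruvec recent_scanned/recent_rotated counters."""

    def __init__(self):
        self.recent_scanned = [0, 0]  # index 0: anon, index 1: file
        self.recent_rotated = [0, 0]

    def update(self, file, rotated, nr_pages):
        # Mirrors update_page_reclaim_stat() after the patch: every event
        # is weighted by the number of base pages it represents.
        self.recent_scanned[file] += nr_pages
        if rotated:
            self.recent_rotated[file] += nr_pages


def hpage_nr_pages(is_thp):
    # Analogous to the kernel's hpage_nr_pages(): 512 for a THP, 1 otherwise.
    return HPAGE_NR_PAGES if is_thp else 1


# One activated anon THP now moves the balance as much as 512 small pages.
stat = ReclaimStat()
stat.update(file=0, rotated=1, nr_pages=hpage_nr_pages(True))
stat.update(file=1, rotated=0, nr_pages=hpage_nr_pages(False))
```

With the pre-patch accounting both events would have counted as 1, even though the THP represents 512 times the memory.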
* [patch 104/131] mm: keep separate anon and file statistics on page reclaim activity 2020-06-03 22:55 incoming Andrew Morton ` (102 preceding siblings ...) 2020-06-03 23:02 ` [patch 103/131] mm: fix LRU balancing effect of new transparent huge pages Andrew Morton @ 2020-06-03 23:02 ` Andrew Morton 2020-06-03 23:02 ` [patch 105/131] mm: allow swappiness that prefers reclaiming anon over the file workingset Andrew Morton ` (27 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:02 UTC (permalink / raw) To: akpm, hannes, iamjoonsoo.kim, linux-mm, mhocko, minchan, mm-commits, riel, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: keep separate anon and file statistics on page reclaim activity Having statistics on pages scanned and pages reclaimed for both anon and file pages makes it easier to evaluate changes to LRU balancing. While at it, clean up the stat-keeping mess for isolation, putback, reclaim stats etc. a bit: first the physical LRU operation (isolation and putback), followed by vmstats, reclaim_stats, and then vm events. 
Link: http://lkml.kernel.org/r/20200520232525.798933-3-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Rik van Riel <riel@surriel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/vm_event_item.h | 4 ++++ mm/vmscan.c | 17 +++++++++-------- mm/vmstat.c | 4 ++++ 3 files changed, 17 insertions(+), 8 deletions(-) --- a/include/linux/vm_event_item.h~mm-keep-separate-anon-and-file-statistics-on-page-reclaim-activity +++ a/include/linux/vm_event_item.h @@ -35,6 +35,10 @@ enum vm_event_item { PGPGIN, PGPGOUT, PS PGSCAN_KSWAPD, PGSCAN_DIRECT, PGSCAN_DIRECT_THROTTLE, + PGSCAN_ANON, + PGSCAN_FILE, + PGSTEAL_ANON, + PGSTEAL_FILE, #ifdef CONFIG_NUMA PGSCAN_ZONE_RECLAIM_FAILED, #endif --- a/mm/vmscan.c~mm-keep-separate-anon-and-file-statistics-on-page-reclaim-activity +++ a/mm/vmscan.c @@ -1913,7 +1913,7 @@ shrink_inactive_list(unsigned long nr_to unsigned int nr_reclaimed = 0; unsigned long nr_taken; struct reclaim_stat stat; - int file = is_file_lru(lru); + bool file = is_file_lru(lru); enum vm_event_item item; struct pglist_data *pgdat = lruvec_pgdat(lruvec); struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat; @@ -1941,11 +1941,12 @@ shrink_inactive_list(unsigned long nr_to __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, nr_taken); reclaim_stat->recent_scanned[file] += nr_taken; - item = current_is_kswapd() ? 
PGSCAN_KSWAPD : PGSCAN_DIRECT; if (!cgroup_reclaim(sc)) __count_vm_events(item, nr_scanned); __count_memcg_events(lruvec_memcg(lruvec), item, nr_scanned); + __count_vm_events(PGSCAN_ANON + file, nr_scanned); + spin_unlock_irq(&pgdat->lru_lock); if (nr_taken == 0) @@ -1956,16 +1957,16 @@ shrink_inactive_list(unsigned long nr_to spin_lock_irq(&pgdat->lru_lock); + move_pages_to_lru(lruvec, &page_list); + + __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken); + reclaim_stat->recent_rotated[0] += stat.nr_activate[0]; + reclaim_stat->recent_rotated[1] += stat.nr_activate[1]; item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT; if (!cgroup_reclaim(sc)) __count_vm_events(item, nr_reclaimed); __count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed); - reclaim_stat->recent_rotated[0] += stat.nr_activate[0]; - reclaim_stat->recent_rotated[1] += stat.nr_activate[1]; - - move_pages_to_lru(lruvec, &page_list); - - __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken); + __count_vm_events(PGSTEAL_ANON + file, nr_reclaimed); spin_unlock_irq(&pgdat->lru_lock); --- a/mm/vmstat.c~mm-keep-separate-anon-and-file-statistics-on-page-reclaim-activity +++ a/mm/vmstat.c @@ -1203,6 +1203,10 @@ const char * const vmstat_text[] = { "pgscan_kswapd", "pgscan_direct", "pgscan_direct_throttle", + "pgscan_anon", + "pgscan_file", + "pgsteal_anon", + "pgsteal_file", #ifdef CONFIG_NUMA "zone_reclaim_failed", _ ^ permalink raw reply [flat|nested] 349+ messages in thread
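With the counters added above, the anon/file scan split becomes directly observable from /proc/vmstat. A hedged sketch of post-processing the output (the `pgscan_anon`/`pgscan_file` fields only exist on kernels carrying this patch; the parsing helper below is illustrative, not an existing tool):

```python
def reclaim_split(vmstat_text):
    """Parse /proc/vmstat-style text and return the (anon, file) shares
    of pages scanned, as fractions of the combined total."""
    stats = {}
    for line in vmstat_text.splitlines():
        name, _, value = line.partition(" ")
        if value.strip().isdigit():
            stats[name] = int(value)
    anon_scanned = stats.get("pgscan_anon", 0)
    file_scanned = stats.get("pgscan_file", 0)
    total = anon_scanned + file_scanned
    if total == 0:
        return (0.0, 0.0)
    return (anon_scanned / total, file_scanned / total)


# Example snapshot (made-up numbers, in the /proc/vmstat text format).
sample = """pgscan_kswapd 1000
pgscan_direct 200
pgscan_anon 300
pgscan_file 900
pgsteal_anon 250
pgsteal_file 850"""
```

On a real system one would read the text from `/proc/vmstat` and diff two snapshots taken over an interval, since the counters are cumulative.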
* [patch 105/131] mm: allow swappiness that prefers reclaiming anon over the file workingset 2020-06-03 22:55 incoming Andrew Morton ` (103 preceding siblings ...) 2020-06-03 23:02 ` [patch 104/131] mm: keep separate anon and file statistics on page reclaim activity Andrew Morton @ 2020-06-03 23:02 ` Andrew Morton 2020-06-03 23:02 ` [patch 106/131] mm: fold and remove lru_cache_add_anon() and lru_cache_add_file() Andrew Morton ` (26 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:02 UTC (permalink / raw) To: akpm, hannes, iamjoonsoo.kim, linux-mm, mhocko, minchan, mm-commits, riel, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: allow swappiness that prefers reclaiming anon over the file workingset With the advent of fast random IO devices (SSDs, PMEM) and in-memory swap devices such as zswap, it's possible for swap to be much faster than filesystems, and for swapping to be preferable over thrashing filesystem caches. Allow setting swappiness - which defines the rough relative IO cost of cache misses between page cache and swap-backed pages - to reflect such situations by making the swap-preferred range configurable. 
Link: http://lkml.kernel.org/r/20200520232525.798933-4-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Rik van Riel <riel@surriel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- Documentation/admin-guide/sysctl/vm.rst | 23 +++++++++++++++++----- kernel/sysctl.c | 3 +- mm/vmscan.c | 2 - 3 files changed, 21 insertions(+), 7 deletions(-) --- a/Documentation/admin-guide/sysctl/vm.rst~mm-allow-swappiness-that-prefers-reclaiming-anon-over-the-file-workingset +++ a/Documentation/admin-guide/sysctl/vm.rst @@ -831,14 +831,27 @@ tooling to work, you can do:: swappiness ========== -This control is used to define how aggressive the kernel will swap -memory pages. Higher values will increase aggressiveness, lower values -decrease the amount of swap. A value of 0 instructs the kernel not to -initiate swap until the amount of free and file-backed pages is less -than the high water mark in a zone. +This control is used to define the rough relative IO cost of swapping +and filesystem paging, as a value between 0 and 200. At 100, the VM +assumes equal IO cost and will thus apply memory pressure to the page +cache and swap-backed pages equally; lower values signify more +expensive swap IO, higher values indicates cheaper. + +Keep in mind that filesystem IO patterns under memory pressure tend to +be more efficient than swap's random IO. An optimal value will require +experimentation and will also be workload-dependent. The default value is 60. +For in-memory swap, like zram or zswap, as well as hybrid setups that +have swap on faster devices than the filesystem, values beyond 100 can +be considered. For example, if the random IO against the swap device +is on average 2x faster than IO from the filesystem, swappiness should +be 133 (x + 2x = 200, 2x = 133.33). 
+ +At 0, the kernel will not initiate swap until the amount of free and +file-backed pages is less than the high watermark in a zone. + unprivileged_userfaultfd ======================== --- a/kernel/sysctl.c~mm-allow-swappiness-that-prefers-reclaiming-anon-over-the-file-workingset +++ a/kernel/sysctl.c @@ -131,6 +131,7 @@ static unsigned long zero_ul; static unsigned long one_ul = 1; static unsigned long long_max = LONG_MAX; static int one_hundred = 100; +static int two_hundred = 200; static int one_thousand = 1000; #ifdef CONFIG_PRINTK static int ten_thousand = 10000; @@ -1391,7 +1392,7 @@ static struct ctl_table vm_table[] = { .mode = 0644, .proc_handler = proc_dointvec_minmax, .extra1 = SYSCTL_ZERO, - .extra2 = &one_hundred, + .extra2 = &two_hundred, }, #ifdef CONFIG_HUGETLB_PAGE { --- a/mm/vmscan.c~mm-allow-swappiness-that-prefers-reclaiming-anon-over-the-file-workingset +++ a/mm/vmscan.c @@ -161,7 +161,7 @@ struct scan_control { #endif /* - * From 0 .. 100. Higher means more swappy. + * From 0 .. 200. Higher means more swappy. */ int vm_swappiness = 60; /* _ ^ permalink raw reply [flat|nested] 349+ messages in thread
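The 133 in the changelog above follows from treating swappiness as a 0..200 split of relative IO cost: anon pressure is weighted `swappiness` and file pressure `200 - swappiness`, so if filesystem IO costs r times as much as swap IO, the two should be weighted r:1, i.e. swappiness = 200*r/(r+1). A small sketch of that arithmetic (illustrative only, not kernel code):

```python
def swappiness_for(io_cost_ratio):
    """Suggested vm.swappiness when filesystem IO costs `io_cost_ratio`
    times as much as swap IO. With anon_prio = swappiness and
    file_prio = 200 - swappiness, we want anon_prio / file_prio to
    equal the ratio, hence swappiness = 200 * r / (r + 1)."""
    return round(200 * io_cost_ratio / (io_cost_ratio + 1))


# Equal cost gives the midpoint of the new 0..200 range; swap being
# 2x faster than the filesystem gives the 133 from the changelog.
```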
* [patch 106/131] mm: fold and remove lru_cache_add_anon() and lru_cache_add_file() 2020-06-03 22:55 incoming Andrew Morton ` (104 preceding siblings ...) 2020-06-03 23:02 ` [patch 105/131] mm: allow swappiness that prefers reclaiming anon over the file workingset Andrew Morton @ 2020-06-03 23:02 ` Andrew Morton 2020-06-03 23:02 ` [patch 107/131] mm: workingset: let cache workingset challenge anon Andrew Morton ` (25 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:02 UTC (permalink / raw) To: akpm, hannes, iamjoonsoo.kim, linux-mm, mhocko, minchan, mm-commits, riel, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: fold and remove lru_cache_add_anon() and lru_cache_add_file() They're the same function, and for the purpose of all callers they are equivalent to lru_cache_add(). [akpm@linux-foundation.org: fix it for local_lock changes] Link: http://lkml.kernel.org/r/20200520232525.798933-5-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Rik van Riel <riel@surriel.com> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Minchan Kim <minchan@kernel.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- fs/cifs/file.c | 10 ++++----- fs/fuse/dev.c | 2 - include/linux/swap.h | 2 - mm/khugepaged.c | 8 +------ mm/memory.c | 2 - mm/shmem.c | 6 ++--- mm/swap.c | 42 +++++++++-------------------------------- mm/swap_state.c | 2 - 8 files changed, 23 insertions(+), 51 deletions(-) --- a/fs/cifs/file.c~mm-fold-and-remove-lru_cache_add_anon-and-lru_cache_add_file +++ a/fs/cifs/file.c @@ -4162,7 +4162,7 @@ cifs_readv_complete(struct work_struct * for (i = 0; i < rdata->nr_pages; i++) { struct page *page = rdata->pages[i]; - lru_cache_add_file(page); + lru_cache_add(page); if (rdata->result == 0 || (rdata->result == -EAGAIN && got_bytes)) { @@ -4232,7 +4232,7 @@ readpages_fill_pages(struct TCP_Server_I * fill them until the 
writes are flushed. */ zero_user(page, 0, PAGE_SIZE); - lru_cache_add_file(page); + lru_cache_add(page); flush_dcache_page(page); SetPageUptodate(page); unlock_page(page); @@ -4242,7 +4242,7 @@ readpages_fill_pages(struct TCP_Server_I continue; } else { /* no need to hold page hostage */ - lru_cache_add_file(page); + lru_cache_add(page); unlock_page(page); put_page(page); rdata->pages[i] = NULL; @@ -4437,7 +4437,7 @@ static int cifs_readpages(struct file *f /* best to give up if we're out of mem */ list_for_each_entry_safe(page, tpage, &tmplist, lru) { list_del(&page->lru); - lru_cache_add_file(page); + lru_cache_add(page); unlock_page(page); put_page(page); } @@ -4475,7 +4475,7 @@ static int cifs_readpages(struct file *f add_credits_and_wake_if(server, &rdata->credits, 0); for (i = 0; i < rdata->nr_pages; i++) { page = rdata->pages[i]; - lru_cache_add_file(page); + lru_cache_add(page); unlock_page(page); put_page(page); } --- a/fs/fuse/dev.c~mm-fold-and-remove-lru_cache_add_anon-and-lru_cache_add_file +++ a/fs/fuse/dev.c @@ -840,7 +840,7 @@ static int fuse_try_move_page(struct fus get_page(newpage); if (!(buf->flags & PIPE_BUF_FLAG_LRU)) - lru_cache_add_file(newpage); + lru_cache_add(newpage); err = 0; spin_lock(&cs->req->waitq.lock); --- a/include/linux/swap.h~mm-fold-and-remove-lru_cache_add_anon-and-lru_cache_add_file +++ a/include/linux/swap.h @@ -335,8 +335,6 @@ extern unsigned long nr_free_pagecache_p /* linux/mm/swap.c */ extern void lru_cache_add(struct page *); -extern void lru_cache_add_anon(struct page *page); -extern void lru_cache_add_file(struct page *page); extern void lru_add_page_tail(struct page *page, struct page *page_tail, struct lruvec *lruvec, struct list_head *head); extern void activate_page(struct page *); --- a/mm/khugepaged.c~mm-fold-and-remove-lru_cache_add_anon-and-lru_cache_add_file +++ a/mm/khugepaged.c @@ -1879,13 +1879,9 @@ xa_unlocked: SetPageUptodate(new_page); page_ref_add(new_page, HPAGE_PMD_NR - 1); - - if (is_shmem) { + if 
(is_shmem) set_page_dirty(new_page); - lru_cache_add_anon(new_page); - } else { - lru_cache_add_file(new_page); - } + lru_cache_add(new_page); /* * Remove pte page tables, so we can re-fault the page as huge. --- a/mm/memory.c~mm-fold-and-remove-lru_cache_add_anon-and-lru_cache_add_file +++ a/mm/memory.c @@ -3139,7 +3139,7 @@ vm_fault_t do_swap_page(struct vm_fault if (err) goto out_page; - lru_cache_add_anon(page); + lru_cache_add(page); swap_readpage(page, true); } } else { --- a/mm/shmem.c~mm-fold-and-remove-lru_cache_add_anon-and-lru_cache_add_file +++ a/mm/shmem.c @@ -1609,7 +1609,7 @@ static int shmem_replace_page(struct pag */ oldpage = newpage; } else { - lru_cache_add_anon(newpage); + lru_cache_add(newpage); *pagep = newpage; } @@ -1860,7 +1860,7 @@ alloc_nohuge: charge_mm); if (error) goto unacct; - lru_cache_add_anon(page); + lru_cache_add(page); spin_lock_irq(&info->lock); info->alloced += compound_nr(page); @@ -2376,7 +2376,7 @@ static int shmem_mfill_atomic_pte(struct if (!pte_none(*dst_pte)) goto out_release_unlock; - lru_cache_add_anon(page); + lru_cache_add(page); spin_lock_irq(&info->lock); info->alloced++; --- a/mm/swap.c~mm-fold-and-remove-lru_cache_add_anon-and-lru_cache_add_file +++ a/mm/swap.c @@ -424,37 +424,6 @@ void mark_page_accessed(struct page *pag } EXPORT_SYMBOL(mark_page_accessed); -static void __lru_cache_add(struct page *page) -{ - struct pagevec *pvec; - - local_lock(&lru_pvecs.lock); - pvec = this_cpu_ptr(&lru_pvecs.lru_add); - get_page(page); - if (!pagevec_add(pvec, page) || PageCompound(page)) - __pagevec_lru_add(pvec); - local_unlock(&lru_pvecs.lock); -} - -/** - * lru_cache_add_anon - add a page to the page lists - * @page: the page to add - */ -void lru_cache_add_anon(struct page *page) -{ - if (PageActive(page)) - ClearPageActive(page); - __lru_cache_add(page); -} - -void lru_cache_add_file(struct page *page) -{ - if (PageActive(page)) - ClearPageActive(page); - __lru_cache_add(page); -} -EXPORT_SYMBOL(lru_cache_add_file); 
- /** * lru_cache_add - add a page to a page list * @page: the page to be added to the LRU. @@ -466,10 +435,19 @@ EXPORT_SYMBOL(lru_cache_add_file); */ void lru_cache_add(struct page *page) { + struct pagevec *pvec; + VM_BUG_ON_PAGE(PageActive(page) && PageUnevictable(page), page); VM_BUG_ON_PAGE(PageLRU(page), page); - __lru_cache_add(page); + + get_page(page); + local_lock(&lru_pvecs.lock); + pvec = this_cpu_ptr(&lru_pvecs.lru_add); + if (!pagevec_add(pvec, page) || PageCompound(page)) + __pagevec_lru_add(pvec); + local_unlock(&lru_pvecs.lock); } +EXPORT_SYMBOL(lru_cache_add); /** * lru_cache_add_active_or_unevictable --- a/mm/swap_state.c~mm-fold-and-remove-lru_cache_add_anon-and-lru_cache_add_file +++ a/mm/swap_state.c @@ -442,7 +442,7 @@ struct page *__read_swap_cache_async(swp /* Caller will initiate read into locked page */ SetPageWorkingset(page); - lru_cache_add_anon(page); + lru_cache_add(page); *new_page_allocated = true; return page; _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 107/131] mm: workingset: let cache workingset challenge anon 2020-06-03 22:55 incoming Andrew Morton ` (105 preceding siblings ...) 2020-06-03 23:02 ` [patch 106/131] mm: fold and remove lru_cache_add_anon() and lru_cache_add_file() Andrew Morton @ 2020-06-03 23:02 ` Andrew Morton 2020-06-03 23:02 ` [patch 108/131] mm: remove use-once cache bias from LRU balancing Andrew Morton ` (24 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:02 UTC (permalink / raw) To: akpm, hannes, iamjoonsoo.kim, linux-mm, mhocko, minchan, mm-commits, riel, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: workingset: let cache workingset challenge anon We activate cache refaults with reuse distances in pages smaller than the size of the total cache. This allows new pages with competitive access frequencies to establish themselves, as well as challenge and potentially displace pages on the active list that have gone cold. However, that assumes that active cache can only replace other active cache in a competition for the hottest memory. This is not a great default assumption. The page cache might be thrashing while there are enough completely cold and unused anonymous pages sitting around that we'd only have to write to swap once to stop all IO from the cache. Activate cache refaults when their reuse distance in pages is smaller than the total userspace workingset, including anonymous pages. Reclaim can still decide how to balance pressure between the two LRUs depending on the IO situation. Rotational drives will prefer avoiding random IO from swap and go harder after cache. But fundamentally, hot cache should be able to compete with anon pages for a place in RAM. 
Link: http://lkml.kernel.org/r/20200520232525.798933-6-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Rik van Riel <riel@surriel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/workingset.c | 17 ++++++++++++----- 1 file changed, 12 insertions(+), 5 deletions(-) --- a/mm/workingset.c~mm-workingset-let-cache-workingset-challenge-anon +++ a/mm/workingset.c @@ -277,8 +277,8 @@ void workingset_refault(struct page *pag struct mem_cgroup *eviction_memcg; struct lruvec *eviction_lruvec; unsigned long refault_distance; + unsigned long workingset_size; struct pglist_data *pgdat; - unsigned long active_file; struct mem_cgroup *memcg; unsigned long eviction; struct lruvec *lruvec; @@ -310,7 +310,6 @@ void workingset_refault(struct page *pag goto out; eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat); refault = atomic_long_read(&eviction_lruvec->inactive_age); - active_file = lruvec_page_state(eviction_lruvec, NR_ACTIVE_FILE); /* * Calculate the refault distance @@ -345,10 +344,18 @@ void workingset_refault(struct page *pag /* * Compare the distance to the existing workingset size. We - * don't act on pages that couldn't stay resident even if all - * the memory was available to the page cache. + * don't activate pages that couldn't stay resident even if + * all the memory was available to the page cache. Whether + * cache can compete with anon or not depends on having swap. 
*/ - if (refault_distance > active_file) + workingset_size = lruvec_page_state(eviction_lruvec, NR_ACTIVE_FILE); + if (mem_cgroup_get_nr_swap_pages(memcg) > 0) { + workingset_size += lruvec_page_state(eviction_lruvec, + NR_INACTIVE_ANON); + workingset_size += lruvec_page_state(eviction_lruvec, + NR_ACTIVE_ANON); + } + if (refault_distance > workingset_size) goto out; SetPageActive(page); _ ^ permalink raw reply [flat|nested] 349+ messages in thread
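The activation rule after this change can be stated compactly: a refaulting cache page is activated iff its reuse distance fits within the whole userspace workingset, with anon only widening the workingset when swap is available. A simplified model of just the final comparison in `workingset_refault()` (the shadow-entry and eviction-counter machinery is omitted):

```python
def should_activate(refault_distance, active_file,
                    inactive_anon, active_anon, nr_swap_pages):
    """Mirrors the comparison at the end of workingset_refault() after the
    patch: anon pages only count toward the workingset if swap is usable
    (mem_cgroup_get_nr_swap_pages() > 0 in the kernel)."""
    workingset_size = active_file
    if nr_swap_pages > 0:
        workingset_size += inactive_anon + active_anon
    # The kernel bails out ("goto out") when the distance exceeds the
    # workingset; otherwise it sets PageActive.
    return refault_distance <= workingset_size
```

Before the patch the bar was `active_file` alone, so a refault distance of 1500 against 1000 active file pages would never activate; with 800 anon pages and swap present, it now does.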
* [patch 108/131] mm: remove use-once cache bias from LRU balancing 2020-06-03 22:55 incoming Andrew Morton ` (106 preceding siblings ...) 2020-06-03 23:02 ` [patch 107/131] mm: workingset: let cache workingset challenge anon Andrew Morton @ 2020-06-03 23:02 ` Andrew Morton 2020-06-03 23:02 ` [patch 109/131] mm: vmscan: drop unnecessary div0 avoidance rounding in get_scan_count() Andrew Morton ` (23 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:02 UTC (permalink / raw) To: akpm, hannes, iamjoonsoo.kim, linux-mm, mhocko, minchan, mm-commits, riel, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: remove use-once cache bias from LRU balancing When the splitlru patches divided page cache and swap-backed pages into separate LRU lists, the pressure balance between the lists was biased to account for the fact that streaming IO can cause memory pressure with a flood of pages that are used only once. New page cache additions would tip the balance toward the file LRU, and repeat access would neutralize that bias again. This ensured that page reclaim would always go for used-once cache first. Since e9868505987a ("mm,vmscan: only evict file pages when we have plenty"), page reclaim generally skips over swap-backed memory entirely as long as there is used-once cache present, and will apply the LRU balancing when only repeatedly accessed cache pages are left - at which point the previous use-once bias will have been neutralized. This makes the use-once cache balancing bias unnecessary. 
Link: http://lkml.kernel.org/r/20200520232525.798933-7-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Minchan Kim <minchan@kernel.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Rik van Riel <riel@surriel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/swap.c | 5 ----- 1 file changed, 5 deletions(-) --- a/mm/swap.c~mm-remove-use-once-cache-bias-from-lru-balancing +++ a/mm/swap.c @@ -293,7 +293,6 @@ static void __activate_page(struct page void *arg) { if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) { - int file = page_is_file_lru(page); int lru = page_lru_base_type(page); del_page_from_lru_list(page, lruvec, lru); @@ -303,7 +302,6 @@ static void __activate_page(struct page trace_mm_lru_activate(page); __count_vm_event(PGACTIVATE); - update_page_reclaim_stat(lruvec, file, 1, hpage_nr_pages(page)); } } @@ -975,9 +973,6 @@ static void __pagevec_lru_add_fn(struct if (page_evictable(page)) { lru = page_lru(page); - update_page_reclaim_stat(lruvec, is_file_lru(lru), - PageActive(page), - hpage_nr_pages(page)); if (was_unevictable) count_vm_event(UNEVICTABLE_PGRESCUED); } else { _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 109/131] mm: vmscan: drop unnecessary div0 avoidance rounding in get_scan_count() 2020-06-03 22:55 incoming Andrew Morton ` (107 preceding siblings ...) 2020-06-03 23:02 ` [patch 108/131] mm: remove use-once cache bias from LRU balancing Andrew Morton @ 2020-06-03 23:02 ` Andrew Morton 2020-06-03 23:02 ` [patch 110/131] mm: base LRU balancing on an explicit cost model Andrew Morton ` (22 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:02 UTC (permalink / raw) To: akpm, hannes, iamjoonsoo.kim, linux-mm, mhocko, minchan, mm-commits, riel, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: vmscan: drop unnecessary div0 avoidance rounding in get_scan_count()

When we calculate the relative scan pressure between the anon and file LRU lists, we have to assume that reclaim_stat can contain zeroes. To avoid div0 crashes, we add 1 to all denominators like so:

	anon_prio = swappiness;
	file_prio = 200 - anon_prio;

	[...]

	/*
	 * The amount of pressure on anon vs file pages is inversely
	 * proportional to the fraction of recently scanned pages on
	 * each list that were recently referenced and in active use.
	 */
	ap = anon_prio * (reclaim_stat->recent_scanned[0] + 1);
	ap /= reclaim_stat->recent_rotated[0] + 1;

	fp = file_prio * (reclaim_stat->recent_scanned[1] + 1);
	fp /= reclaim_stat->recent_rotated[1] + 1;
	spin_unlock_irq(&pgdat->lru_lock);

	fraction[0] = ap;
	fraction[1] = fp;
	denominator = ap + fp + 1;

While reclaim_stat can contain 0, it's not actually possible for ap + fp to be 0. One of anon_prio or file_prio could be zero, but they must still add up to 200. And the reclaim_stat fraction, due to the +1 in there, is always at least 1. So if one of the two numerators is 0, the other one can't be. ap + fp is always at least 1. Drop the + 1. 
Link: http://lkml.kernel.org/r/20200520232525.798933-8-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Rik van Riel <riel@surriel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/vmscan.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) --- a/mm/vmscan.c~mm-vmscan-drop-unnecessary-div0-avoidance-rounding-in-get_scan_count +++ a/mm/vmscan.c @@ -2348,7 +2348,7 @@ static void get_scan_count(struct lruvec fraction[0] = ap; fraction[1] = fp; - denominator = ap + fp + 1; + denominator = ap + fp; out: for_each_evictable_lru(lru) { int file = is_file_lru(lru); _ ^ permalink raw reply [flat|nested] 349+ messages in thread
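The claim that `ap + fp` can never reach zero is easy to check exhaustively over the whole swappiness range with C-style integer division. A brute-force sketch with small counter values (a userspace re-statement of the arithmetic in `get_scan_count()`, assuming the invariant that rotated pages are a subset of scanned pages):

```python
def scan_fractions(swappiness, scanned, rotated):
    """Recompute ap/fp with truncating division, as the C code does."""
    anon_prio = swappiness          # 0..200 after the previous patch
    file_prio = 200 - anon_prio
    ap = anon_prio * (scanned[0] + 1) // (rotated[0] + 1)
    fp = file_prio * (scanned[1] + 1) // (rotated[1] + 1)
    return ap, fp


# Since anon_prio + file_prio == 200 and each (scanned + 1) // (rotated + 1)
# factor is >= 1 whenever rotated <= scanned, the denominator ap + fp never
# drops below 200 - so the defensive "+ 1" really is unnecessary.
min_sum = min(
    sum(scan_fractions(s, (sc, sc), (ra, rf)))
    for s in range(201)
    for sc in range(4)
    for ra in range(sc + 1)
    for rf in range(sc + 1)
)
```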
* [patch 110/131] mm: base LRU balancing on an explicit cost model 2020-06-03 22:55 incoming Andrew Morton ` (108 preceding siblings ...) 2020-06-03 23:02 ` [patch 109/131] mm: vmscan: drop unnecessary div0 avoidance rounding in get_scan_count() Andrew Morton @ 2020-06-03 23:02 ` Andrew Morton 2020-06-03 23:02 ` [patch 111/131] mm: deactivations shouldn't bias the LRU balance Andrew Morton ` (21 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:02 UTC (permalink / raw) To: akpm, hannes, iamjoonsoo.kim, linux-mm, mhocko, minchan, mm-commits, riel, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: base LRU balancing on an explicit cost model Currently, scan pressure between the anon and file LRU lists is balanced based on a mixture of reclaim efficiency and a somewhat vague notion of "value" of having certain pages in memory over others. That concept of value is problematic, because it has caused us to count any event that remotely makes one LRU list more or less preferable for reclaim, even when these events are not directly comparable and impose very different costs on the system. One example is referenced file pages that we still deactivate and referenced anonymous pages that we actually rotate back to the head of the list. There is also conceptual overlap with the LRU algorithm itself. By rotating recently used pages instead of reclaiming them, the algorithm already biases the applied scan pressure based on page value. Thus, when rebalancing scan pressure due to rotations, we should think of reclaim cost, and leave assessing the page value to the LRU algorithm. Lastly, considering both value-increasing as well as value-decreasing events can sometimes cause the same type of event to be counted twice, i.e. how rotating a page increases the LRU value, while reclaiming it successfully decreases the value. 
In itself this will balance out fine, but it quietly skews the impact of events that are only recorded once. The abstract metric of "value", the murky relationship with the LRU algorithm, and accounting both negative and positive events make the current pressure balancing model hard to reason about and modify. This patch switches to a balancing model of accounting the concrete, actually observed cost of reclaiming one LRU over another. For now, that cost includes pages that are scanned but rotated back to the list head. Subsequent patches will add consideration for IO caused by refaulting of recently evicted pages. Replace struct zone_reclaim_stat with two cost counters in the lruvec, and make everything that affects cost go through a new lru_note_cost() function. Link: http://lkml.kernel.org/r/20200520232525.798933-9-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Rik van Riel <riel@surriel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/mmzone.h | 21 ++++++------------ include/linux/swap.h | 2 + mm/memcontrol.c | 18 +++++---------- mm/swap.c | 21 +++++++----------- mm/vmscan.c | 44 +++++++++++++++++++-------------------- 5 files changed, 46 insertions(+), 60 deletions(-) --- a/include/linux/mmzone.h~mm-base-lru-balancing-on-an-explicit-cost-model +++ a/include/linux/mmzone.h @@ -242,19 +242,6 @@ static inline bool is_active_lru(enum lr return (lru == LRU_ACTIVE_ANON || lru == LRU_ACTIVE_FILE); } -struct zone_reclaim_stat { - /* - * The pageout code in vmscan.c keeps track of how many of the - * mem/swap backed and file backed pages are referenced. - * The higher the rotated/scanned ratio, the more valuable - * that cache is. 
- * - * The anon LRU stats live in [0], file LRU stats in [1] - */ - unsigned long recent_rotated[2]; - unsigned long recent_scanned[2]; -}; - enum lruvec_flags { LRUVEC_CONGESTED, /* lruvec has many dirty pages * backed by a congested BDI @@ -263,7 +250,13 @@ enum lruvec_flags { struct lruvec { struct list_head lists[NR_LRU_LISTS]; - struct zone_reclaim_stat reclaim_stat; + /* + * These track the cost of reclaiming one LRU - file or anon - + * over the other. As the observed cost of reclaiming one LRU + * increases, the reclaim scan balance tips toward the other. + */ + unsigned long anon_cost; + unsigned long file_cost; /* Evictions & activations on the inactive file list */ atomic_long_t inactive_age; /* Refaults at the time of last reclaim cycle */ --- a/include/linux/swap.h~mm-base-lru-balancing-on-an-explicit-cost-model +++ a/include/linux/swap.h @@ -334,6 +334,8 @@ extern unsigned long nr_free_pagecache_p /* linux/mm/swap.c */ +extern void lru_note_cost(struct lruvec *lruvec, bool file, + unsigned int nr_pages); extern void lru_cache_add(struct page *); extern void lru_add_page_tail(struct page *page, struct page *page_tail, struct lruvec *lruvec, struct list_head *head); --- a/mm/memcontrol.c~mm-base-lru-balancing-on-an-explicit-cost-model +++ a/mm/memcontrol.c @@ -3853,23 +3853,17 @@ static int memcg_stat_show(struct seq_fi { pg_data_t *pgdat; struct mem_cgroup_per_node *mz; - struct zone_reclaim_stat *rstat; - unsigned long recent_rotated[2] = {0, 0}; - unsigned long recent_scanned[2] = {0, 0}; + unsigned long anon_cost = 0; + unsigned long file_cost = 0; for_each_online_pgdat(pgdat) { mz = mem_cgroup_nodeinfo(memcg, pgdat->node_id); - rstat = &mz->lruvec.reclaim_stat; - recent_rotated[0] += rstat->recent_rotated[0]; - recent_rotated[1] += rstat->recent_rotated[1]; - recent_scanned[0] += rstat->recent_scanned[0]; - recent_scanned[1] += rstat->recent_scanned[1]; + anon_cost += mz->lruvec.anon_cost; + file_cost += mz->lruvec.file_cost; } - seq_printf(m, 
"recent_rotated_anon %lu\n", recent_rotated[0]); - seq_printf(m, "recent_rotated_file %lu\n", recent_rotated[1]); - seq_printf(m, "recent_scanned_anon %lu\n", recent_scanned[0]); - seq_printf(m, "recent_scanned_file %lu\n", recent_scanned[1]); + seq_printf(m, "anon_cost %lu\n", anon_cost); + seq_printf(m, "file_cost %lu\n", file_cost); } #endif --- a/mm/swap.c~mm-base-lru-balancing-on-an-explicit-cost-model +++ a/mm/swap.c @@ -278,15 +278,12 @@ void rotate_reclaimable_page(struct page } } -static void update_page_reclaim_stat(struct lruvec *lruvec, - int file, int rotated, - unsigned int nr_pages) -{ - struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat; - - reclaim_stat->recent_scanned[file] += nr_pages; - if (rotated) - reclaim_stat->recent_rotated[file] += nr_pages; +void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_pages) +{ + if (file) + lruvec->file_cost += nr_pages; + else + lruvec->anon_cost += nr_pages; } static void __activate_page(struct page *page, struct lruvec *lruvec, @@ -541,7 +538,7 @@ static void lru_deactivate_file_fn(struc if (active) __count_vm_event(PGDEACTIVATE); - update_page_reclaim_stat(lruvec, file, 0, hpage_nr_pages(page)); + lru_note_cost(lruvec, !file, hpage_nr_pages(page)); } static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec, @@ -557,7 +554,7 @@ static void lru_deactivate_fn(struct pag add_page_to_lru_list(page, lruvec, lru); __count_vm_events(PGDEACTIVATE, hpage_nr_pages(page)); - update_page_reclaim_stat(lruvec, file, 0, hpage_nr_pages(page)); + lru_note_cost(lruvec, !file, hpage_nr_pages(page)); } } @@ -582,7 +579,7 @@ static void lru_lazyfree_fn(struct page __count_vm_events(PGLAZYFREE, hpage_nr_pages(page)); count_memcg_page_event(page, PGLAZYFREE); - update_page_reclaim_stat(lruvec, 1, 0, hpage_nr_pages(page)); + lru_note_cost(lruvec, 0, hpage_nr_pages(page)); } } --- a/mm/vmscan.c~mm-base-lru-balancing-on-an-explicit-cost-model +++ a/mm/vmscan.c @@ -1916,7 +1916,6 @@ 
shrink_inactive_list(unsigned long nr_to bool file = is_file_lru(lru); enum vm_event_item item; struct pglist_data *pgdat = lruvec_pgdat(lruvec); - struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat; bool stalled = false; while (unlikely(too_many_isolated(pgdat, file, sc))) { @@ -1940,7 +1939,6 @@ shrink_inactive_list(unsigned long nr_to &nr_scanned, sc, lru); __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, nr_taken); - reclaim_stat->recent_scanned[file] += nr_taken; item = current_is_kswapd() ? PGSCAN_KSWAPD : PGSCAN_DIRECT; if (!cgroup_reclaim(sc)) __count_vm_events(item, nr_scanned); @@ -1960,8 +1958,12 @@ shrink_inactive_list(unsigned long nr_to move_pages_to_lru(lruvec, &page_list); __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken); - reclaim_stat->recent_rotated[0] += stat.nr_activate[0]; - reclaim_stat->recent_rotated[1] += stat.nr_activate[1]; + /* + * Rotating pages costs CPU without actually + * progressing toward the reclaim goal. + */ + lru_note_cost(lruvec, 0, stat.nr_activate[0]); + lru_note_cost(lruvec, 1, stat.nr_activate[1]); item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT; if (!cgroup_reclaim(sc)) __count_vm_events(item, nr_reclaimed); @@ -2013,7 +2015,6 @@ static void shrink_active_list(unsigned LIST_HEAD(l_active); LIST_HEAD(l_inactive); struct page *page; - struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat; unsigned nr_deactivate, nr_activate; unsigned nr_rotated = 0; int file = is_file_lru(lru); @@ -2027,7 +2028,6 @@ static void shrink_active_list(unsigned &nr_scanned, sc, lru); __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, nr_taken); - reclaim_stat->recent_scanned[file] += nr_taken; __count_vm_events(PGREFILL, nr_scanned); __count_memcg_events(lruvec_memcg(lruvec), PGREFILL, nr_scanned); @@ -2085,7 +2085,7 @@ static void shrink_active_list(unsigned * helps balance scan pressure between file and anonymous pages in * get_scan_count. 
*/ - reclaim_stat->recent_rotated[file] += nr_rotated; + lru_note_cost(lruvec, file, nr_rotated); nr_activate = move_pages_to_lru(lruvec, &l_active); nr_deactivate = move_pages_to_lru(lruvec, &l_inactive); @@ -2242,13 +2242,13 @@ static void get_scan_count(struct lruvec { struct mem_cgroup *memcg = lruvec_memcg(lruvec); int swappiness = mem_cgroup_swappiness(memcg); - struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat; u64 fraction[2]; u64 denominator = 0; /* gcc */ struct pglist_data *pgdat = lruvec_pgdat(lruvec); unsigned long anon_prio, file_prio; enum scan_balance scan_balance; unsigned long anon, file; + unsigned long totalcost; unsigned long ap, fp; enum lru_list lru; @@ -2324,26 +2324,26 @@ static void get_scan_count(struct lruvec lruvec_lru_size(lruvec, LRU_INACTIVE_FILE, MAX_NR_ZONES); spin_lock_irq(&pgdat->lru_lock); - if (unlikely(reclaim_stat->recent_scanned[0] > anon / 4)) { - reclaim_stat->recent_scanned[0] /= 2; - reclaim_stat->recent_rotated[0] /= 2; - } - - if (unlikely(reclaim_stat->recent_scanned[1] > file / 4)) { - reclaim_stat->recent_scanned[1] /= 2; - reclaim_stat->recent_rotated[1] /= 2; + totalcost = lruvec->anon_cost + lruvec->file_cost; + if (unlikely(totalcost > (anon + file) / 4)) { + lruvec->anon_cost /= 2; + lruvec->file_cost /= 2; + totalcost /= 2; } /* * The amount of pressure on anon vs file pages is inversely - * proportional to the fraction of recently scanned pages on - * each list that were recently referenced and in active use. + * proportional to the assumed cost of reclaiming each list, + * as determined by the share of pages that are likely going + * to refault or rotate on each list (recently referenced), + * times the relative IO cost of bringing back a swapped out + * anonymous page vs reloading a filesystem page (swappiness). 
*/ - ap = anon_prio * (reclaim_stat->recent_scanned[0] + 1); - ap /= reclaim_stat->recent_rotated[0] + 1; + ap = anon_prio * (totalcost + 1); + ap /= lruvec->anon_cost + 1; - fp = file_prio * (reclaim_stat->recent_scanned[1] + 1); - fp /= reclaim_stat->recent_rotated[1] + 1; + fp = file_prio * (totalcost + 1); + fp /= lruvec->file_cost + 1; spin_unlock_irq(&pgdat->lru_lock); fraction[0] = ap; _ ^ permalink raw reply [flat|nested] 349+ messages in thread
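[Editor's note: the scan-balance arithmetic introduced by this patch can be sketched as a small standalone userspace model. This is a hypothetical illustration, not the kernel code itself; the flat `struct lruvec_costs` and the function name are invented for the example, and the decay threshold and fraction formulas mirror the hunks above.]

```c
#include <assert.h>

/* Flat model of the cost-based scan balance. anon_cost/file_cost
 * accumulate observed reclaim cost events for each LRU list. */
struct lruvec_costs {
	unsigned long anon_cost;
	unsigned long file_cost;
};

/* Compute the scan fractions ap (anon) and fp (file).
 * swappiness is 0..200; at 100, anon and file IO cost are equal. */
static void scan_balance(struct lruvec_costs *lv,
			 unsigned long anon, unsigned long file,
			 unsigned long swappiness,
			 unsigned long *ap, unsigned long *fp)
{
	unsigned long anon_prio = swappiness;
	unsigned long file_prio = 200 - anon_prio;
	unsigned long totalcost = lv->anon_cost + lv->file_cost;

	/* Decay old cost events once they exceed a quarter of the
	 * LRU size, so the sums form a floating average that weighs
	 * recent cost more than old cost. */
	if (totalcost > (anon + file) / 4) {
		lv->anon_cost /= 2;
		lv->file_cost /= 2;
		totalcost /= 2;
	}

	/* Pressure on each list is inversely proportional to that
	 * list's share of the observed reclaim cost. */
	*ap = anon_prio * (totalcost + 1) / (lv->anon_cost + 1);
	*fp = file_prio * (totalcost + 1) / (lv->file_cost + 1);
}
```

With equal swappiness, the list that has accumulated more cost (rotations in this patch, refaults and swapins in the later ones) ends up with a proportionally smaller scan fraction, so reclaim leans on the cheaper list.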
* [patch 111/131] mm: deactivations shouldn't bias the LRU balance 2020-06-03 22:55 incoming Andrew Morton ` (109 preceding siblings ...) 2020-06-03 23:02 ` [patch 110/131] mm: base LRU balancing on an explicit cost model Andrew Morton @ 2020-06-03 23:02 ` Andrew Morton 2020-06-03 23:03 ` [patch 112/131] mm: only count actual rotations as LRU reclaim cost Andrew Morton ` (20 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:02 UTC (permalink / raw) To: akpm, cai, hannes, iamjoonsoo.kim, linux-mm, mhocko, minchan, mm-commits, riel, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: deactivations shouldn't bias the LRU balance Operations like MADV_FREE, FADV_DONTNEED etc. currently move any affected active pages to the inactive list to accelerate their reclaim (good) but also steer page reclaim toward that LRU type, or away from the other (bad). The reason why this is undesirable is that such operations are not part of the regular page aging cycle, and are rather a fluke that doesn't say much about the remaining pages on that list; they might all be in heavy use, and once the chunk of easy victims has been purged, the VM continues to apply elevated pressure on those remaining hot pages. The other LRU, meanwhile, might have easily reclaimable pages, and there was never a need to steer away from it in the first place. As the previous patch outlined, we should focus on recording actually observed cost to steer the balance rather than speculating about the potential value of one LRU list over the other. In that spirit, leave explicitly deactivated pages to the LRU algorithm to pick up, and let rotations decide which list is the easiest to reclaim.

[cai@lca.pw: fix set-but-not-used warning] Link: http://lkml.kernel.org/r/20200522133335.GA624@Qians-MacBook-Air.local Link: http://lkml.kernel.org/r/20200520232525.798933-10-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Minchan Kim <minchan@kernel.org> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Rik van Riel <riel@surriel.com> Cc: Qian Cai <cai@lca.pw> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/swap.c | 7 +------ 1 file changed, 1 insertion(+), 6 deletions(-) --- a/mm/swap.c~mm-deactivations-shouldnt-bias-the-lru-balance +++ a/mm/swap.c @@ -498,7 +498,7 @@ void lru_cache_add_active_or_unevictable static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec, void *arg) { - int lru, file; + int lru; bool active; if (!PageLRU(page)) @@ -512,7 +512,6 @@ static void lru_deactivate_file_fn(struc return; active = PageActive(page); - file = page_is_file_lru(page); lru = page_lru_base_type(page); del_page_from_lru_list(page, lruvec, lru + active); @@ -538,14 +537,12 @@ static void lru_deactivate_file_fn(struc if (active) __count_vm_event(PGDEACTIVATE); - lru_note_cost(lruvec, !file, hpage_nr_pages(page)); } static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec, void *arg) { if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) { - int file = page_is_file_lru(page); int lru = page_lru_base_type(page); del_page_from_lru_list(page, lruvec, lru + LRU_ACTIVE); @@ -554,7 +551,6 @@ static void lru_deactivate_fn(struct pag add_page_to_lru_list(page, lruvec, lru); __count_vm_events(PGDEACTIVATE, hpage_nr_pages(page)); - lru_note_cost(lruvec, !file, hpage_nr_pages(page)); } } @@ -579,7 +575,6 @@ static void lru_lazyfree_fn(struct page __count_vm_events(PGLAZYFREE, hpage_nr_pages(page)); count_memcg_page_event(page, PGLAZYFREE); - lru_note_cost(lruvec, 0, hpage_nr_pages(page)); } } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 112/131] mm: only count actual rotations as LRU reclaim cost 2020-06-03 22:55 incoming Andrew Morton ` (110 preceding siblings ...) 2020-06-03 23:02 ` [patch 111/131] mm: deactivations shouldn't bias the LRU balance Andrew Morton @ 2020-06-03 23:03 ` Andrew Morton 2020-06-03 23:03 ` [patch 113/131] mm: balance LRU lists based on relative thrashing Andrew Morton ` (19 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:03 UTC (permalink / raw) To: akpm, hannes, iamjoonsoo.kim, linux-mm, mhocko, minchan, mm-commits, riel, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: only count actual rotations as LRU reclaim cost When shrinking the active file list we rotate referenced pages only when they're in an executable mapping. The others get deactivated. When it comes to balancing scan pressure, though, we count all referenced pages as rotated, even the deactivated ones. Yet they do not carry the same cost to the system: the deactivated page *might* refault later on, but the deactivation is tangible progress toward freeing pages; rotations on the other hand cost time and effort without getting any closer to freeing memory. Don't treat both events as equal. The following patch will hook up LRU balancing to cache and anon refaults, which are a much more concrete cost signal for reclaiming one list over the other. Thus, remove the maybe-IO cost bias from page references, and only note the CPU cost for actual rotations that prevent the pages from getting reclaimed. 
Link: http://lkml.kernel.org/r/20200520232525.798933-11-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Minchan Kim <minchan@kernel.org> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Rik van Riel <riel@surriel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/vmscan.c | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) --- a/mm/vmscan.c~mm-only-count-actual-rotations-as-lru-reclaim-cost +++ a/mm/vmscan.c @@ -2054,7 +2054,6 @@ static void shrink_active_list(unsigned if (page_referenced(page, 0, sc->target_mem_cgroup, &vm_flags)) { - nr_rotated += hpage_nr_pages(page); /* * Identify referenced, file-backed active pages and * give them one more trip around the active list. So @@ -2065,6 +2064,7 @@ static void shrink_active_list(unsigned * so we ignore them here. */ if ((vm_flags & VM_EXEC) && page_is_file_lru(page)) { + nr_rotated += hpage_nr_pages(page); list_add(&page->lru, &l_active); continue; } @@ -2080,10 +2080,8 @@ static void shrink_active_list(unsigned */ spin_lock_irq(&pgdat->lru_lock); /* - * Count referenced pages from currently used mappings as rotated, - * even though only some of them are actually re-activated. This - * helps balance scan pressure between file and anonymous pages in - * get_scan_count. + * Rotating pages costs CPU without actually + * progressing toward the reclaim goal. */ lru_note_cost(lruvec, file, nr_rotated); _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 113/131] mm: balance LRU lists based on relative thrashing 2020-06-03 22:55 incoming Andrew Morton ` (111 preceding siblings ...) 2020-06-03 23:03 ` [patch 112/131] mm: only count actual rotations as LRU reclaim cost Andrew Morton @ 2020-06-03 23:03 ` Andrew Morton 2020-06-09 9:15 ` Alex Shi 2020-06-03 23:03 ` [patch 114/131] mm: vmscan: determine anon/file pressure balance at the reclaim root Andrew Morton ` (18 subsequent siblings) 131 siblings, 1 reply; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:03 UTC (permalink / raw) To: akpm, hannes, iamjoonsoo.kim, linux-mm, mhocko, minchan, mm-commits, riel, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: balance LRU lists based on relative thrashing Since the LRUs were split into anon and file lists, the VM has been balancing between page cache and anonymous pages based on per-list ratios of scanned vs. rotated pages. In most cases that tips page reclaim towards the list that is easier to reclaim and has the fewest actively used pages, but there are a few problems with it: 1. Refaults and LRU rotations are weighted the same way, even though one costs IO and the other costs a bit of CPU. 2. The less we scan an LRU list based on already observed rotations, the more we increase the sampling interval for new references, and rotations become even more likely on that list. This can enter a death spiral in which we stop looking at one list completely until the other one is all but annihilated by page reclaim. Since commit a528910e12ec ("mm: thrash detection-based file cache sizing") we have refault detection for the page cache. Along with swapin events, they are good indicators of when the file or anon list, respectively, is too small for its workingset and needs to grow. For example, if the page cache is thrashing, the cache pages need more time in memory, while there may be colder pages on the anonymous list. 
Likewise, if swapped pages are faulting back in, it indicates that we reclaim anonymous pages too aggressively and should back off. Replace LRU rotations with refaults and swapins as the basis for relative reclaim cost of the two LRUs. This will have the VM target list balances that incur the least amount of IO on aggregate. Link: http://lkml.kernel.org/r/20200520232525.798933-12-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Rik van Riel <riel@surriel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/swap.h | 3 +-- mm/swap.c | 11 +++++++---- mm/swap_state.c | 5 +++++ mm/vmscan.c | 39 ++++++++++----------------------------- mm/workingset.c | 4 ++++ 5 files changed, 27 insertions(+), 35 deletions(-) --- a/include/linux/swap.h~mm-balance-lru-lists-based-on-relative-thrashing +++ a/include/linux/swap.h @@ -334,8 +334,7 @@ extern unsigned long nr_free_pagecache_p /* linux/mm/swap.c */ -extern void lru_note_cost(struct lruvec *lruvec, bool file, - unsigned int nr_pages); +extern void lru_note_cost(struct page *); extern void lru_cache_add(struct page *); extern void lru_add_page_tail(struct page *page, struct page *page_tail, struct lruvec *lruvec, struct list_head *head); --- a/mm/swap.c~mm-balance-lru-lists-based-on-relative-thrashing +++ a/mm/swap.c @@ -278,12 +278,15 @@ void rotate_reclaimable_page(struct page } } -void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_pages) +void lru_note_cost(struct page *page) { - if (file) - lruvec->file_cost += nr_pages; + struct lruvec *lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page)); + + /* Record new data point */ + if (page_is_file_lru(page)) + lruvec->file_cost++; else - lruvec->anon_cost += nr_pages; + lruvec->anon_cost++; } static void __activate_page(struct page *page, struct lruvec *lruvec, --- 
a/mm/swap_state.c~mm-balance-lru-lists-based-on-relative-thrashing +++ a/mm/swap_state.c @@ -440,6 +440,11 @@ struct page *__read_swap_cache_async(swp goto fail_unlock; } + /* XXX: Move to lru_cache_add() when it supports new vs putback */ + spin_lock_irq(&page_pgdat(page)->lru_lock); + lru_note_cost(page); + spin_unlock_irq(&page_pgdat(page)->lru_lock); + /* Caller will initiate read into locked page */ SetPageWorkingset(page); lru_cache_add(page); --- a/mm/vmscan.c~mm-balance-lru-lists-based-on-relative-thrashing +++ a/mm/vmscan.c @@ -1958,12 +1958,6 @@ shrink_inactive_list(unsigned long nr_to move_pages_to_lru(lruvec, &page_list); __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken); - /* - * Rotating pages costs CPU without actually - * progressing toward the reclaim goal. - */ - lru_note_cost(lruvec, 0, stat.nr_activate[0]); - lru_note_cost(lruvec, 1, stat.nr_activate[1]); item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT; if (!cgroup_reclaim(sc)) __count_vm_events(item, nr_reclaimed); @@ -2079,11 +2073,6 @@ static void shrink_active_list(unsigned * Move pages back to the lru list. */ spin_lock_irq(&pgdat->lru_lock); - /* - * Rotating pages costs CPU without actually - * progressing toward the reclaim goal. - */ - lru_note_cost(lruvec, file, nr_rotated); nr_activate = move_pages_to_lru(lruvec, &l_active); nr_deactivate = move_pages_to_lru(lruvec, &l_inactive); @@ -2298,22 +2287,23 @@ static void get_scan_count(struct lruvec scan_balance = SCAN_FRACT; /* - * With swappiness at 100, anonymous and file have the same priority. - * This scanning priority is essentially the inverse of IO cost. + * Calculate the pressure balance between anon and file pages. 
+ * + * The amount of pressure we put on each LRU is inversely + * proportional to the cost of reclaiming each list, as + * determined by the share of pages that are refaulting, times + * the relative IO cost of bringing back a swapped out + * anonymous page vs reloading a filesystem page (swappiness). + * + * With swappiness at 100, anon and file have equal IO cost. */ anon_prio = swappiness; file_prio = 200 - anon_prio; /* - * OK, so we have swap space and a fair amount of page cache - * pages. We use the recently rotated / recently scanned - * ratios to determine how valuable each cache is. - * * Because workloads change over time (and to avoid overflow) * we keep these statistics as a floating average, which ends - * up weighing recent references more than old ones. - * - * anon in [0], file in [1] + * up weighing recent refaults more than old ones. */ anon = lruvec_lru_size(lruvec, LRU_ACTIVE_ANON, MAX_NR_ZONES) + @@ -2328,15 +2318,6 @@ static void get_scan_count(struct lruvec lruvec->file_cost /= 2; totalcost /= 2; } - - /* - * The amount of pressure on anon vs file pages is inversely - * proportional to the assumed cost of reclaiming each list, - * as determined by the share of pages that are likely going - * to refault or rotate on each list (recently referenced), - * times the relative IO cost of bringing back a swapped out - * anonymous page vs reloading a filesystem page (swappiness). 
- */ ap = anon_prio * (totalcost + 1); ap /= lruvec->anon_cost + 1; --- a/mm/workingset.c~mm-balance-lru-lists-based-on-relative-thrashing +++ a/mm/workingset.c @@ -365,6 +365,10 @@ void workingset_refault(struct page *pag /* Page was active prior to eviction */ if (workingset) { SetPageWorkingset(page); + /* XXX: Move to lru_cache_add() when it supports new vs putback */ + spin_lock_irq(&page_pgdat(page)->lru_lock); + lru_note_cost(page); + spin_unlock_irq(&page_pgdat(page)->lru_lock); inc_lruvec_state(lruvec, WORKINGSET_RESTORE); } out: _ ^ permalink raw reply [flat|nested] 349+ messages in thread
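[Editor's note: a minimal userspace sketch of the new cost signal introduced here. The names are hypothetical and the counters are flattened out of `struct lruvec`; in the kernel, the increments happen under `lru_lock` via `lru_note_cost()` from the refault and swapin paths shown above. Each refault or swapin counts one cost event against the list the page belongs to, and with swappiness at 100 the scan pressure then tips toward whichever list is not thrashing.]

```c
#include <assert.h>

struct costs {
	unsigned long anon_cost;
	unsigned long file_cost;
};

/* One cost event per page cache refault (file) or swapin (anon). */
static void note_cost_event(struct costs *c, int page_is_file)
{
	if (page_is_file)
		c->file_cost++;	/* cache refault: file list is thrashing */
	else
		c->anon_cost++;	/* swapin: anon list is thrashing */
}

/* Assuming equal IO cost for both lists (swappiness 100), returns
 * nonzero when the balance favors scanning anon over file. */
static int prefer_anon(const struct costs *c)
{
	unsigned long total = c->anon_cost + c->file_cost;
	unsigned long ap = 100 * (total + 1) / (c->anon_cost + 1);
	unsigned long fp = 100 * (total + 1) / (c->file_cost + 1);

	return ap > fp;
}
```

A thrashing page cache (many file cost events) drives pressure onto the anon list, and vice versa, which is exactly the convergence behavior the commit message describes.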
* Re: [patch 113/131] mm: balance LRU lists based on relative thrashing 2020-06-03 23:03 ` [patch 113/131] mm: balance LRU lists based on relative thrashing Andrew Morton @ 2020-06-09 9:15 ` Alex Shi 2020-06-09 14:45 ` Johannes Weiner 0 siblings, 1 reply; 349+ messages in thread From: Alex Shi @ 2020-06-09 9:15 UTC (permalink / raw) To: linux-kernel, akpm, hannes, iamjoonsoo.kim, linux-mm, mhocko, minchan, mm-commits, riel, torvalds On 2020/6/4 7:03 AM, Andrew Morton wrote: > > + /* XXX: Move to lru_cache_add() when it supports new vs putback */ Hi Hannes, Sorry for a bit lost, would you like to explain a bit more of your idea here? > + spin_lock_irq(&page_pgdat(page)->lru_lock); > + lru_note_cost(page); > + spin_unlock_irq(&page_pgdat(page)->lru_lock); > + What could we see here w/o the lru_lock? Thanks Alex ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: [patch 113/131] mm: balance LRU lists based on relative thrashing 2020-06-09 9:15 ` Alex Shi @ 2020-06-09 14:45 ` Johannes Weiner 2020-06-10 5:23 ` Joonsoo Kim 0 siblings, 1 reply; 349+ messages in thread From: Johannes Weiner @ 2020-06-09 14:45 UTC (permalink / raw) To: Alex Shi Cc: linux-kernel, akpm, iamjoonsoo.kim, linux-mm, mhocko, minchan, mm-commits, riel, torvalds On Tue, Jun 09, 2020 at 05:15:33PM +0800, Alex Shi wrote: > > > On 2020/6/4 7:03 AM, Andrew Morton wrote: > > > > + /* XXX: Move to lru_cache_add() when it supports new vs putback */ > > Hi Hannes, > > Sorry for a bit lost, would you like to explain a bit more of your idea here? > > > + spin_lock_irq(&page_pgdat(page)->lru_lock); > > + lru_note_cost(page); > > + spin_unlock_irq(&page_pgdat(page)->lru_lock); > > + > > > What could we see here w/o the lru_lock? It'll just be part of the existing LRU locking in pagevec_lru_move_fn(), when the new pages are added to the LRU in batch. See this older patch for example: https://lore.kernel.org/linux-mm/20160606194836.3624-6-hannes@cmpxchg.org/ I didn't include it in this series to reduce conflict with Joonsoo's WIP series that also operates in this area and does something similar: https://lkml.org/lkml/2020/4/3/63 ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: [patch 113/131] mm: balance LRU lists based on relative thrashing 2020-06-09 14:45 ` Johannes Weiner @ 2020-06-10 5:23 ` Joonsoo Kim 2020-06-11 3:28 ` Alex Shi 0 siblings, 1 reply; 349+ messages in thread From: Joonsoo Kim @ 2020-06-10 5:23 UTC (permalink / raw) To: Johannes Weiner Cc: Alex Shi, LKML, Andrew Morton, Joonsoo Kim, Linux Memory Management List, Michal Hocko, 김민찬, mm-commits, Rik van Riel, Linus Torvalds On Tue, Jun 9, 2020 at 11:46 PM, Johannes Weiner <hannes@cmpxchg.org> wrote: > > On Tue, Jun 09, 2020 at 05:15:33PM +0800, Alex Shi wrote: > > > > > > On 2020/6/4 7:03 AM, Andrew Morton wrote: > > > > > > + /* XXX: Move to lru_cache_add() when it supports new vs putback */ > > > > Hi Hannes, > > > > Sorry for a bit lost, would you like to explain a bit more of your idea here? > > > > > + spin_lock_irq(&page_pgdat(page)->lru_lock); > > > + lru_note_cost(page); > > > + spin_unlock_irq(&page_pgdat(page)->lru_lock); > > > + > > > > > > What could we see here w/o the lru_lock? > > It'll just be part of the existing LRU locking in > pagevec_lru_move_fn(), when the new pages are added to the LRU in > batch. See this older patch for example: > > https://lore.kernel.org/linux-mm/20160606194836.3624-6-hannes@cmpxchg.org/ > > I didn't include it in this series to reduce conflict with Joonsoo's > WIP series that also operates in this area and does something similar: Thanks! > https://lkml.org/lkml/2020/4/3/63 I haven't completed the rebase of my series but I guess that referenced patch "https://lkml.org/lkml/2020/4/3/63" would be removed in the next version. Before the I/O cost model, a new anonymous page contributed to the LRU reclaim balance. But, now, a new anonymous page doesn't contribute to the I/O cost so this adjusting patch would not be needed anymore. If anyone wants to change this part, "/* XXX: Move to lru_cache_add() when it supports new vs putback */", feel free to do it. Thanks. ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: [patch 113/131] mm: balance LRU lists based on relative thrashing 2020-06-10 5:23 ` Joonsoo Kim @ 2020-06-11 3:28 ` Alex Shi 0 siblings, 0 replies; 349+ messages in thread From: Alex Shi @ 2020-06-11 3:28 UTC (permalink / raw) To: Joonsoo Kim, Johannes Weiner Cc: LKML, Andrew Morton, Joonsoo Kim, Linux Memory Management List, Michal Hocko, 김민찬, mm-commits, Rik van Riel, Linus Torvalds On 2020/6/10 1:23 PM, Joonsoo Kim wrote: > On Tue, Jun 9, 2020 at 11:46 PM, Johannes Weiner <hannes@cmpxchg.org> wrote: >> >> On Tue, Jun 09, 2020 at 05:15:33PM +0800, Alex Shi wrote: >>> >>> >>> On 2020/6/4 7:03 AM, Andrew Morton wrote: >>>> >>>> + /* XXX: Move to lru_cache_add() when it supports new vs putback */ >>> >>> Hi Hannes, >>> >>> Sorry for a bit lost, would you like to explain a bit more of your idea here? >>> >>>> + spin_lock_irq(&page_pgdat(page)->lru_lock); >>>> + lru_note_cost(page); >>>> + spin_unlock_irq(&page_pgdat(page)->lru_lock); >>>> + >>> >>> >>> What could we see here w/o the lru_lock? The reason I want to know what lru_lock protects here is that we currently have 5 LRU lists guarded by only one lock, which causes a lot of contention when different apps are active on a server. I guess we originally had only one lru_lock because 5 locks would cause cacheline bouncing if we put them together, or waste a bit of cacheline space if we separated them. But now that we have qspinlock, each CPU will just spin on its own cacheline without interfering with others, which would greatly relieve the performance drop from cacheline bouncing. And we could use page.mapping bits to store the index of the LRU list the page is on. As a quick thought, I guess that besides the 5 locks for the 5 lists, we would still need 1 more lock for common lruvec data or for other things that rely on lru_lock now, like mlock, hpage_nr_pages.. That's the reason I want to know everything under lru_lock. :) Any comments on this idea?
:) Thanks Alex >> >> It'll just be part of the existing LRU locking in >> pagevec_lru_move_fn(), when the new pages are added to the LRU in >> batch. See this older patch for example: >> >> https://lore.kernel.org/linux-mm/20160606194836.3624-6-hannes@cmpxchg.org/ >> >> I didn't include it in this series to reduce conflict with Joonsoo's >> WIP series that also operates in this area and does something similar: > > Thanks! > >> https://lkml.org/lkml/2020/4/3/63 > > I haven't completed the rebase of my series but I guess that referenced patch > "https://lkml.org/lkml/2020/4/3/63" would be removed in the next version. Thanks a lot for the info, Johannes & Joonsoo! A long history for an interesting idea. :) > > Before the I/O cost model, a new anonymous page contributed to the LRU reclaim > balance. But, now, a new anonymous page doesn't contribute to the I/O cost > so this adjusting patch would not be needed anymore. > > If anyone wants to change this part, > "/* XXX: Move to lru_cache_add() when it supports new vs putback */", feel free > to do it. ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 114/131] mm: vmscan: determine anon/file pressure balance at the reclaim root 2020-06-03 22:55 incoming Andrew Morton ` (112 preceding siblings ...) 2020-06-03 23:03 ` [patch 113/131] mm: balance LRU lists based on relative thrashing Andrew Morton @ 2020-06-03 23:03 ` Andrew Morton 2020-06-03 23:03 ` [patch 115/131] mm: vmscan: reclaim writepage is IO cost Andrew Morton ` (17 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:03 UTC (permalink / raw) To: akpm, hannes, iamjoonsoo.kim, linux-mm, mhocko, minchan, mm-commits, riel, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: vmscan: determine anon/file pressure balance at the reclaim root We split the LRU lists into anon and file, and we rebalance the scan pressure between them when one of them begins thrashing: if the file cache experiences workingset refaults, we increase the pressure on anonymous pages; if the workload is stalled on swapins, we increase the pressure on the file cache instead. With cgroups and their nested LRU lists, we currently don't do this correctly. While recursive cgroup reclaim establishes a relative LRU order among the pages of all involved cgroups, LRU pressure balancing is done on an individual cgroup LRU level. As a result, when one cgroup is thrashing on the filesystem cache while a sibling may have cold anonymous pages, pressure doesn't get equalized between them. This patch moves LRU balancing decision to the root of reclaim - the same level where the LRU order is established. It does this by tracking LRU cost recursively, so that every level of the cgroup tree knows the aggregate LRU cost of all memory within its domain. When the page scanner calculates the scan balance for any given individual cgroup's LRU list, it uses the values from the ancestor cgroup that initiated the reclaim cycle. 
If one sibling is then thrashing on the cache, it will tip the pressure balance inside its ancestors, and the next hierarchical reclaim iteration will go more after the anon pages in the tree. Link: http://lkml.kernel.org/r/20200520232525.798933-13-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Rik van Riel <riel@surriel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/memcontrol.h | 13 +++++++++++ mm/swap.c | 32 +++++++++++++++++++++++---- mm/vmscan.c | 41 ++++++++++++++--------------------- 3 files changed, 57 insertions(+), 29 deletions(-) --- a/include/linux/memcontrol.h~mm-vmscan-determine-anon-file-pressure-balance-at-the-reclaim-root +++ a/include/linux/memcontrol.h @@ -1242,6 +1242,19 @@ static inline void dec_lruvec_page_state mod_lruvec_page_state(page, idx, -1); } +static inline struct lruvec *parent_lruvec(struct lruvec *lruvec) +{ + struct mem_cgroup *memcg; + + memcg = lruvec_memcg(lruvec); + if (!memcg) + return NULL; + memcg = parent_mem_cgroup(memcg); + if (!memcg) + return NULL; + return mem_cgroup_lruvec(memcg, lruvec_pgdat(lruvec)); +} + #ifdef CONFIG_CGROUP_WRITEBACK struct wb_domain *mem_cgroup_wb_domain(struct bdi_writeback *wb); --- a/mm/swap.c~mm-vmscan-determine-anon-file-pressure-balance-at-the-reclaim-root +++ a/mm/swap.c @@ -282,11 +282,33 @@ void lru_note_cost(struct page *page) { struct lruvec *lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page)); - /* Record new data point */ - if (page_is_file_lru(page)) - lruvec->file_cost++; - else - lruvec->anon_cost++; + do { + unsigned long lrusize; + + /* Record cost event */ + if (page_is_file_lru(page)) + lruvec->file_cost++; + else + lruvec->anon_cost++; + + /* + * Decay previous events + * + * Because workloads change over time (and to avoid + * overflow) we keep these statistics as a floating + * average, which ends 
up weighing recent refaults + * more than old ones. + */ + lrusize = lruvec_page_state(lruvec, NR_INACTIVE_ANON) + + lruvec_page_state(lruvec, NR_ACTIVE_ANON) + + lruvec_page_state(lruvec, NR_INACTIVE_FILE) + + lruvec_page_state(lruvec, NR_ACTIVE_FILE); + + if (lruvec->file_cost + lruvec->anon_cost > lrusize / 4) { + lruvec->file_cost /= 2; + lruvec->anon_cost /= 2; + } + } while ((lruvec = parent_lruvec(lruvec))); } static void __activate_page(struct page *page, struct lruvec *lruvec, --- a/mm/vmscan.c~mm-vmscan-determine-anon-file-pressure-balance-at-the-reclaim-root +++ a/mm/vmscan.c @@ -79,6 +79,12 @@ struct scan_control { */ struct mem_cgroup *target_mem_cgroup; + /* + * Scan pressure balancing between anon and file LRUs + */ + unsigned long anon_cost; + unsigned long file_cost; + /* Can active pages be deactivated as part of reclaim? */ #define DEACTIVATE_ANON 1 #define DEACTIVATE_FILE 2 @@ -2231,10 +2237,8 @@ static void get_scan_count(struct lruvec int swappiness = mem_cgroup_swappiness(memcg); u64 fraction[2]; u64 denominator = 0; /* gcc */ - struct pglist_data *pgdat = lruvec_pgdat(lruvec); unsigned long anon_prio, file_prio; enum scan_balance scan_balance; - unsigned long anon, file; unsigned long totalcost; unsigned long ap, fp; enum lru_list lru; @@ -2285,7 +2289,6 @@ static void get_scan_count(struct lruvec } scan_balance = SCAN_FRACT; - /* * Calculate the pressure balance between anon and file pages. * @@ -2300,30 +2303,12 @@ static void get_scan_count(struct lruvec anon_prio = swappiness; file_prio = 200 - anon_prio; - /* - * Because workloads change over time (and to avoid overflow) - * we keep these statistics as a floating average, which ends - * up weighing recent refaults more than old ones. 
- */ - - anon = lruvec_lru_size(lruvec, LRU_ACTIVE_ANON, MAX_NR_ZONES) + - lruvec_lru_size(lruvec, LRU_INACTIVE_ANON, MAX_NR_ZONES); - file = lruvec_lru_size(lruvec, LRU_ACTIVE_FILE, MAX_NR_ZONES) + - lruvec_lru_size(lruvec, LRU_INACTIVE_FILE, MAX_NR_ZONES); - - spin_lock_irq(&pgdat->lru_lock); - totalcost = lruvec->anon_cost + lruvec->file_cost; - if (unlikely(totalcost > (anon + file) / 4)) { - lruvec->anon_cost /= 2; - lruvec->file_cost /= 2; - totalcost /= 2; - } + totalcost = sc->anon_cost + sc->file_cost; ap = anon_prio * (totalcost + 1); - ap /= lruvec->anon_cost + 1; + ap /= sc->anon_cost + 1; fp = file_prio * (totalcost + 1); - fp /= lruvec->file_cost + 1; - spin_unlock_irq(&pgdat->lru_lock); + fp /= sc->file_cost + 1; fraction[0] = ap; fraction[1] = fp; @@ -2688,6 +2673,14 @@ again: nr_scanned = sc->nr_scanned; /* + * Determine the scan balance between anon and file LRUs. + */ + spin_lock_irq(&pgdat->lru_lock); + sc->anon_cost = target_lruvec->anon_cost; + sc->file_cost = target_lruvec->file_cost; + spin_unlock_irq(&pgdat->lru_lock); + + /* * Target desirable inactive:active list ratios for the anon * and file LRU lists. */ _ ^ permalink raw reply [flat|nested] 349+ messages in thread
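[Editor's note: the recursive cost accounting in this patch can be sketched as a toy userspace model. This is a hypothetical illustration; `struct lv` and `lrusize` are stand-ins (the kernel walks `parent_lruvec()` and sums the four `NR_{IN,}ACTIVE_{ANON,FILE}` counters), but the propagation and decay logic mirror the `lru_note_cost()` hunk above: every cost event walks up the hierarchy so each level knows the aggregate cost of all memory in its domain.]

```c
#include <assert.h>
#include <stddef.h>

struct lv {
	struct lv *parent;	/* NULL at the reclaim root */
	unsigned long anon_cost;
	unsigned long file_cost;
	unsigned long lrusize;	/* stand-in for the summed LRU page counts */
};

static void note_cost(struct lv *lruvec, int page_is_file)
{
	do {
		/* Record the cost event at this level */
		if (page_is_file)
			lruvec->file_cost++;
		else
			lruvec->anon_cost++;

		/* Decay toward a floating average once recorded cost
		 * exceeds a quarter of this level's LRU size. */
		if (lruvec->anon_cost + lruvec->file_cost >
		    lruvec->lrusize / 4) {
			lruvec->anon_cost /= 2;
			lruvec->file_cost /= 2;
		}
	} while ((lruvec = lruvec->parent));
}
```

Reclaim then reads `anon_cost`/`file_cost` from the lruvec of the cgroup that initiated the reclaim cycle, so a thrashing sibling shifts the pressure balance for the whole subtree rather than just for its own LRU lists.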
* [patch 115/131] mm: vmscan: reclaim writepage is IO cost 2020-06-03 22:55 incoming Andrew Morton ` (113 preceding siblings ...) 2020-06-03 23:03 ` [patch 114/131] mm: vmscan: determine anon/file pressure balance at the reclaim root Andrew Morton @ 2020-06-03 23:03 ` Andrew Morton 2020-06-03 23:03 ` [patch 116/131] mm: vmscan: limit the range of LRU type balancing Andrew Morton ` (16 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:03 UTC (permalink / raw) To: akpm, hannes, iamjoonsoo.kim, linux-mm, mhocko, minchan, mm-commits, riel, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: vmscan: reclaim writepage is IO cost The VM tries to balance reclaim pressure between anon and file so as to reduce the amount of IO incurred due to the memory shortage. It already counts refaults and swapins, but in addition it should also count writepage calls during reclaim. For swap, this is obvious: it's IO that wouldn't have occurred if the anonymous memory hadn't been under memory pressure. From a relative balancing point of view this makes sense as well: even if anon is cold and reclaimable, a cache that isn't thrashing may have equally cold pages that don't require IO to reclaim. For file writeback, it's trickier: some of the reclaim writepage IO would have likely occurred anyway due to dirty expiration. But not all of it - premature writeback reduces batching and generates additional writes. Since the flushers are already woken up by the time the VM starts writing cache pages one by one, let's assume that we're likely causing writes that wouldn't have happened without memory pressure. In addition, the per-page cost of IO would have probably been much cheaper if written in larger batches from the flusher thread rather than the single-page-writes from kswapd. 
For our purposes - getting the trend right to accelerate convergence on a stable state that doesn't require paging at all - this is sufficiently accurate. If we later wanted to optimize for sustained thrashing, we can still refine the measurements. Count all writepage calls from kswapd as IO cost toward the LRU that the page belongs to. Why do this dynamically? Don't we know in advance that anon pages require IO to reclaim, and so could build in a static bias? First, scanning is not the same as reclaiming. If all the anon pages are referenced, we may not swap for a while just because we're scanning the anon list. During this time, however, it's important that we age anonymous memory and the page cache at the same rate so that their hot-cold gradients are comparable. Everything else being equal, we still want to reclaim the coldest memory overall. Second, we keep copies in swap unless the page changes. If there is swap-backed data that's mostly read (tmpfs file) and has been swapped out before, we can reclaim it without incurring additional IO. 
Link: http://lkml.kernel.org/r/20200520232525.798933-14-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Rik van Riel <riel@surriel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/swap.h | 4 +++- include/linux/vmstat.h | 1 + mm/swap.c | 16 ++++++++++------ mm/swap_state.c | 2 +- mm/vmscan.c | 3 +++ mm/workingset.c | 2 +- 6 files changed, 19 insertions(+), 9 deletions(-) --- a/include/linux/swap.h~mm-vmscan-reclaim-writepage-is-io-cost +++ a/include/linux/swap.h @@ -334,7 +334,9 @@ extern unsigned long nr_free_pagecache_p /* linux/mm/swap.c */ -extern void lru_note_cost(struct page *); +extern void lru_note_cost(struct lruvec *lruvec, bool file, + unsigned int nr_pages); +extern void lru_note_cost_page(struct page *); extern void lru_cache_add(struct page *); extern void lru_add_page_tail(struct page *page, struct page *page_tail, struct lruvec *lruvec, struct list_head *head); --- a/include/linux/vmstat.h~mm-vmscan-reclaim-writepage-is-io-cost +++ a/include/linux/vmstat.h @@ -26,6 +26,7 @@ struct reclaim_stat { unsigned nr_congested; unsigned nr_writeback; unsigned nr_immediate; + unsigned nr_pageout; unsigned nr_activate[2]; unsigned nr_ref_keep; unsigned nr_unmap_fail; --- a/mm/swap.c~mm-vmscan-reclaim-writepage-is-io-cost +++ a/mm/swap.c @@ -278,18 +278,16 @@ void rotate_reclaimable_page(struct page } } -void lru_note_cost(struct page *page) +void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_pages) { - struct lruvec *lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page)); - do { unsigned long lrusize; /* Record cost event */ - if (page_is_file_lru(page)) - lruvec->file_cost++; + if (file) + lruvec->file_cost += nr_pages; else - lruvec->anon_cost++; + lruvec->anon_cost += nr_pages; /* * Decay previous events @@ -311,6 +309,12 @@ void lru_note_cost(struct page *page) } while 
((lruvec = parent_lruvec(lruvec))); } +void lru_note_cost_page(struct page *page) +{ + lru_note_cost(mem_cgroup_page_lruvec(page, page_pgdat(page)), + page_is_file_lru(page), hpage_nr_pages(page)); +} + static void __activate_page(struct page *page, struct lruvec *lruvec, void *arg) { --- a/mm/swap_state.c~mm-vmscan-reclaim-writepage-is-io-cost +++ a/mm/swap_state.c @@ -442,7 +442,7 @@ struct page *__read_swap_cache_async(swp /* XXX: Move to lru_cache_add() when it supports new vs putback */ spin_lock_irq(&page_pgdat(page)->lru_lock); - lru_note_cost(page); + lru_note_cost_page(page); spin_unlock_irq(&page_pgdat(page)->lru_lock); /* Caller will initiate read into locked page */ --- a/mm/vmscan.c~mm-vmscan-reclaim-writepage-is-io-cost +++ a/mm/vmscan.c @@ -1359,6 +1359,8 @@ static unsigned int shrink_page_list(str case PAGE_ACTIVATE: goto activate_locked; case PAGE_SUCCESS: + stat->nr_pageout += hpage_nr_pages(page); + if (PageWriteback(page)) goto keep; if (PageDirty(page)) @@ -1964,6 +1966,7 @@ shrink_inactive_list(unsigned long nr_to move_pages_to_lru(lruvec, &page_list); __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken); + lru_note_cost(lruvec, file, stat.nr_pageout); item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT; if (!cgroup_reclaim(sc)) __count_vm_events(item, nr_reclaimed); --- a/mm/workingset.c~mm-vmscan-reclaim-writepage-is-io-cost +++ a/mm/workingset.c @@ -367,7 +367,7 @@ void workingset_refault(struct page *pag SetPageWorkingset(page); /* XXX: Move to lru_cache_add() when it supports new vs putback */ spin_lock_irq(&page_pgdat(page)->lru_lock); - lru_note_cost(page); + lru_note_cost_page(page); spin_unlock_irq(&page_pgdat(page)->lru_lock); inc_lruvec_state(lruvec, WORKINGSET_RESTORE); } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
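The decay that the reworked lru_note_cost() retains — halving both cost counters once their sum exceeds a quarter of the LRU size, so recent IO cost outweighs old cost — can be modeled with a small user-space sketch. This is an illustrative stand-in, not the kernel function: the struct and the omission of the memcg-hierarchy walk are simplifications.

```c
#include <assert.h>

/* Hypothetical stand-in for the cost fields of struct lruvec. */
struct costs {
	unsigned long anon_cost;
	unsigned long file_cost;
};

/*
 * Simplified model of lru_note_cost(): charge nr_pages of IO cost to
 * one list, then age both counters by halving them whenever their sum
 * exceeds a quarter of the LRU size.  (The real function also walks up
 * the memcg hierarchy via parent_lruvec(); that part is omitted here.)
 */
static void note_cost(struct costs *c, int file, unsigned long nr_pages,
		      unsigned long lrusize)
{
	if (file)
		c->file_cost += nr_pages;
	else
		c->anon_cost += nr_pages;

	if (c->file_cost + c->anon_cost > lrusize / 4) {
		c->file_cost /= 2;
		c->anon_cost /= 2;
	}
}
```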
* [patch 116/131] mm: vmscan: limit the range of LRU type balancing 2020-06-03 22:55 incoming Andrew Morton ` (114 preceding siblings ...) 2020-06-03 23:03 ` [patch 115/131] mm: vmscan: reclaim writepage is IO cost Andrew Morton @ 2020-06-03 23:03 ` Andrew Morton 2020-06-03 23:03 ` [patch 117/131] mm: swap: fix vmstats for huge pages Andrew Morton ` (15 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:03 UTC (permalink / raw) To: akpm, hannes, iamjoonsoo.kim, linux-mm, mhocko, minchan, mm-commits, riel, torvalds From: Johannes Weiner <hannes@cmpxchg.org> Subject: mm: vmscan: limit the range of LRU type balancing When LRU cost only shows up on one list, we abruptly stop scanning that list altogether. That's an extreme reaction: by the time the other list starts thrashing and the pendulum swings back, we may have no recent age information on the first list anymore, and we could have significant latencies until the scanner has caught up. Soften this change in the feedback system by ensuring that no list receives less than a third of overall pressure, and only distribute the other 66% according to LRU cost. This ensures that we maintain a minimum rate of aging on the entire workingset while it's being pressured, while still allowing a generous rate of convergence when the relative sizes of the lists need to adjust. 
Link: http://lkml.kernel.org/r/20200520232525.798933-15-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Rik van Riel <riel@surriel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/vmscan.c | 22 +++++++++++++--------- 1 file changed, 13 insertions(+), 9 deletions(-) --- a/mm/vmscan.c~mm-vmscan-limit-the-range-of-lru-type-balancing +++ a/mm/vmscan.c @@ -2237,12 +2237,11 @@ static void get_scan_count(struct lruvec unsigned long *nr) { struct mem_cgroup *memcg = lruvec_memcg(lruvec); + unsigned long anon_cost, file_cost, total_cost; int swappiness = mem_cgroup_swappiness(memcg); u64 fraction[2]; u64 denominator = 0; /* gcc */ - unsigned long anon_prio, file_prio; enum scan_balance scan_balance; - unsigned long totalcost; unsigned long ap, fp; enum lru_list lru; @@ -2301,17 +2300,22 @@ static void get_scan_count(struct lruvec * the relative IO cost of bringing back a swapped out * anonymous page vs reloading a filesystem page (swappiness). * + * Although we limit that influence to ensure no list gets + * left behind completely: at least a third of the pressure is + * applied, before swappiness. + * * With swappiness at 100, anon and file have equal IO cost. */ - anon_prio = swappiness; - file_prio = 200 - anon_prio; + total_cost = sc->anon_cost + sc->file_cost; + anon_cost = total_cost + sc->anon_cost; + file_cost = total_cost + sc->file_cost; + total_cost = anon_cost + file_cost; - totalcost = sc->anon_cost + sc->file_cost; - ap = anon_prio * (totalcost + 1); - ap /= sc->anon_cost + 1; + ap = swappiness * (total_cost + 1); + ap /= anon_cost + 1; - fp = file_prio * (totalcost + 1); - fp /= sc->file_cost + 1; + fp = (200 - swappiness) * (total_cost + 1); + fp /= file_cost + 1; fraction[0] = ap; fraction[1] = fp; _ ^ permalink raw reply [flat|nested] 349+ messages in thread
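The clamping effect described above — no list ever dropping below roughly a third of the pressure — can be checked with a hypothetical user-space sketch of the new arithmetic (illustrative names, not the kernel API):

```c
#include <assert.h>

/*
 * Sketch of get_scan_count() after this patch: each list's cost is
 * inflated by the combined cost of both lists, which bounds its share
 * of the total between 1/3 and 2/3.  As a result, neither list's scan
 * pressure collapses to zero even when the other list has incurred all
 * of the recent IO cost.
 */
static void limited_fractions(unsigned long sc_anon_cost,
			      unsigned long sc_file_cost,
			      unsigned int swappiness,
			      unsigned long long *ap, unsigned long long *fp)
{
	unsigned long total_cost = sc_anon_cost + sc_file_cost;
	unsigned long anon_cost = total_cost + sc_anon_cost;
	unsigned long file_cost = total_cost + sc_file_cost;

	total_cost = anon_cost + file_cost;

	*ap = (unsigned long long)swappiness * (total_cost + 1)
		/ (anon_cost + 1);
	*fp = (unsigned long long)(200 - swappiness) * (total_cost + 1)
		/ (file_cost + 1);
}
```

With swappiness 100 and all recent cost on the file list, pressure splits roughly 2:1 in favor of anon instead of starving the file list completely.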
* [patch 117/131] mm: swap: fix vmstats for huge pages 2020-06-03 22:55 incoming Andrew Morton ` (115 preceding siblings ...) 2020-06-03 23:03 ` [patch 116/131] mm: vmscan: limit the range of LRU type balancing Andrew Morton @ 2020-06-03 23:03 ` Andrew Morton 2020-06-03 23:03 ` [patch 118/131] mm: swap: memcg: fix memcg stats " Andrew Morton ` (14 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:03 UTC (permalink / raw) To: akpm, hannes, linux-mm, mm-commits, shakeelb, torvalds From: Shakeel Butt <shakeelb@google.com> Subject: mm: swap: fix vmstats for huge pages Many of the callbacks called by pagevec_lru_move_fn() do not correctly update the vmstats for huge pages. Fix that. Also, __pagevec_lru_add_fn() uses the irq-unsafe alternative to update the stat as the irqs are already disabled. Link: http://lkml.kernel.org/r/20200527182916.249910-1-shakeelb@google.com Signed-off-by: Shakeel Butt <shakeelb@google.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/swap.c | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) --- a/mm/swap.c~mm-swap-fix-vmstats-for-huge-pages +++ a/mm/swap.c @@ -241,7 +241,7 @@ static void pagevec_move_tail_fn(struct del_page_from_lru_list(page, lruvec, page_lru(page)); ClearPageActive(page); add_page_to_lru_list_tail(page, lruvec, page_lru(page)); - (*pgmoved)++; + (*pgmoved) += hpage_nr_pages(page); } } @@ -327,7 +327,7 @@ static void __activate_page(struct page add_page_to_lru_list(page, lruvec, lru); trace_mm_lru_activate(page); - __count_vm_event(PGACTIVATE); + __count_vm_events(PGACTIVATE, hpage_nr_pages(page)); } } @@ -529,6 +529,7 @@ static void lru_deactivate_file_fn(struc { int lru; bool active; + int nr_pages = hpage_nr_pages(page); if (!PageLRU(page)) return; @@ -561,11 +562,11 @@ * We moves tha page into tail of inactive. 
*/ add_page_to_lru_list_tail(page, lruvec, lru); - __count_vm_event(PGROTATED); + __count_vm_events(PGROTATED, nr_pages); } if (active) - __count_vm_event(PGDEACTIVATE); + __count_vm_events(PGDEACTIVATE, nr_pages); } static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec, @@ -960,6 +961,7 @@ static void __pagevec_lru_add_fn(struct { enum lru_list lru; int was_unevictable = TestClearPageUnevictable(page); + int nr_pages = hpage_nr_pages(page); VM_BUG_ON_PAGE(PageLRU(page), page); @@ -995,13 +997,13 @@ static void __pagevec_lru_add_fn(struct if (page_evictable(page)) { lru = page_lru(page); if (was_unevictable) - count_vm_event(UNEVICTABLE_PGRESCUED); + __count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages); } else { lru = LRU_UNEVICTABLE; ClearPageActive(page); SetPageUnevictable(page); if (!was_unevictable) - count_vm_event(UNEVICTABLE_PGCULLED); + __count_vm_events(UNEVICTABLE_PGCULLED, nr_pages); } add_page_to_lru_list(page, lruvec, lru); _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 118/131] mm: swap: memcg: fix memcg stats for huge pages 2020-06-03 22:55 incoming Andrew Morton ` (116 preceding siblings ...) 2020-06-03 23:03 ` [patch 117/131] mm: swap: fix vmstats for huge pages Andrew Morton @ 2020-06-03 23:03 ` Andrew Morton 2020-06-03 23:03 ` [patch 119/131] tools/vm/page_owner_sort.c: filter out unneeded line Andrew Morton ` (13 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:03 UTC (permalink / raw) To: akpm, hannes, linux-mm, mm-commits, shakeelb, torvalds From: Shakeel Butt <shakeelb@google.com> Subject: mm: swap: memcg: fix memcg stats for huge pages Commit 2262185c5b28 ("mm: per-cgroup memory reclaim stats") added PGLAZYFREE, PGACTIVATE & PGDEACTIVATE stats for cgroups but missed a couple of places, and PGLAZYFREE missed huge page handling. Fix that. Also, for PGLAZYFREE, use the irq-unsafe function to update as the irq is already disabled. Link: http://lkml.kernel.org/r/20200527182947.251343-1-shakeelb@google.com Fixes: 2262185c5b28 ("mm: per-cgroup memory reclaim stats") Signed-off-by: Shakeel Butt <shakeelb@google.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/swap.c | 21 ++++++++++++++++----- 1 file changed, 16 insertions(+), 5 deletions(-) --- a/mm/swap.c~mm-swap-memcg-fix-memcg-stats-for-huge-pages +++ a/mm/swap.c @@ -320,6 +320,7 @@ static void __activate_page(struct page { if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) { int lru = page_lru_base_type(page); + int nr_pages = hpage_nr_pages(page); del_page_from_lru_list(page, lruvec, lru); SetPageActive(page); @@ -327,7 +328,9 @@ static void __activate_page(struct page add_page_to_lru_list(page, lruvec, lru); trace_mm_lru_activate(page); - __count_vm_events(PGACTIVATE, hpage_nr_pages(page)); + __count_vm_events(PGACTIVATE, nr_pages); + __count_memcg_events(lruvec_memcg(lruvec), PGACTIVATE, + nr_pages); } } @@ -565,8 +568,11 @@ 
static void lru_deactivate_file_fn(struc __count_vm_events(PGROTATED, nr_pages); } - if (active) + if (active) { __count_vm_events(PGDEACTIVATE, nr_pages); + __count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, + nr_pages); + } } static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec, @@ -574,13 +580,16 @@ static void lru_deactivate_fn(struct pag { if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) { int lru = page_lru_base_type(page); + int nr_pages = hpage_nr_pages(page); del_page_from_lru_list(page, lruvec, lru + LRU_ACTIVE); ClearPageActive(page); ClearPageReferenced(page); add_page_to_lru_list(page, lruvec, lru); - __count_vm_events(PGDEACTIVATE, hpage_nr_pages(page)); + __count_vm_events(PGDEACTIVATE, nr_pages); + __count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, + nr_pages); } } @@ -590,6 +599,7 @@ static void lru_lazyfree_fn(struct page if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) && !PageSwapCache(page) && !PageUnevictable(page)) { bool active = PageActive(page); + int nr_pages = hpage_nr_pages(page); del_page_from_lru_list(page, lruvec, LRU_INACTIVE_ANON + active); @@ -603,8 +613,9 @@ static void lru_lazyfree_fn(struct page ClearPageSwapBacked(page); add_page_to_lru_list(page, lruvec, LRU_INACTIVE_FILE); - __count_vm_events(PGLAZYFREE, hpage_nr_pages(page)); - count_memcg_page_event(page, PGLAZYFREE); + __count_vm_events(PGLAZYFREE, nr_pages); + __count_memcg_events(lruvec_memcg(lruvec), PGLAZYFREE, + nr_pages); } } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 119/131] tools/vm/page_owner_sort.c: filter out unneeded line 2020-06-03 22:55 incoming Andrew Morton ` (117 preceding siblings ...) 2020-06-03 23:03 ` [patch 118/131] mm: swap: memcg: fix memcg stats " Andrew Morton @ 2020-06-03 23:03 ` Andrew Morton 2020-06-03 23:03 ` [patch 120/131] mm, mempolicy: fix up gup usage in lookup_node Andrew Morton ` (12 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:03 UTC (permalink / raw) To: akpm, ch0.han, corbet, iamjoonsoo.kim, linux-mm, mm-commits, torvalds, vbabka From: Changhee Han <ch0.han@lge.com> Subject: tools/vm/page_owner_sort.c: filter out unneeded line To see a sorted result from page_owner, there must be a tiresome preprocessing step before running page_owner_sort. This patch simply filters out lines which start with "PFN" while reading the page owner report. Link: http://lkml.kernel.org/r/20200429052940.16968-1-ch0.han@lge.com Signed-off-by: Changhee Han <ch0.han@lge.com> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Jonathan Corbet <corbet@lwn.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- Documentation/vm/page_owner.rst | 3 +-- tools/vm/page_owner_sort.c | 5 +++-- 2 files changed, 4 insertions(+), 4 deletions(-) --- a/Documentation/vm/page_owner.rst~tools-vm-page_owner_sort-filter-out-unneeded-line +++ a/Documentation/vm/page_owner.rst @@ -83,8 +83,7 @@ Usage 4) Analyze information from page owner:: cat /sys/kernel/debug/page_owner > page_owner_full.txt - grep -v ^PFN page_owner_full.txt > page_owner.txt - ./page_owner_sort page_owner.txt sorted_page_owner.txt + ./page_owner_sort page_owner_full.txt sorted_page_owner.txt See the result about who allocated each page in the ``sorted_page_owner.txt``. 
--- a/tools/vm/page_owner_sort.c~tools-vm-page_owner_sort-filter-out-unneeded-line +++ a/tools/vm/page_owner_sort.c @@ -4,8 +4,7 @@ * * Example use: * cat /sys/kernel/debug/page_owner > page_owner_full.txt - * grep -v ^PFN page_owner_full.txt > page_owner.txt - * ./page_owner_sort page_owner.txt sorted_page_owner.txt + * ./page_owner_sort page_owner_full.txt sorted_page_owner.txt * * See Documentation/vm/page_owner.rst */ @@ -38,6 +37,8 @@ int read_block(char *buf, int buf_size, while (buf_end - curr > 1 && fgets(curr, buf_end - curr, fin)) { if (*curr == '\n') /* empty line */ return curr - buf; + if (!strncmp(curr, "PFN", 3)) + continue; curr += strlen(curr); } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 120/131] mm, mempolicy: fix up gup usage in lookup_node 2020-06-03 22:55 incoming Andrew Morton ` (118 preceding siblings ...) 2020-06-03 23:03 ` [patch 119/131] tools/vm/page_owner_sort.c: filter out unneeded line Andrew Morton @ 2020-06-03 23:03 ` Andrew Morton 2020-06-03 23:03 ` [patch 121/131] include/linux/memblock.h: fix minor typo and unclear comment Andrew Morton ` (11 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:03 UTC (permalink / raw) To: akpm, linux-mm, mhocko, mm-commits, peterx, torvalds From: Michal Hocko <mhocko@suse.com> Subject: mm, mempolicy: fix up gup usage in lookup_node ba841078cd05 ("mm/mempolicy: Allow lookup_node() to handle fatal signal") added special casing for a 0 return value because that was a possible gup return value when interrupted by a fatal signal. This has been fixed by ae46d2aa6a7f ("mm/gup: Let __get_user_pages_locked() return -EINTR for fatal signal") in the meantime, so ba841078cd05 can be reverted. This patch however doesn't go all the way to revert it because the check for 0 is wrong and confusing here. Firstly, it is inherently unsafe to access the page when get_user_pages_locked returns 0 (aka no page returned). Fortunately this will not happen because get_user_pages_locked will not return 0 when nr_pages > 0 unless FOLL_NOWAIT is specified, which is not the case here. Document this potential error code in gup code while we are at it. Link: http://lkml.kernel.org/r/20200421071026.18394-1-mhocko@kernel.org Signed-off-by: Michal Hocko <mhocko@suse.com> Cc: Peter Xu <peterx@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/gup.c | 5 +++++ mm/mempolicy.c | 5 +---- 2 files changed, 6 insertions(+), 4 deletions(-) --- a/mm/gup.c~mm-mempolicy-fix-up-gup-usage-in-lookup_node +++ a/mm/gup.c @@ -989,6 +989,7 @@ static int check_vma_flags(struct vm_are * -- If nr_pages is >0, but no pages were pinned, returns -errno. 
* -- If nr_pages is >0, and some pages were pinned, returns the number of * pages pinned. Again, this may be less than nr_pages. + * -- 0 return value is possible when the fault would need to be retried. * * The caller is responsible for releasing returned @pages, via put_page(). * @@ -1265,6 +1266,10 @@ retry: } EXPORT_SYMBOL_GPL(fixup_user_fault); +/* + * Please note that this function, unlike __get_user_pages will not + * return 0 for nr_pages > 0 without FOLL_NOWAIT + */ static __always_inline long __get_user_pages_locked(struct task_struct *tsk, struct mm_struct *mm, unsigned long start, --- a/mm/mempolicy.c~mm-mempolicy-fix-up-gup-usage-in-lookup_node +++ a/mm/mempolicy.c @@ -927,10 +927,7 @@ static int lookup_node(struct mm_struct int locked = 1; err = get_user_pages_locked(addr & PAGE_MASK, 1, 0, &p, &locked); - if (err == 0) { - /* E.g. GUP interrupted by fatal signal */ - err = -EFAULT; - } else if (err > 0) { + if (err > 0) { err = page_to_nid(p); put_page(p); } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 121/131] include/linux/memblock.h: fix minor typo and unclear comment 2020-06-03 22:55 incoming Andrew Morton ` (119 preceding siblings ...) 2020-06-03 23:03 ` [patch 120/131] mm, mempolicy: fix up gup usage in lookup_node Andrew Morton @ 2020-06-03 23:03 ` Andrew Morton 2020-06-03 23:03 ` [patch 122/131] sparc32: register memory occupied by kernel as memblock.memory Andrew Morton ` (10 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:03 UTC (permalink / raw) To: akpm, chenqiwu, linux-mm, mm-commits, rppt, torvalds From: chenqiwu <chenqiwu@xiaomi.com> Subject: include/linux/memblock.h: fix minor typo and unclear comment Fix a minor typo "usabe->usable" in the description of the member variable "memory" in struct memblock. Also, the member variable "base" in struct memblock_region is currently described as the physical address of the region; describing it as the base address of the region is clearer, since the variable is declared as phys_addr_t. Link: http://lkml.kernel.org/r/1588846952-32166-1-git-send-email-qiwuchen55@gmail.com Signed-off-by: chenqiwu <chenqiwu@xiaomi.com> Reviewed-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- include/linux/memblock.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) --- a/include/linux/memblock.h~mm-memblock-fix-minor-typo-and-unclear-comment +++ a/include/linux/memblock.h @@ -41,7 +41,7 @@ enum memblock_flags { /** * struct memblock_region - represents a memory region - * @base: physical address of the region + * @base: base address of the region * @size: size of the region * @flags: memory region attributes * @nid: NUMA node id @@ -75,7 +75,7 @@ struct memblock_type { * struct memblock - memblock allocator metadata * @bottom_up: is bottom up direction? 
* @current_limit: physical address of the current allocation limit - * @memory: usabe memory regions + * @memory: usable memory regions * @reserved: reserved memory regions * @physmem: all physical memory */ _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 122/131] sparc32: register memory occupied by kernel as memblock.memory 2020-06-03 22:55 incoming Andrew Morton ` (120 preceding siblings ...) 2020-06-03 23:03 ` [patch 121/131] include/linux/memblock.h: fix minor typo and unclear comment Andrew Morton @ 2020-06-03 23:03 ` Andrew Morton 2020-06-03 23:03 ` [patch 123/131] hugetlbfs: get unmapped area below TASK_UNMAPPED_BASE for hugetlbfs Andrew Morton ` (9 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:03 UTC (permalink / raw) To: akpm, davem, linux-mm, linux, mm-commits, rppt, torvalds From: Mike Rapoport <rppt@linux.ibm.com> Subject: sparc32: register memory occupied by kernel as memblock.memory sparc32 never registered the memory occupied by the kernel image with memblock_add() and it only reserved this memory with memblock_reserve(). With openbios as system firmware, the memory occupied by the kernel is reserved in openbios and removed from mem.available. The prom setup code in the kernel uses mem.available to set up the memory banks and essentially there is a hole for the memory occupied by the kernel image. Later in bootmem_init() this memory is memblock_reserve()d. Up until recently, memmap initialization would call __init_single_page() for the pages in that hole, free_low_memory_core_early() would mark them as reserved and everything would be Ok. After the change in memmap initialization introduced by the commit "mm: memmap_init: iterate over memblock regions rather that check each PFN", the hole is skipped and the page structs for it are not initialized. And when they are passed from memblock to the page allocator as reserved, the latter gets confused. Simply registering the memory occupied by the kernel with memblock_add() resolves this issue. Tested on qemu-system-sparc with Debian Etch [1] userspace. 
[1] https://people.debian.org/~aurel32/qemu/sparc/debian_etch_sparc_small.qcow2 Link: https://lkml.kernel.org/r/20200517000050.GA87467@roeck-us.net Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Acked-by: David S. Miller <davem@davemloft.net> Cc: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/sparc/mm/init_32.c | 1 + 1 file changed, 1 insertion(+) --- a/arch/sparc/mm/init_32.c~sparc32-register-memory-occupied-by-kernel-as-memblockmemory +++ a/arch/sparc/mm/init_32.c @@ -193,6 +193,7 @@ unsigned long __init bootmem_init(unsign /* Reserve the kernel text/data/bss. */ size = (start_pfn << PAGE_SHIFT) - phys_base; memblock_reserve(phys_base, size); + memblock_add(phys_base, size); size = memblock_phys_mem_size() - memblock_reserved_size(); *pages_avail = (size >> PAGE_SHIFT) - high_pages; _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 123/131] hugetlbfs: get unmapped area below TASK_UNMAPPED_BASE for hugetlbfs 2020-06-03 22:55 incoming Andrew Morton ` (121 preceding siblings ...) 2020-06-03 23:03 ` [patch 122/131] sparc32: register memory occupied by kernel as memblock.memory Andrew Morton @ 2020-06-03 23:03 ` Andrew Morton 2020-06-03 23:03 ` [patch 124/131] mm: thp: don't need to drain lru cache when splitting and mlocking THP Andrew Morton ` (8 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:03 UTC (permalink / raw) To: akpm, cg.chen, chenjie6, hushijie3, linux-mm, lkp, mike.kravetz, mm-commits, nixiaoming, torvalds, wangkefeng.wang, will, yangerkun From: Shijie Hu <hushijie3@huawei.com> Subject: hugetlbfs: get unmapped area below TASK_UNMAPPED_BASE for hugetlbfs In a 32-bit program running on the arm64 architecture, when the address space below the mmap base is completely exhausted, shmat() for huge pages will return ENOMEM, but shmat() for normal pages can still succeed in no-legacy mode. This seems unfair. For normal pages, the calling trace of get_unmapped_area() is: => mm->get_unmapped_area() if on legacy mode, => arch_get_unmapped_area() => vm_unmapped_area() if on no-legacy mode, => arch_get_unmapped_area_topdown() => vm_unmapped_area() For huge pages, the calling trace of get_unmapped_area() is: => file->f_op->get_unmapped_area() => hugetlb_get_unmapped_area() => vm_unmapped_area() To solve this issue, we only need to make hugetlb_get_unmapped_area() behave the same way as mm->get_unmapped_area(). Add *bottomup() and *topdown() for hugetlbfs, and check the current mm->get_unmapped_area() to decide which one to use. If mm->get_unmapped_area is equal to arch_get_unmapped_area_topdown(), hugetlb_get_unmapped_area() calls the topdown routine; otherwise it calls the bottomup routine. 
Link: http://lkml.kernel.org/r/20200518065338.113664-1-hushijie3@huawei.com Signed-off-by: Shijie Hu <hushijie3@huawei.com> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com> Reported-by: kbuild test robot <lkp@intel.com> Cc: Will Deacon <will@kernel.org> Cc: Xiaoming Ni <nixiaoming@huawei.com> Cc: Kefeng Wang <wangkefeng.wang@huawei.com> Cc: yangerkun <yangerkun@huawei.com> Cc: ChenGang <cg.chen@huawei.com> Cc: Chen Jie <chenjie6@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- fs/hugetlbfs/inode.c | 67 ++++++++++++++++++++++++++++++++++++----- 1 file changed, 59 insertions(+), 8 deletions(-) --- a/fs/hugetlbfs/inode.c~hugetlbfs-get-unmapped-area-below-task_unmapped_base-for-hugetlbfs +++ a/fs/hugetlbfs/inode.c @@ -38,6 +38,7 @@ #include <linux/uio.h> #include <linux/uaccess.h> +#include <linux/sched/mm.h> static const struct super_operations hugetlbfs_ops; static const struct address_space_operations hugetlbfs_aops; @@ -191,13 +192,60 @@ out: #ifndef HAVE_ARCH_HUGETLB_UNMAPPED_AREA static unsigned long +hugetlb_get_unmapped_area_bottomup(struct file *file, unsigned long addr, + unsigned long len, unsigned long pgoff, unsigned long flags) +{ + struct hstate *h = hstate_file(file); + struct vm_unmapped_area_info info; + + info.flags = 0; + info.length = len; + info.low_limit = current->mm->mmap_base; + info.high_limit = TASK_SIZE; + info.align_mask = PAGE_MASK & ~huge_page_mask(h); + info.align_offset = 0; + return vm_unmapped_area(&info); +} + +static unsigned long +hugetlb_get_unmapped_area_topdown(struct file *file, unsigned long addr, + unsigned long len, unsigned long pgoff, unsigned long flags) +{ + struct hstate *h = hstate_file(file); + struct vm_unmapped_area_info info; + + info.flags = VM_UNMAPPED_AREA_TOPDOWN; + info.length = len; + info.low_limit = max(PAGE_SIZE, mmap_min_addr); + info.high_limit = current->mm->mmap_base; + info.align_mask = PAGE_MASK & ~huge_page_mask(h); + info.align_offset = 0; + addr = 
vm_unmapped_area(&info); + + /* + * A failed mmap() very likely causes application failure, + * so fall back to the bottom-up function here. This scenario + * can happen with large stack limits and large mmap() + * allocations. + */ + if (unlikely(offset_in_page(addr))) { + VM_BUG_ON(addr != -ENOMEM); + info.flags = 0; + info.low_limit = current->mm->mmap_base; + info.high_limit = TASK_SIZE; + addr = vm_unmapped_area(&info); + } + + return addr; +} + +static unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr, unsigned long len, unsigned long pgoff, unsigned long flags) { struct mm_struct *mm = current->mm; struct vm_area_struct *vma; struct hstate *h = hstate_file(file); - struct vm_unmapped_area_info info; if (len & ~huge_page_mask(h)) return -EINVAL; @@ -218,13 +266,16 @@ hugetlb_get_unmapped_area(struct file *f return addr; } - info.flags = 0; - info.length = len; - info.low_limit = TASK_UNMAPPED_BASE; - info.high_limit = TASK_SIZE; - info.align_mask = PAGE_MASK & ~huge_page_mask(h); - info.align_offset = 0; - return vm_unmapped_area(&info); + /* + * Use mm->get_unmapped_area value as a hint to use topdown routine. + * If architectures have special needs, they should define their own + * version of hugetlb_get_unmapped_area. + */ + if (mm->get_unmapped_area == arch_get_unmapped_area_topdown) + return hugetlb_get_unmapped_area_topdown(file, addr, len, + pgoff, flags); + return hugetlb_get_unmapped_area_bottomup(file, addr, len, + pgoff, flags); } #endif _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 124/131] mm: thp: don't need to drain lru cache when splitting and mlocking THP 2020-06-03 22:55 incoming Andrew Morton ` (122 preceding siblings ...) 2020-06-03 23:03 ` [patch 123/131] hugetlbfs: get unmapped area below TASK_UNMAPPED_BASE for hugetlbfs Andrew Morton @ 2020-06-03 23:03 ` Andrew Morton 2020-06-03 23:03 ` [patch 125/131] powerpc/mm: drop platform defined pmd_mknotpresent() Andrew Morton ` (7 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:03 UTC (permalink / raw) To: aarcange, akpm, daniel.m.jordan, hughd, kirill.shutemov, linux-mm, mm-commits, torvalds, yang.shi From: Yang Shi <yang.shi@linux.alibaba.com> Subject: mm: thp: don't need to drain lru cache when splitting and mlocking THP Since commit 8f182270dfec ("mm/swap.c: flush lru pvecs on compound page arrival") a THP no longer stays in the pagevec, so the optimization made by commit d965432234db ("thp: increase split_huge_page() success rate"), which tried to unpin munlocked THPs by draining the pagevec, no longer makes sense. Draining the lru cache before isolating a THP in the mlock path is also unnecessary: b676b293fb48 ("mm, thp: fix mapped pages avoiding unevictable list on mlock") added it, and 9a73f61bdb8a ("thp, mlock: do not mlock PTE-mapped file huge pages") accidentally carried it over after the above optimization went in. Link: http://lkml.kernel.org/r/1585946493-7531-1-git-send-email-yang.shi@linux.alibaba.com Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com> Acked-by: Kirill A.
Shutemov <kirill.shutemov@linux.intel.com> Cc: Hugh Dickins <hughd@google.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/huge_memory.c | 7 ------- 1 file changed, 7 deletions(-) --- a/mm/huge_memory.c~mm-thp-dont-need-drain-lru-cache-when-splitting-and-mlocking-thp +++ a/mm/huge_memory.c @@ -1378,7 +1378,6 @@ struct page *follow_trans_huge_pmd(struc goto skip_mlock; if (!trylock_page(page)) goto skip_mlock; - lru_add_drain(); if (page->mapping && !PageDoubleMap(page)) mlock_vma_page(page); unlock_page(page); @@ -2582,7 +2581,6 @@ int split_huge_page_to_list(struct page struct anon_vma *anon_vma = NULL; struct address_space *mapping = NULL; int count, mapcount, extra_pins, ret; - bool mlocked; unsigned long flags; pgoff_t end; @@ -2641,14 +2639,9 @@ int split_huge_page_to_list(struct page goto out_unlock; } - mlocked = PageMlocked(head); unmap_page(head); VM_BUG_ON_PAGE(compound_mapcount(head), head); - /* Make sure the page is not on per-CPU pagevec as it takes pin */ - if (mlocked) - lru_add_drain(); - /* prevent PageLRU to go away from under us, and freeze lru stats */ spin_lock_irqsave(&pgdata->lru_lock, flags); _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 125/131] powerpc/mm: drop platform defined pmd_mknotpresent() 2020-06-03 22:55 incoming Andrew Morton ` (123 preceding siblings ...) 2020-06-03 23:03 ` [patch 124/131] mm: thp: don't need to drain lru cache when splitting and mlocking THP Andrew Morton @ 2020-06-03 23:03 ` Andrew Morton 2020-06-03 23:03 ` [patch 126/131] mm/thp: rename pmd_mknotpresent() as pmd_mkinvalid() Andrew Morton ` (6 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:03 UTC (permalink / raw) To: akpm, anshuman.khandual, benh, bp, catalin.marinas, dave.hansen, hpa, linux-mm, linux, luto, mingo, mm-commits, mpe, paulus, peterz, rostedt, tglx, torvalds, tsbogend, vgupta, will From: Anshuman Khandual <anshuman.khandual@arm.com> Subject: powerpc/mm: drop platform defined pmd_mknotpresent() Patch series "mm/thp: Rename pmd_mknotpresent() as pmd_mknotvalid()", v2. This series renames pmd_mknotpresent() as pmd_mknotvalid(). Before that it drops an existing pmd_mknotpresent() definition from the powerpc platform, which was never required as powerpc defines its own pmdp_invalidate() through subscribing __HAVE_ARCH_PMDP_INVALIDATE. This does not create any functional change. This rename was suggested by Catalin during a previous discussion while we were trying to change the THP helpers on the arm64 platform for migration. https://patchwork.kernel.org/patch/11019637/ This patch (of 2): A platform needs to define pmd_mknotpresent() for the generic pmdp_invalidate() only when __HAVE_ARCH_PMDP_INVALIDATE is not subscribed. Otherwise a platform-specific pmd_mknotpresent() is not required. Hence just drop it.
Link: http://lkml.kernel.org/r/1587520326-10099-1-git-send-email-anshuman.khandual@arm.com Link: http://lkml.kernel.org/r/1584680057-13753-1-git-send-email-anshuman.khandual@arm.com Link: http://lkml.kernel.org/r/1584680057-13753-2-git-send-email-anshuman.khandual@arm.com Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/powerpc/include/asm/book3s/64/pgtable.h | 4 ---- 1 file changed, 4 deletions(-) --- a/arch/powerpc/include/asm/book3s/64/pgtable.h~powerpc-mm-drop-platform-defined-pmd_mknotpresent +++ a/arch/powerpc/include/asm/book3s/64/pgtable.h @@ -1168,10 +1168,6 @@ static inline int pmd_large(pmd_t pmd) return !!(pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE)); } -static inline pmd_t pmd_mknotpresent(pmd_t pmd) -{ - return __pmd(pmd_val(pmd) & ~_PAGE_PRESENT); -} /* * For radix we should always find H_PAGE_HASHPTE zero. Hence * the below will work for radix too _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 126/131] mm/thp: rename pmd_mknotpresent() as pmd_mkinvalid() 2020-06-03 22:55 incoming Andrew Morton ` (124 preceding siblings ...) 2020-06-03 23:03 ` [patch 125/131] powerpc/mm: drop platform defined pmd_mknotpresent() Andrew Morton @ 2020-06-03 23:03 ` Andrew Morton 2020-06-03 23:03 ` [patch 127/131] drivers/base/memory.c: cache memory blocks in xarray to accelerate lookup Andrew Morton ` (5 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:03 UTC (permalink / raw) To: akpm, anshuman.khandual, benh, bp, catalin.marinas, dave.hansen, hpa, linux-mm, linux, luto, mingo, mm-commits, mpe, paulus, peterz, rostedt, tglx, torvalds, tsbogend, vgupta, will From: Anshuman Khandual <anshuman.khandual@arm.com> Subject: mm/thp: rename pmd_mknotpresent() as pmd_mkinvalid() pmd_present() is expected to test positive after pmd_mknotpresent() because the PMD entry still points to a valid huge page in memory. pmd_mknotpresent() means the given PMD entry is just invalidated from the MMU's perspective while it still holds on to the valid huge page referred to by pmd_page(), so the pmd_present() test keeps passing. This creates the following counter-intuitive situation: [pmd_present(pmd_mknotpresent(pmd)) = true] This renames pmd_mknotpresent() as pmd_mkinvalid(), reflecting the helper's functionality more accurately, so the above-mentioned situation reads as follows. This does not create any functional change. [pmd_present(pmd_mkinvalid(pmd)) = true] This is not applicable to platforms that define their own pmdp_invalidate() via __HAVE_ARCH_PMDP_INVALIDATE. The suggestion for this renaming came during a previous discussion here.
https://patchwork.kernel.org/patch/11019637/ [anshuman.khandual@arm.com: change pmd_mknotvalid() to pmd_mkinvalid() per Will] Link: http://lkml.kernel.org/r/1587520326-10099-3-git-send-email-anshuman.khandual@arm.com Link: http://lkml.kernel.org/r/1584680057-13753-3-git-send-email-anshuman.khandual@arm.com Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Suggested-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Will Deacon <will@kernel.org> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/arc/include/asm/hugepage.h | 2 +- arch/arm/include/asm/pgtable-3level.h | 2 +- arch/arm64/include/asm/pgtable.h | 2 +- arch/mips/include/asm/pgtable.h | 2 +- arch/x86/include/asm/pgtable.h | 2 +- arch/x86/mm/kmmio.c | 2 +- mm/pgtable-generic.c | 2 +- 7 files changed, 7 insertions(+), 7 deletions(-) --- a/arch/arc/include/asm/hugepage.h~mm-thp-rename-pmd_mknotpresent-as-pmd_mknotvalid +++ a/arch/arc/include/asm/hugepage.h @@ -26,7 +26,7 @@ static inline pmd_t pte_pmd(pte_t pte) #define pmd_mkold(pmd) pte_pmd(pte_mkold(pmd_pte(pmd))) #define pmd_mkyoung(pmd) pte_pmd(pte_mkyoung(pmd_pte(pmd))) #define pmd_mkhuge(pmd) pte_pmd(pte_mkhuge(pmd_pte(pmd))) -#define pmd_mknotpresent(pmd) pte_pmd(pte_mknotpresent(pmd_pte(pmd))) +#define pmd_mkinvalid(pmd) pte_pmd(pte_mknotpresent(pmd_pte(pmd))) #define pmd_mkclean(pmd) 
pte_pmd(pte_mkclean(pmd_pte(pmd))) #define pmd_write(pmd) pte_write(pmd_pte(pmd)) --- a/arch/arm64/include/asm/pgtable.h~mm-thp-rename-pmd_mknotpresent-as-pmd_mknotvalid +++ a/arch/arm64/include/asm/pgtable.h @@ -366,7 +366,7 @@ static inline int pmd_protnone(pmd_t pmd #define pmd_mkclean(pmd) pte_pmd(pte_mkclean(pmd_pte(pmd))) #define pmd_mkdirty(pmd) pte_pmd(pte_mkdirty(pmd_pte(pmd))) #define pmd_mkyoung(pmd) pte_pmd(pte_mkyoung(pmd_pte(pmd))) -#define pmd_mknotpresent(pmd) (__pmd(pmd_val(pmd) & ~PMD_SECT_VALID)) +#define pmd_mkinvalid(pmd) (__pmd(pmd_val(pmd) & ~PMD_SECT_VALID)) #define pmd_thp_or_huge(pmd) (pmd_huge(pmd) || pmd_trans_huge(pmd)) --- a/arch/arm/include/asm/pgtable-3level.h~mm-thp-rename-pmd_mknotpresent-as-pmd_mknotvalid +++ a/arch/arm/include/asm/pgtable-3level.h @@ -221,7 +221,7 @@ PMD_BIT_FUNC(mkyoung, |= PMD_SECT_AF); #define pmdp_establish generic_pmdp_establish /* represent a notpresent pmd by faulting entry, this is used by pmdp_invalidate */ -static inline pmd_t pmd_mknotpresent(pmd_t pmd) +static inline pmd_t pmd_mkinvalid(pmd_t pmd) { return __pmd(pmd_val(pmd) & ~L_PMD_SECT_VALID); } --- a/arch/mips/include/asm/pgtable.h~mm-thp-rename-pmd_mknotpresent-as-pmd_mknotvalid +++ a/arch/mips/include/asm/pgtable.h @@ -631,7 +631,7 @@ static inline pmd_t pmd_modify(pmd_t pmd return pmd; } -static inline pmd_t pmd_mknotpresent(pmd_t pmd) +static inline pmd_t pmd_mkinvalid(pmd_t pmd) { pmd_val(pmd) &= ~(_PAGE_PRESENT | _PAGE_VALID | _PAGE_DIRTY); --- a/arch/x86/include/asm/pgtable.h~mm-thp-rename-pmd_mknotpresent-as-pmd_mknotvalid +++ a/arch/x86/include/asm/pgtable.h @@ -624,7 +624,7 @@ static inline pud_t pfn_pud(unsigned lon return __pud(pfn | check_pgprot(pgprot)); } -static inline pmd_t pmd_mknotpresent(pmd_t pmd) +static inline pmd_t pmd_mkinvalid(pmd_t pmd) { return pfn_pmd(pmd_pfn(pmd), __pgprot(pmd_flags(pmd) & ~(_PAGE_PRESENT|_PAGE_PROTNONE))); --- a/arch/x86/mm/kmmio.c~mm-thp-rename-pmd_mknotpresent-as-pmd_mknotvalid +++ 
a/arch/x86/mm/kmmio.c @@ -130,7 +130,7 @@ static void clear_pmd_presence(pmd_t *pm pmdval_t v = pmd_val(*pmd); if (clear) { *old = v; - new_pmd = pmd_mknotpresent(*pmd); + new_pmd = pmd_mkinvalid(*pmd); } else { /* Presume this has been called with clear==true previously */ new_pmd = __pmd(*old); --- a/mm/pgtable-generic.c~mm-thp-rename-pmd_mknotpresent-as-pmd_mknotvalid +++ a/mm/pgtable-generic.c @@ -194,7 +194,7 @@ pgtable_t pgtable_trans_huge_withdraw(st pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address, pmd_t *pmdp) { - pmd_t old = pmdp_establish(vma, address, pmdp, pmd_mknotpresent(*pmdp)); + pmd_t old = pmdp_establish(vma, address, pmdp, pmd_mkinvalid(*pmdp)); flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE); return old; } _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 127/131] drivers/base/memory.c: cache memory blocks in xarray to accelerate lookup 2020-06-03 22:55 incoming Andrew Morton ` (125 preceding siblings ...) 2020-06-03 23:03 ` [patch 126/131] mm/thp: rename pmd_mknotpresent() as pmd_mkinvalid() Andrew Morton @ 2020-06-03 23:03 ` Andrew Morton 2020-06-03 23:03 ` [patch 128/131] mm: add DEBUG_WX support Andrew Morton ` (4 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:03 UTC (permalink / raw) To: akpm, cheloha, cheloha, david, gregkh, linux-mm, mhocko, mm-commits, nathanl, rafael, ricklind, torvalds From: Scott Cheloha <cheloha@linux.vnet.ibm.com> Subject: drivers/base/memory.c: cache memory blocks in xarray to accelerate lookup Searching for a particular memory block by id is an O(n) operation because each memory block's underlying device is kept in an unsorted linked list on the subsystem bus. We can cut the lookup cost to O(log n) if we cache each memory block in an xarray. This time complexity improvement is significant on systems with many memory blocks. For example: 1. A 128GB POWER9 VM with 256MB memblocks has 512 blocks. With this change memory_dev_init() completes ~12ms faster and walk_memory_blocks() completes ~12ms faster. Before: [ 0.005042] memory_dev_init: adding memory blocks [ 0.021591] memory_dev_init: added memory blocks [ 0.022699] walk_memory_blocks: walking memory blocks [ 0.038730] walk_memory_blocks: walked memory blocks 0-511 After: [ 0.005057] memory_dev_init: adding memory blocks [ 0.009415] memory_dev_init: added memory blocks [ 0.010519] walk_memory_blocks: walking memory blocks [ 0.014135] walk_memory_blocks: walked memory blocks 0-511 2. A 256GB POWER9 LPAR with 256MB memblocks has 1024 blocks. With this change memory_dev_init() completes ~88ms faster and walk_memory_blocks() completes ~87ms faster. 
Before: [ 0.252246] memory_dev_init: adding memory blocks [ 0.395469] memory_dev_init: added memory blocks [ 0.409413] walk_memory_blocks: walking memory blocks [ 0.433028] walk_memory_blocks: walked memory blocks 0-511 [ 0.433094] walk_memory_blocks: walking memory blocks [ 0.500244] walk_memory_blocks: walked memory blocks 131072-131583 After: [ 0.245063] memory_dev_init: adding memory blocks [ 0.299539] memory_dev_init: added memory blocks [ 0.313609] walk_memory_blocks: walking memory blocks [ 0.315287] walk_memory_blocks: walked memory blocks 0-511 [ 0.315349] walk_memory_blocks: walking memory blocks [ 0.316988] walk_memory_blocks: walked memory blocks 131072-131583 3. A 32TB POWER9 LPAR with 256MB memblocks has 131072 blocks. With this change we complete memory_dev_init() ~37 minutes faster and walk_memory_blocks() at least ~30 minutes faster. The exact timing for walk_memory_blocks() is missing, though I observed that the soft lockups in walk_memory_blocks() disappeared with the change, suggesting that lower bound. 
Before: [ 13.703907] memory_dev_init: adding blocks [ 2287.406099] memory_dev_init: added all blocks [ 2347.494986] [c000000014c5bb60] [c000000000869af4] walk_memory_blocks+0x94/0x160 [ 2527.625378] [c000000014c5bb60] [c000000000869af4] walk_memory_blocks+0x94/0x160 [ 2707.761977] [c000000014c5bb60] [c000000000869af4] walk_memory_blocks+0x94/0x160 [ 2887.899975] [c000000014c5bb60] [c000000000869af4] walk_memory_blocks+0x94/0x160 [ 3068.028318] [c000000014c5bb60] [c000000000869af4] walk_memory_blocks+0x94/0x160 [ 3248.158764] [c000000014c5bb60] [c000000000869af4] walk_memory_blocks+0x94/0x160 [ 3428.287296] [c000000014c5bb60] [c000000000869af4] walk_memory_blocks+0x94/0x160 [ 3608.425357] [c000000014c5bb60] [c000000000869af4] walk_memory_blocks+0x94/0x160 [ 3788.554572] [c000000014c5bb60] [c000000000869af4] walk_memory_blocks+0x94/0x160 [ 3968.695071] [c000000014c5bb60] [c000000000869af4] walk_memory_blocks+0x94/0x160 [ 4148.823970] [c000000014c5bb60] [c000000000869af4] walk_memory_blocks+0x94/0x160 After: [ 13.696898] memory_dev_init: adding blocks [ 15.660035] memory_dev_init: added all blocks (the walk_memory_blocks traces disappear) There should be no significant negative impact for machines with few memory blocks. A sparse xarray has a small footprint and an O(log n) lookup is negligibly slower than an O(n) lookup for only the smallest number of memory blocks. 1. A 16GB x86 machine with 128MB memblocks has 132 blocks. With this change memory_dev_init() completes ~300us faster and walk_memory_blocks() completes no faster or slower. The improvement is pretty close to noise. 
Before: [ 0.224752] memory_dev_init: adding memory blocks [ 0.227116] memory_dev_init: added memory blocks [ 0.227183] walk_memory_blocks: walking memory blocks [ 0.227183] walk_memory_blocks: walked memory blocks 0-131 After: [ 0.224911] memory_dev_init: adding memory blocks [ 0.226935] memory_dev_init: added memory blocks [ 0.227089] walk_memory_blocks: walking memory blocks [ 0.227089] walk_memory_blocks: walked memory blocks 0-131 [david@redhat.com: document the locking] Link: http://lkml.kernel.org/r/bc21eec6-7251-4c91-2f57-9a0671f8d414@redhat.com Link: http://lkml.kernel.org/r/20200121231028.13699-1-cheloha@linux.ibm.com Signed-off-by: Scott Cheloha <cheloha@linux.ibm.com> Acked-by: David Hildenbrand <david@redhat.com> Acked-by: Nathan Lynch <nathanl@linux.ibm.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Rafael J. Wysocki <rafael@kernel.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Rick Lindsley <ricklind@linux.vnet.ibm.com> Cc: Scott Cheloha <cheloha@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- drivers/base/memory.c | 44 +++++++++++++++++++++++++++++----------- 1 file changed, 32 insertions(+), 12 deletions(-) --- a/drivers/base/memory.c~drivers-base-memoryc-cache-memory-blocks-in-xarray-to-accelerate-lookup +++ a/drivers/base/memory.c @@ -21,6 +21,7 @@ #include <linux/mm.h> #include <linux/stat.h> #include <linux/slab.h> +#include <linux/xarray.h> #include <linux/atomic.h> #include <linux/uaccess.h> @@ -74,6 +75,13 @@ static struct bus_type memory_subsys = { .offline = memory_subsys_offline, }; +/* + * Memory blocks are cached in a local radix tree to avoid + * a costly linear search for the corresponding device on + * the subsystem bus. 
+ */ +static DEFINE_XARRAY(memory_blocks); + static BLOCKING_NOTIFIER_HEAD(memory_chain); int register_memory_notifier(struct notifier_block *nb) @@ -489,22 +497,23 @@ int __weak arch_get_memory_phys_device(u return 0; } -/* A reference for the returned memory block device is acquired. */ +/* + * A reference for the returned memory block device is acquired. + * + * Called under device_hotplug_lock. + */ static struct memory_block *find_memory_block_by_id(unsigned long block_id) { - struct device *dev; + struct memory_block *mem; - dev = subsys_find_device_by_id(&memory_subsys, block_id, NULL); - return dev ? to_memory_block(dev) : NULL; + mem = xa_load(&memory_blocks, block_id); + if (mem) + get_device(&mem->dev); + return mem; } /* - * For now, we have a linear search to go find the appropriate - * memory_block corresponding to a particular phys_index. If - * this gets to be a real problem, we can always use a radix - * tree or something here. - * - * This could be made generic for all device subsystems. + * Called under device_hotplug_lock. */ struct memory_block *find_memory_block(struct mem_section *section) { @@ -548,9 +557,16 @@ int register_memory(struct memory_block memory->dev.offline = memory->state == MEM_OFFLINE; ret = device_register(&memory->dev); - if (ret) + if (ret) { put_device(&memory->dev); - + return ret; + } + ret = xa_err(xa_store(&memory_blocks, memory->dev.id, memory, + GFP_KERNEL)); + if (ret) { + put_device(&memory->dev); + device_unregister(&memory->dev); + } return ret; } @@ -604,6 +620,8 @@ static void unregister_memory(struct mem if (WARN_ON_ONCE(memory->dev.bus != &memory_subsys)) return; + WARN_ON(xa_erase(&memory_blocks, memory->dev.id) == NULL); + /* drop the ref. we got via find_memory_block() */ put_device(&memory->dev); device_unregister(&memory->dev); @@ -750,6 +768,8 @@ void __init memory_dev_init(void) * * In case func() returns an error, walking is aborted and the error is * returned. 
+ * + * Called under device_hotplug_lock. */ int walk_memory_blocks(unsigned long start, unsigned long size, void *arg, walk_memory_blocks_func_t func) _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 128/131] mm: add DEBUG_WX support 2020-06-03 22:55 incoming Andrew Morton ` (126 preceding siblings ...) 2020-06-03 23:03 ` [patch 127/131] drivers/base/memory.c: cache memory blocks in xarray to accelerate lookup Andrew Morton @ 2020-06-03 23:03 ` Andrew Morton 2020-06-03 23:03 ` [patch 129/131] riscv: support DEBUG_WX Andrew Morton ` (3 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:03 UTC (permalink / raw) To: akpm, bp, catalin.marinas, hpa, linux-mm, mingo, mm-commits, palmer, paul.walmsley, tglx, torvalds, will, zong.li From: Zong Li <zong.li@sifive.com> Subject: mm: add DEBUG_WX support Patch series "Extract DEBUG_WX to shared use". Several architectures support the DEBUG_WX check, and its implementation is duplicated verbatim between them, so extract it to mm/Kconfig.debug for shared use. The PPC and ARM ports don't support the generic page table dumper yet, so we only refine the x86 and arm64 ports in this patch series. For the RISC-V port, DEBUG_WX support depends on other patches which have already been merged: - RISC-V page table dumper - Support strict kernel memory permissions for security This patch (of 4): Several architectures support the DEBUG_WX check, and its implementation is duplicated verbatim between them. Extract it to mm/Kconfig.debug for shared use.
[akpm@linux-foundation.org: reword text, per Will Deacon & Zong Li] Link: http://lkml.kernel.org/r/20200427194245.oxRJKj3fn%25akpm@linux-foundation.org [zong.li@sifive.com: remove the specific name of arm64] Link: http://lkml.kernel.org/r/3a6a92ecedc54e1d0fc941398e63d504c2cd5611.1589178399.git.zong.li@sifive.com [zong.li@sifive.com: add MMU dependency for DEBUG_WX] Link: http://lkml.kernel.org/r/4a674ac7863ff39ca91847b10e51209771f99416.1589178399.git.zong.li@sifive.com Link: http://lkml.kernel.org/r/cover.1587455584.git.zong.li@sifive.com Link: http://lkml.kernel.org/r/23980cd0f0e5d79e24a92169116407c75bcc650d.1587455584.git.zong.li@sifive.com Signed-off-by: Zong Li <zong.li@sifive.com> Suggested-by: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- mm/Kconfig.debug | 32 ++++++++++++++++++++++++++++++++ 1 file changed, 32 insertions(+) --- a/mm/Kconfig.debug~mm-add-debug_wx-support +++ a/mm/Kconfig.debug @@ -118,6 +118,38 @@ config DEBUG_RODATA_TEST ---help--- This option enables a testcase for the setting rodata read-only. +config ARCH_HAS_DEBUG_WX + bool + +config DEBUG_WX + bool "Warn on W+X mappings at boot" + depends on ARCH_HAS_DEBUG_WX + depends on MMU + select PTDUMP_CORE + help + Generate a warning if any W+X mappings are found at boot. + + This is useful for discovering cases where the kernel is leaving W+X + mappings after applying NX, as such mappings are a security risk. + + Look for a message in dmesg output like this: + + <arch>/mm: Checked W+X mappings: passed, no W+X pages found. + + or like this, if the check failed: + + <arch>/mm: Checked W+X mappings: failed, <N> W+X pages found. 
+ + Note that even if the check fails, your kernel is possibly + still fine, as W+X mappings are not a security hole in + themselves, what they do is that they make the exploitation + of other unfixed kernel bugs easier. + + There is no runtime or memory usage effect of this option + once the kernel has booted up - it's a one time check. + + If in doubt, say "Y". + config GENERIC_PTDUMP bool _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 129/131] riscv: support DEBUG_WX 2020-06-03 22:55 incoming Andrew Morton ` (127 preceding siblings ...) 2020-06-03 23:03 ` [patch 128/131] mm: add DEBUG_WX support Andrew Morton @ 2020-06-03 23:03 ` Andrew Morton 2020-06-03 23:03 ` [patch 130/131] x86: mm: use ARCH_HAS_DEBUG_WX instead of arch defined Andrew Morton ` (2 subsequent siblings) 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:03 UTC (permalink / raw) To: akpm, bp, catalin.marinas, hpa, linux-mm, mingo, mm-commits, palmer, paul.walmsley, tglx, torvalds, will, zong.li From: Zong Li <zong.li@sifive.com> Subject: riscv: support DEBUG_WX Support DEBUG_WX to check whether there are mapping with write and execute permission at the same time. [akpm@linux-foundation.org: replace macros with C] Link: http://lkml.kernel.org/r/282e266311bced080bc6f7c255b92f87c1eb65d6.1587455584.git.zong.li@sifive.com Signed-off-by: Zong Li <zong.li@sifive.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "H. 
Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/riscv/Kconfig | 1 + arch/riscv/include/asm/ptdump.h | 11 +++++++++++ arch/riscv/mm/init.c | 3 +++ 3 files changed, 15 insertions(+) --- a/arch/riscv/include/asm/ptdump.h~riscv-support-debug_wx +++ a/arch/riscv/include/asm/ptdump.h @@ -8,4 +8,15 @@ void ptdump_check_wx(void); +#ifdef CONFIG_DEBUG_WX +static inline void debug_checkwx(void) +{ + ptdump_check_wx(); +} +#else +static inline void debug_checkwx(void) +{ +} +#endif + #endif /* _ASM_RISCV_PTDUMP_H */ --- a/arch/riscv/Kconfig~riscv-support-debug_wx +++ a/arch/riscv/Kconfig @@ -16,6 +16,7 @@ config RISCV select OF_EARLY_FLATTREE select OF_IRQ select ARCH_HAS_BINFMT_FLAT + select ARCH_HAS_DEBUG_WX select ARCH_WANT_FRAME_POINTERS select CLONE_BACKWARDS select COMMON_CLK --- a/arch/riscv/mm/init.c~riscv-support-debug_wx +++ a/arch/riscv/mm/init.c @@ -19,6 +19,7 @@ #include <asm/sections.h> #include <asm/pgtable.h> #include <asm/io.h> +#include <asm/ptdump.h> #include "../kernel/head.h" @@ -514,6 +515,8 @@ void mark_rodata_ro(void) set_memory_ro(rodata_start, (data_start - rodata_start) >> PAGE_SHIFT); set_memory_nx(rodata_start, (data_start - rodata_start) >> PAGE_SHIFT); set_memory_nx(data_start, (max_low - data_start) >> PAGE_SHIFT); + + debug_checkwx(); } #endif _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 130/131] x86: mm: use ARCH_HAS_DEBUG_WX instead of arch defined 2020-06-03 22:55 incoming Andrew Morton ` (128 preceding siblings ...) 2020-06-03 23:03 ` [patch 129/131] riscv: support DEBUG_WX Andrew Morton @ 2020-06-03 23:03 ` Andrew Morton 2020-06-03 23:04 ` [patch 131/131] arm64: " Andrew Morton 2020-06-04 0:54 ` mmotm 2020-06-03-17-54 uploaded Andrew Morton 131 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-06-03 23:03 UTC (permalink / raw) To: akpm, bp, catalin.marinas, hpa, linux-mm, mingo, mm-commits, palmer, paul.walmsley, tglx, torvalds, will, zong.li From: Zong Li <zong.li@sifive.com> Subject: x86: mm: use ARCH_HAS_DEBUG_WX instead of arch defined Extract DEBUG_WX to mm/Kconfig.debug for shared use. Change to use ARCH_HAS_DEBUG_WX instead of DEBUG_WX defined by arch port. Link: http://lkml.kernel.org/r/430736828d149df3f5b462d291e845ec690e0141.1587455584.git.zong.li@sifive.com Signed-off-by: Zong Li <zong.li@sifive.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "H. 
Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> --- arch/x86/Kconfig | 1 + arch/x86/Kconfig.debug | 27 --------------------------- 2 files changed, 1 insertion(+), 27 deletions(-) --- a/arch/x86/Kconfig~x86-mm-use-arch_has_debug_wx-instead-of-arch-defined +++ a/arch/x86/Kconfig @@ -81,6 +81,7 @@ config X86 select ARCH_HAS_SYNC_CORE_BEFORE_USERMODE select ARCH_HAS_SYSCALL_WRAPPER select ARCH_HAS_UBSAN_SANITIZE_ALL + select ARCH_HAS_DEBUG_WX select ARCH_HAVE_NMI_SAFE_CMPXCHG select ARCH_MIGHT_HAVE_ACPI_PDC if ACPI select ARCH_MIGHT_HAVE_PC_PARPORT --- a/arch/x86/Kconfig.debug~x86-mm-use-arch_has_debug_wx-instead-of-arch-defined +++ a/arch/x86/Kconfig.debug @@ -72,33 +72,6 @@ config EFI_PGT_DUMP issues with the mapping of the EFI runtime regions into that table. -config DEBUG_WX - bool "Warn on W+X mappings at boot" - select PTDUMP_CORE - ---help--- - Generate a warning if any W+X mappings are found at boot. - - This is useful for discovering cases where the kernel is leaving - W+X mappings after applying NX, as such mappings are a security risk. - - Look for a message in dmesg output like this: - - x86/mm: Checked W+X mappings: passed, no W+X pages found. - - or like this, if the check failed: - - x86/mm: Checked W+X mappings: FAILED, <N> W+X pages found. - - Note that even if the check fails, your kernel is possibly - still fine, as W+X mappings are not a security hole in - themselves, what they do is that they make the exploitation - of other unfixed kernel bugs easier. - - There is no runtime or memory usage effect of this option - once the kernel has booted up - it's a one time check. - - If in doubt, say "Y". 
- config DEBUG_TLBFLUSH bool "Set upper limit of TLB entries to flush one-by-one" depends on DEBUG_KERNEL _ ^ permalink raw reply [flat|nested] 349+ messages in thread
* [patch 131/131] arm64: mm: use ARCH_HAS_DEBUG_WX instead of arch defined
From: Andrew Morton @ 2020-06-03 23:04 UTC (permalink / raw)
To: akpm, bp, catalin.marinas, hpa, linux-mm, mingo, mm-commits, palmer, paul.walmsley, tglx, torvalds, will, zong.li

From: Zong Li <zong.li@sifive.com>
Subject: arm64: mm: use ARCH_HAS_DEBUG_WX instead of arch defined

Extract DEBUG_WX to mm/Kconfig.debug for shared use.  Change to use
ARCH_HAS_DEBUG_WX instead of DEBUG_WX defined by arch port.

Link: http://lkml.kernel.org/r/e19709e7576f65e303245fe520cad5f7bae72763.1587455584.git.zong.li@sifive.com
Signed-off-by: Zong Li <zong.li@sifive.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 arch/arm64/Kconfig       |    1 +
 arch/arm64/Kconfig.debug |   29 -----------------------------
 2 files changed, 1 insertion(+), 29 deletions(-)

--- a/arch/arm64/Kconfig~arm64-mm-use-arch_has_debug_wx-instead-of-arch-defined
+++ a/arch/arm64/Kconfig
@@ -9,6 +9,7 @@ config ARM64
 	select ACPI_MCFG if (ACPI && PCI)
 	select ACPI_SPCR_TABLE if ACPI
 	select ACPI_PPTT if ACPI
+	select ARCH_HAS_DEBUG_WX
 	select ARCH_BINFMT_ELF_STATE
 	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEVMEM_IS_ALLOWED

--- a/arch/arm64/Kconfig.debug~arm64-mm-use-arch_has_debug_wx-instead-of-arch-defined
+++ a/arch/arm64/Kconfig.debug
@@ -23,35 +23,6 @@ config ARM64_RANDOMIZE_TEXT_OFFSET
 	  of TEXT_OFFSET and platforms must not require a specific value.

-config DEBUG_WX
-	bool "Warn on W+X mappings at boot"
-	select PTDUMP_CORE
-	---help---
-	  Generate a warning if any W+X mappings are found at boot.
-
-	  This is useful for discovering cases where the kernel is leaving
-	  W+X mappings after applying NX, as such mappings are a security risk.
-	  This check also includes UXN, which should be set on all kernel
-	  mappings.
-
-	  Look for a message in dmesg output like this:
-
-	    arm64/mm: Checked W+X mappings: passed, no W+X pages found.
-
-	  or like this, if the check failed:
-
-	    arm64/mm: Checked W+X mappings: FAILED, <N> W+X pages found.
-
-	  Note that even if the check fails, your kernel is possibly
-	  still fine, as W+X mappings are not a security hole in
-	  themselves, what they do is that they make the exploitation
-	  of other unfixed kernel bugs easier.
-
-	  There is no runtime or memory usage effect of this option
-	  once the kernel has booted up - it's a one time check.
-
-	  If in doubt, say "Y".
-
 config DEBUG_EFI
 	depends on EFI && DEBUG_INFO
 	bool "UEFI debugging"
_

^ permalink raw reply	[flat|nested] 349+ messages in thread
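For context on where the removed arm64 entry goes: after this series the generic DEBUG_WX option lives in mm/Kconfig.debug and arch ports merely select ARCH_HAS_DEBUG_WX. The fragment below is an illustrative sketch of that arrangement, not verbatim kernel source:

```
# Illustrative sketch (not verbatim kernel source) of the shared entry
# in mm/Kconfig.debug that the per-arch DEBUG_WX definitions collapse
# into.  An architecture opts in with "select ARCH_HAS_DEBUG_WX", as in
# the arch/arm64/Kconfig hunk above.
config ARCH_HAS_DEBUG_WX
	bool

config DEBUG_WX
	bool "Warn on W+X mappings at boot"
	depends on ARCH_HAS_DEBUG_WX
	select PTDUMP_CORE
	help
	  Generate a warning if any W+X mappings are found at boot.
```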
* mmotm 2020-06-03-17-54 uploaded
From: Andrew Morton @ 2020-06-04 0:54 UTC (permalink / raw)
To: broonie, linux-fsdevel, linux-kernel, linux-mm, linux-next, mhocko, mm-commits, sfr

The mm-of-the-moment snapshot 2020-06-03-17-54 has been uploaded to

   http://www.ozlabs.org/~akpm/mmotm/

mmotm-readme.txt says

README for mm-of-the-moment:

http://www.ozlabs.org/~akpm/mmotm/

This is a snapshot of my -mm patch queue.  Uploaded at random hopefully
more than once a week.

You will need quilt to apply these patches to the latest Linus release
(5.x or 5.x-rcY).  The series file is in broken-out.tar.gz and is
duplicated in http://ozlabs.org/~akpm/mmotm/series

The file broken-out.tar.gz contains two datestamp files: .DATE and
.DATE-yyyy-mm-dd-hh-mm-ss.  Both contain the string yyyy-mm-dd-hh-mm-ss,
followed by the base kernel version against which this patch series is
to be applied.

This tree is partially included in linux-next.  To see which patches are
included in linux-next, consult the `series' file.  Only the patches
within the #NEXT_PATCHES_START/#NEXT_PATCHES_END markers are included in
linux-next.

A full copy of the full kernel tree with the linux-next and mmotm
patches already applied is available through git within an hour of the
mmotm release.  Individual mmotm releases are tagged.  The master branch
always points to the latest release, so it's constantly rebasing.

	https://github.com/hnaz/linux-mm

The directory http://www.ozlabs.org/~akpm/mmots/ (mm-of-the-second)
contains daily snapshots of the -mm tree.  It is updated more frequently
than mmotm, and is untested.
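The #NEXT_PATCHES_START/#NEXT_PATCHES_END convention can be filtered mechanically to see the linux-next subset. A small sketch — the sample `series` content below is made up for illustration; the real file ships in broken-out.tar.gz:

```shell
# Print only the patches destined for linux-next, i.e. those between the
# #NEXT_PATCHES_START and #NEXT_PATCHES_END markers in the series file.
# The sample series content here is illustrative, not the real file.
cat > series <<'EOF'
origin.patch
#NEXT_PATCHES_START
mm-slub-fix-a-memory-leak-in-sysfs_slab_add.patch
memcg-optimize-memorynuma_stat-like-memorystat.patch
#NEXT_PATCHES_END
linux-next-rejects.patch
EOF

# awk sets a flag on the start marker, clears it on the end marker, and
# prints non-comment lines while the flag is set (the marker lines
# themselves start with '#' and are never printed).
awk '/#NEXT_PATCHES_END/{p=0} p && !/^#/{print} /#NEXT_PATCHES_START/{p=1}' series
```

The same one-liner works against the real series file once broken-out.tar.gz is unpacked.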
A git copy of this tree is also available at

	https://github.com/hnaz/linux-mm

This mmotm tree contains the following patches against 5.7:
(patches marked "*" will be included in linux-next)

origin.patch
* mm-slub-fix-a-memory-leak-in-sysfs_slab_add.patch
* memcg-optimize-memorynuma_stat-like-memorystat.patch
* mm-gup-move-__get_user_pages_fast-down-a-few-lines-in-gupc.patch
* mm-gup-refactor-and-de-duplicate-gup_fast-code.patch
* mm-gup-introduce-pin_user_pages_fast_only.patch
* drm-i915-convert-get_user_pages-pin_user_pages.patch
* mm-gup-might_lock_readmmap_sem-in-get_user_pages_fast.patch
* kasan-stop-tests-being-eliminated-as-dead-code-with-fortify_source.patch
* stringh-fix-incompatibility-between-fortify_source-and-kasan.patch
* mm-clarify-__gfp_memalloc-usage.patch
* mm-memblock-replace-dereferences-of-memblock_regionnid-with-api-calls.patch
* mm-make-early_pfn_to_nid-and-related-defintions-close-to-each-other.patch
* mm-remove-config_have_memblock_node_map-option.patch
* mm-free_area_init-use-maximal-zone-pfns-rather-than-zone-sizes.patch
* mm-use-free_area_init-instead-of-free_area_init_nodes.patch
* alpha-simplify-detection-of-memory-zone-boundaries.patch
* arm-simplify-detection-of-memory-zone-boundaries.patch
* arm64-simplify-detection-of-memory-zone-boundaries-for-uma-configs.patch
* csky-simplify-detection-of-memory-zone-boundaries.patch
* m68k-mm-simplify-detection-of-memory-zone-boundaries.patch
* parisc-simplify-detection-of-memory-zone-boundaries.patch
* sparc32-simplify-detection-of-memory-zone-boundaries.patch
* unicore32-simplify-detection-of-memory-zone-boundaries.patch
* xtensa-simplify-detection-of-memory-zone-boundaries.patch
* mm-memmap_init-iterate-over-memblock-regions-rather-that-check-each-pfn.patch
* mm-remove-early_pfn_in_nid-and-config_nodes_span_other_nodes.patch
* mm-free_area_init-allow-defining-max_zone_pfn-in-descending-order.patch
* mm-rename-free_area_init_node-to-free_area_init_memoryless_node.patch
* mm-clean-up-free_area_init_node-and-its-helpers.patch
* mm-simplify-find_min_pfn_with_active_regions.patch
* docs-vm-update-memory-models-documentation.patch
* mm-page_allocc-bad_-is-not-necessary-when-pagehwpoison.patch
* mm-page_allocc-bad_flags-is-not-necessary-for-bad_page.patch
* mm-page_allocc-rename-free_pages_check_bad-to-check_free_page_bad.patch
* mm-page_allocc-rename-free_pages_check-to-check_free_page.patch
* mm-page_allocc-extract-check__page_bad-common-part-to-page_bad_reason.patch
* mmpage_alloccma-conditionally-prefer-cma-pageblocks-for-movable-allocations.patch
* mm-remove-unused-free_bootmem_with_active_regions.patch
* mm-page_allocc-only-tune-sysctl_lowmem_reserve_ratio-value-once-when-changing-it.patch
* mm-page_allocc-clear-out-zone-lowmem_reserve-if-the-zone-is-empty.patch
* mm-vmstatc-do-not-show-lowmem-reserve-protection-information-of-empty-zone.patch
* mm-page_alloc-use-ac-high_zoneidx-for-classzone_idx.patch
* mm-page_alloc-integrate-classzone_idx-and-high_zoneidx.patch
* mm-page_allocc-use-node_mask_none-in-build_zonelists.patch
* mm-rename-gfpflags_to_migratetype-to-gfp_migratetype-for-same-convention.patch
* mm-reset-numa-stats-for-boot-pagesets.patch
* mm-page_alloc-reset-the-zone-watermark_boost-early.patch
* mm-page_alloc-restrict-and-formalize-compound_page_dtors.patch
* mm-call-touch_nmi_watchdog-on-max-order-boundaries-in-deferred-init.patch
* mm-initialize-deferred-pages-with-interrupts-enabled.patch
* mm-call-cond_resched-from-deferred_init_memmap.patch
* padata-remove-exit-routine.patch
* padata-initialize-earlier.patch
* padata-allocate-work-structures-for-parallel-jobs-from-a-pool.patch
* padata-add-basic-support-for-multithreaded-jobs.patch
* mm-dont-track-number-of-pages-during-deferred-initialization.patch
* mm-parallelize-deferred_init_memmap.patch
* mm-make-deferred-inits-max-threads-arch-specific.patch
* padata-document-multithreaded-jobs.patch
* mm-page_allocc-add-missing-line-breaks.patch
* khugepaged-add-self-test.patch
* khugepaged-do-not-stop-collapse-if-less-than-half-ptes-are-referenced.patch
* khugepaged-drain-all-lru-caches-before-scanning-pages.patch
* khugepaged-drain-lru-add-pagevec-after-swapin.patch
* khugepaged-allow-to-collapse-a-page-shared-across-fork.patch
* khugepaged-allow-to-collapse-pte-mapped-compound-pages.patch
* thp-change-cow-semantics-for-anon-thp.patch
* khugepaged-introduce-max_ptes_shared-tunable.patch
* hugetlbfs-add-arch_hugetlb_valid_size.patch
* hugetlbfs-move-hugepagesz=-parsing-to-arch-independent-code.patch
* hugetlbfs-remove-hugetlb_add_hstate-warning-for-existing-hstate.patch
* hugetlbfs-clean-up-command-line-processing.patch
* hugetlbfs-move-hugepagesz=-parsing-to-arch-independent-code-fix.patch
* mm-hugetlb-avoid-unnecessary-check-on-pud-and-pmd-entry-in-huge_pte_offset.patch
* arm64-mm-drop-__have_arch_huge_ptep_get.patch
* mm-hugetlb-define-a-generic-fallback-for-is_hugepage_only_range.patch
* mm-hugetlb-define-a-generic-fallback-for-arch_clear_hugepage_flags.patch
* mm-simplify-calling-a-compound-page-destructor.patch
* mm-vmscanc-use-update_lru_size-in-update_lru_sizes.patch
* mm-vmscan-count-layzfree-pages-and-fix-nr_isolated_-mismatch.patch
* mm-vmscanc-change-prototype-for-shrink_page_list.patch
* mm-vmscan-update-the-comment-of-should_continue_reclaim.patch
* mm-fix-numa-node-file-count-error-in-replace_page_cache.patch
* mm-memcontrol-fix-stat-corrupting-race-in-charge-moving.patch
* mm-memcontrol-drop-compound-parameter-from-memcg-charging-api.patch
* mm-shmem-remove-rare-optimization-when-swapin-races-with-hole-punching.patch
* mm-memcontrol-move-out-cgroup-swaprate-throttling.patch
* mm-memcontrol-convert-page-cache-to-a-new-mem_cgroup_charge-api.patch
* mm-memcontrol-prepare-uncharging-for-removal-of-private-page-type-counters.patch
* mm-memcontrol-prepare-move_account-for-removal-of-private-page-type-counters.patch
* mm-memcontrol-prepare-cgroup-vmstat-infrastructure-for-native-anon-counters.patch
* mm-memcontrol-switch-to-native-nr_file_pages-and-nr_shmem-counters.patch
* mm-memcontrol-switch-to-native-nr_anon_mapped-counter.patch
* mm-memcontrol-switch-to-native-nr_anon_thps-counter.patch
* mm-memcontrol-convert-anon-and-file-thp-to-new-mem_cgroup_charge-api.patch
* mm-memcontrol-drop-unused-try-commit-cancel-charge-api.patch
* mm-memcontrol-prepare-swap-controller-setup-for-integration.patch
* mm-memcontrol-make-swap-tracking-an-integral-part-of-memory-control.patch
* mm-memcontrol-charge-swapin-pages-on-instantiation.patch
* mm-memcontrol-document-the-new-swap-control-behavior.patch
* mm-memcontrol-delete-unused-lrucare-handling.patch
* mm-memcontrol-update-page-mem_cgroup-stability-rules.patch
* mm-fix-lru-balancing-effect-of-new-transparent-huge-pages.patch
* mm-keep-separate-anon-and-file-statistics-on-page-reclaim-activity.patch
* mm-allow-swappiness-that-prefers-reclaiming-anon-over-the-file-workingset.patch
* mm-fold-and-remove-lru_cache_add_anon-and-lru_cache_add_file.patch
* mm-workingset-let-cache-workingset-challenge-anon.patch
* mm-remove-use-once-cache-bias-from-lru-balancing.patch
* mm-vmscan-drop-unnecessary-div0-avoidance-rounding-in-get_scan_count.patch
* mm-base-lru-balancing-on-an-explicit-cost-model.patch
* mm-deactivations-shouldnt-bias-the-lru-balance.patch
* mm-only-count-actual-rotations-as-lru-reclaim-cost.patch
* mm-balance-lru-lists-based-on-relative-thrashing.patch
* mm-vmscan-determine-anon-file-pressure-balance-at-the-reclaim-root.patch
* mm-vmscan-reclaim-writepage-is-io-cost.patch
* mm-vmscan-limit-the-range-of-lru-type-balancing.patch
* mm-swap-fix-vmstats-for-huge-pages.patch
* mm-swap-memcg-fix-memcg-stats-for-huge-pages.patch
* tools-vm-page_owner_sort-filter-out-unneeded-line.patch
* mm-mempolicy-fix-up-gup-usage-in-lookup_node.patch
* mm-memblock-fix-minor-typo-and-unclear-comment.patch
* sparc32-register-memory-occupied-by-kernel-as-memblockmemory.patch
* hugetlbfs-get-unmapped-area-below-task_unmapped_base-for-hugetlbfs.patch
* mm-thp-dont-need-drain-lru-cache-when-splitting-and-mlocking-thp.patch
* powerpc-mm-drop-platform-defined-pmd_mknotpresent.patch
* mm-thp-rename-pmd_mknotpresent-as-pmd_mknotvalid.patch
* drivers-base-memoryc-cache-memory-blocks-in-xarray-to-accelerate-lookup.patch
* mm-add-debug_wx-support.patch
* riscv-support-debug_wx.patch
* x86-mm-use-arch_has_debug_wx-instead-of-arch-defined.patch
* arm64-mm-use-arch_has_debug_wx-instead-of-arch-defined.patch
* checkpatch-test-git_dir-changes.patch
* proc-kpageflags-prevent-an-integer-overflow-in-stable_page_flags.patch
* proc-kpageflags-do-not-use-uninitialized-struct-pages.patch
* kcov-cleanup-debug-messages.patch
* kcov-fix-potential-use-after-free-in-kcov_remote_start.patch
* kcov-move-t-kcov-assignments-into-kcov_start-stop.patch
* kcov-move-t-kcov_sequence-assignment.patch
* kcov-use-t-kcov_mode-as-enabled-indicator.patch
* kcov-collect-coverage-from-interrupts.patch
* usb-core-kcov-collect-coverage-from-usb-complete-callback.patch
* lib-lzo-fix-ambiguous-encoding-bug-in-lzo-rle.patch
* ocfs2-clear-links-count-in-ocfs2_mknod-if-an-error-occurs.patch
* ocfs2-fix-ocfs2-corrupt-when-iputting-an-inode.patch
* drivers-tty-serial-sh-scic-suppress-uninitialized-var-warning.patch
* ramfs-support-o_tmpfile.patch
* kernel-watchdog-flush-all-printk-nmi-buffers-when-hardlockup-detected.patch
mm.patch
* mm-mmap-fix-the-adjusted-length-error.patch
* mm-page_alloc-skip-waternark_boost-for-atomic-order-0-allocations.patch
* mm-page_alloc-skip-waternark_boost-for-atomic-order-0-allocations-fix.patch
* mm-add-comments-on-pglist_data-zones.patch
* arch-kmap-remove-bug_on.patch
* arch-xtensa-move-kmap-build-bug-out-of-the-way.patch
* arch-kmap-remove-redundant-arch-specific-kmaps.patch
* arch-kunmap-remove-duplicate-kunmap-implementations.patch
* arch-kunmap-remove-duplicate-kunmap-implementations-fix.patch
* x86powerpcmicroblaze-kmap-move-preempt-disable.patch
* arch-kmap_atomic-consolidate-duplicate-code.patch
* arch-kmap_atomic-consolidate-duplicate-code-checkpatch-fixes.patch
* arch-kunmap_atomic-consolidate-duplicate-code.patch
* arch-kunmap_atomic-consolidate-duplicate-code-fix.patch
* arch-kunmap_atomic-consolidate-duplicate-code-checkpatch-fixes.patch
* arch-kmap-ensure-kmap_prot-visibility.patch
* arch-kmap-dont-hard-code-kmap_prot-values.patch
* arch-kmap-define-kmap_atomic_prot-for-all-archs.patch
* drm-remove-drm-specific-kmap_atomic-code.patch
* drm-remove-drm-specific-kmap_atomic-code-fix.patch
* kmap-remove-kmap_atomic_to_page.patch
* parisc-kmap-remove-duplicate-kmap-code.patch
* sparc-remove-unnecessary-includes.patch
* kmap-consolidate-kmap_prot-definitions.patch
* kmap-consolidate-kmap_prot-definitions-checkpatch-fixes.patch
* mm-vmstat-add-events-for-pmd-based-thp-migration-without-split.patch
* mm-vmstat-add-events-for-pmd-based-thp-migration-without-split-fix.patch
* mm-vmstat-add-events-for-pmd-based-thp-migration-without-split-update.patch
* mm-add-kvfree_sensitive-for-freeing-sensitive-data-objects.patch
* mm-memory_hotplug-refrain-from-adding-memory-into-an-impossible-node.patch
* powerpc-pseries-hotplug-memory-stop-checking-is_mem_section_removable.patch
* mm-memory_hotplug-remove-is_mem_section_removable.patch
* mm-memory_hotplug-set-node_start_pfn-of-hotadded-pgdat-to-0.patch
* mm-memory_hotplug-handle-memblocks-only-with-config_arch_keep_memblock.patch
* mm-memory_hotplug-introduce-add_memory_driver_managed.patch
* kexec_file-dont-place-kexec-images-on-ioresource_mem_driver_managed.patch
* device-dax-add-memory-via-add_memory_driver_managed.patch
* mm-replace-zero-length-array-with-flexible-array-member.patch
* mm-replace-zero-length-array-with-flexible-array-member-fix.patch
* mm-memory_hotplug-fix-a-typo-in-comment-recoreded-recorded.patch
* mm-ksm-fix-a-typo-in-comment-alreaady-already.patch
* mm-ksm-fix-a-typo-in-comment-alreaady-already-v2.patch
* mm-mmap-fix-a-typo-in-comment-compatbility-compatibility.patch
* mm-hugetlb-fix-a-typo-in-comment-manitained-maintained.patch
* mm-hugetlb-fix-a-typo-in-comment-manitained-maintained-v2.patch
* mm-hugetlb-fix-a-typo-in-comment-manitained-maintained-v2-checkpatch-fixes.patch
* mm-vmsan-fix-some-typos-in-comment.patch
* mm-compaction-fix-a-typo-in-comment-pessemistic-pessimistic.patch
* mm-memblock-fix-a-typo-in-comment-implict-implicit.patch
* mm-list_lru-fix-a-typo-in-comment-numbesr-numbers.patch
* mm-filemap-fix-a-typo-in-comment-unneccssary-unnecessary.patch
* mm-frontswap-fix-some-typos-in-frontswapc.patch
* mm-memcg-fix-some-typos-in-memcontrolc.patch
* mm-fix-a-typo-in-comment-strucure-structure.patch
* mm-slub-fix-a-typo-in-comment-disambiguiation-disambiguation.patch
* mm-sparse-fix-a-typo-in-comment-convienence-convenience.patch
* mm-page-writeback-fix-a-typo-in-comment-effictive-effective.patch
* mm-memory-fix-a-typo-in-comment-attampt-attempt.patch
* mm-use-false-for-bool-variable.patch
* mm-return-true-in-cpupid_pid_unset.patch
* zcomp-use-array_size-for-backends-list.patch
* info-task-hung-in-generic_file_write_iter.patch
* info-task-hung-in-generic_file_write-fix.patch
* kernel-hung_taskc-monitor-killed-tasks.patch
* proc-rename-catch-function-argument.patch
* x86-mm-define-mm_p4d_folded.patch
* mm-debug-add-tests-validating-architecture-page-table-helpers.patch
* mm-debug-add-tests-validating-architecture-page-table-helpers-v17.patch
* mm-debug-add-tests-validating-architecture-page-table-helpers-v18.patch
* userc-make-uidhash_table-static.patch
* get_maintainer-add-email-addresses-from-yaml-files.patch
* get_maintainer-fix-unexpected-behavior-for-path-to-file-double-slashes.patch
* lib-math-avoid-trailing-n-hidden-in-pr_fmt.patch
* lib-add-might_fault-to-strncpy_from_user.patch
* lib-optimize-cpumask_local_spread.patch
* lib-test_lockupc-make-test_inode-static.patch
* lib-zlib-remove-outdated-and-incorrect-pre-increment-optimization.patch
* percpu_ref-use-a-more-common-logging-style.patch
* lib-flex_proportionsc-cleanup-__fprop_inc_percpu_max.patch
* lib-make-a-test-module-with-set-clear-bit.patch
* bitops-avoid-clang-shift-count-overflow-warnings.patch
* bitops-simplify-get_count_order_long.patch
* bitops-use-the-same-mechanism-for-get_count_order.patch
* lib-test-get_count_order-long-in-test_bitopsc.patch
* lib-test-get_count_order-long-in-test_bitopsc-fix.patch
* checkpatch-additional-maintainer-section-entry-ordering-checks.patch
* checkpatch-look-for-c99-comments-in-ctx_locate_comment.patch
* checkpatch-disallow-git-and-file-fix.patch
* checkpatch-use-patch-subject-when-reading-from-stdin.patch
* checkpatch-use-patch-subject-when-reading-from-stdin-fix.patch
* fs-binfmt_elf-remove-redundant-elf_map-ifndef.patch
* elfnote-mark-all-note-sections-shf_alloc.patch
* init-allow-distribution-configuration-of-default-init.patch
* fat-dont-allow-to-mount-if-the-fat-length-==-0.patch
* fat-improve-the-readahead-for-fat-entries.patch
* fs-seq_filec-seq_read-update-pr_info_ratelimited.patch
* seq_file-introduce-define_seq_attribute-helper-macro.patch
* seq_file-introduce-define_seq_attribute-helper-macro-checkpatch-fixes.patch
* mm-vmstat-convert-to-use-define_seq_attribute-macro.patch
* kernel-kprobes-convert-to-use-define_seq_attribute-macro.patch
* exec-simplify-the-copy_strings_kernel-calling-convention.patch
* exec-open-code-copy_string_kernel.patch
* exec-change-uselib2-is_sreg-failure-to-eacces.patch
* exec-relocate-s_isreg-check.patch
* exec-relocate-path_noexec-check.patch
* fs-include-fmode_exec-when-converting-flags-to-f_mode.patch
* umh-fix-refcount-underflow-in-fork_usermode_blob.patch
* rapidio-avoid-data-race-between-file-operation-callbacks-and-mport_cdev_add.patch
* rapidio-convert-get_user_pages-pin_user_pages.patch
* relay-handle-alloc_percpu-returning-null-in-relay_open.patch
* kernel-relayc-fix-read_pos-error-when-multiple-readers.patch
* aio-simplify-read_events.patch
* selftests-x86-pkeys-move-selftests-to-arch-neutral-directory.patch
* selftests-vm-pkeys-rename-all-references-to-pkru-to-a-generic-name.patch
* selftests-vm-pkeys-move-generic-definitions-to-header-file.patch
* selftests-vm-pkeys-move-some-definitions-to-arch-specific-header.patch
* selftests-vm-pkeys-make-gcc-check-arguments-of-sigsafe_printf.patch
* selftests-vm-pkeys-use-sane-types-for-pkey-register.patch
* selftests-vm-pkeys-add-helpers-for-pkey-bits.patch
* selftests-vm-pkeys-fix-pkey_disable_clear.patch
* selftests-vm-pkeys-fix-assertion-in-pkey_disable_set-clear.patch
* selftests-vm-pkeys-fix-alloc_random_pkey-to-make-it-really-random.patch
* selftests-vm-pkeys-use-the-correct-huge-page-size.patch
* selftests-vm-pkeys-introduce-generic-pkey-abstractions.patch
* selftests-vm-pkeys-introduce-powerpc-support.patch
* selftests-vm-pkeys-introduce-powerpc-support-fix.patch
* selftests-vm-pkeys-fix-number-of-reserved-powerpc-pkeys.patch
* selftests-vm-pkeys-fix-assertion-in-test_pkey_alloc_exhaust.patch
* selftests-vm-pkeys-improve-checks-to-determine-pkey-support.patch
* selftests-vm-pkeys-associate-key-on-a-mapped-page-and-detect-access-violation.patch
* selftests-vm-pkeys-associate-key-on-a-mapped-page-and-detect-write-violation.patch
* selftests-vm-pkeys-detect-write-violation-on-a-mapped-access-denied-key-page.patch
* selftests-vm-pkeys-introduce-a-sub-page-allocator.patch
* selftests-vm-pkeys-test-correct-behaviour-of-pkey-0.patch
* selftests-vm-pkeys-override-access-right-definitions-on-powerpc.patch
* selftests-vm-pkeys-override-access-right-definitions-on-powerpc-fix.patch
* selftests-vm-pkeys-use-the-correct-page-size-on-powerpc.patch
* selftests-vm-pkeys-fix-multilib-builds-for-x86.patch
* tools-testing-selftests-vm-remove-duplicate-headers.patch
* ubsan-fix-gcc-10-warnings.patch
* ipc-msg-add-missing-annotation-for-freeque.patch
* ipc-use-a-work-queue-to-free_ipc.patch
* ipc-convert-ipcs_idr-to-xarray.patch
* ipc-convert-ipcs_idr-to-xarray-update.patch
* ipc-convert-ipcs_idr-to-xarray-update-fix.patch
* linux-next-pre.patch
linux-next.patch
linux-next-rejects.patch
linux-next-git-rejects.patch
* linux-next-post.patch
* dynamic_debug-add-an-option-to-enable-dynamic-debug-for-modules-only.patch
* dynamic_debug-add-an-option-to-enable-dynamic-debug-for-modules-only-v2.patch
* kernel-add-panic_on_taint.patch
* kernel-add-panic_on_taint-fix.patch
* xarrayh-correct-return-code-for-xa_store_bhirq.patch
* kernel-sysctl-support-setting-sysctl-parameters-from-kernel-command-line.patch
* kernel-sysctl-support-handling-command-line-aliases.patch
* kernel-hung_task-convert-hung_task_panic-boot-parameter-to-sysctl.patch
* tools-testing-selftests-sysctl-sysctlsh-support-config_test_sysctl=y.patch
* lib-test_sysctl-support-testing-of-sysctl-boot-parameter.patch
* lib-test_sysctl-support-testing-of-sysctl-boot-parameter-fix.patch
* kernel-watchdogc-convert-soft-hardlockup-boot-parameters-to-sysctl-aliases.patch
* kernel-hung_taskc-introduce-sysctl-to-print-all-traces-when-a-hung-task-is-detected.patch
* panic-add-sysctl-to-dump-all-cpus-backtraces-on-oops-event.patch
* kernel-sysctl-ignore-out-of-range-taint-bits-introduced-via-kerneltainted.patch
* stacktrace-cleanup-inconsistent-variable-type.patch
* amdgpu-a-null-mm-does-not-mean-a-thread-is-a-kthread.patch
* kernel-move-use_mm-unuse_mm-to-kthreadc.patch
* kernel-move-use_mm-unuse_mm-to-kthreadc-v2.patch
* kernel-better-document-the-use_mm-unuse_mm-api-contract.patch
* kernel-better-document-the-use_mm-unuse_mm-api-contract-v2.patch
* kernel-better-document-the-use_mm-unuse_mm-api-contract-v2-fix.patch
* kernel-better-document-the-use_mm-unuse_mm-api-contract-fix-2.patch
* kernel-set-user_ds-in-kthread_use_mm.patch
* mm-kmemleak-silence-kcsan-splats-in-checksum.patch
* kallsyms-printk-add-loglvl-to-print_ip_sym.patch
* alpha-add-show_stack_loglvl.patch
* arc-add-show_stack_loglvl.patch
* arm-asm-add-loglvl-to-c_backtrace.patch
* arm-add-loglvl-to-unwind_backtrace.patch
* arm-add-loglvl-to-dump_backtrace.patch
* arm-wire-up-dump_backtrace_entrystm.patch
* arm-add-show_stack_loglvl.patch
* arm64-add-loglvl-to-dump_backtrace.patch
* arm64-add-show_stack_loglvl.patch
* c6x-add-show_stack_loglvl.patch
* csky-add-show_stack_loglvl.patch
* h8300-add-show_stack_loglvl.patch
* hexagon-add-show_stack_loglvl.patch
* ia64-pass-log-level-as-arg-into-ia64_do_show_stack.patch
* ia64-add-show_stack_loglvl.patch
* m68k-add-show_stack_loglvl.patch
* microblaze-add-loglvl-to-microblaze_unwind_inner.patch
* microblaze-add-loglvl-to-microblaze_unwind.patch
* microblaze-add-show_stack_loglvl.patch
* mips-add-show_stack_loglvl.patch
* nds32-add-show_stack_loglvl.patch
* nios2-add-show_stack_loglvl.patch
* openrisc-add-show_stack_loglvl.patch
* parisc-add-show_stack_loglvl.patch
* powerpc-add-show_stack_loglvl.patch
* riscv-add-show_stack_loglvl.patch
* s390-add-show_stack_loglvl.patch
* sh-add-loglvl-to-dump_mem.patch
* sh-remove-needless-printk.patch
* sh-add-loglvl-to-printk_address.patch
* sh-add-loglvl-to-show_trace.patch
* sh-add-show_stack_loglvl.patch
* sparc-add-show_stack_loglvl.patch
* um-sysrq-remove-needless-variable-sp.patch
* um-add-show_stack_loglvl.patch
* unicore32-remove-unused-pmode-argument-in-c_backtrace.patch
* unicore32-add-loglvl-to-c_backtrace.patch
* unicore32-add-show_stack_loglvl.patch
* x86-add-missing-const-qualifiers-for-log_lvl.patch
* x86-add-show_stack_loglvl.patch
* xtensa-add-loglvl-to-show_trace.patch
* xtensa-add-loglvl-to-show_trace-fix.patch
* xtensa-add-show_stack_loglvl.patch
* sysrq-use-show_stack_loglvl.patch
* x86-amd_gart-print-stacktrace-for-a-leak-with-kern_err.patch
* power-use-show_stack_loglvl.patch
* kdb-dont-play-with-console_loglevel.patch
* sched-print-stack-trace-with-kern_info.patch
* kernel-use-show_stack_loglvl.patch
* kernel-rename-show_stack_loglvl-=-show_stack.patch
* mm-frontswap-mark-various-intentional-data-races.patch
* mm-page_io-mark-various-intentional-data-races.patch
* mm-page_io-mark-various-intentional-data-races-v2.patch
* mm-swap_state-mark-various-intentional-data-races.patch
* mm-filemap-fix-a-data-race-in-filemap_fault.patch
* mm-swapfile-fix-and-annotate-various-data-races.patch
* mm-swapfile-fix-and-annotate-various-data-races-v2.patch
* mm-page_counter-fix-various-data-races-at-memsw.patch
* mm-memcontrol-fix-a-data-race-in-scan-count.patch
* mm-list_lru-fix-a-data-race-in-list_lru_count_one.patch
* mm-mempool-fix-a-data-race-in-mempool_free.patch
* mm-util-annotate-an-data-race-at-vm_committed_as.patch
* mm-rmap-annotate-a-data-race-at-tlb_flush_batched.patch
* mm-annotate-a-data-race-in-page_zonenum.patch
* mm-swap-annotate-data-races-for-lru_rotate_pvecs.patch
* mm-gupc-convert-to-use-get_user_pagepages_fast_only.patch
* mm-gup-update-pin_user_pagesrst-for-case-3-mmu-notifiers.patch
* mm-gup-introduce-pin_user_pages_locked.patch
* mm-gup-introduce-pin_user_pages_locked-v2.patch
* mm-gup-frame_vector-convert-get_user_pages-pin_user_pages.patch
* mm-gup-documentation-fix-for-pin_user_pages-apis.patch
* docs-mm-gup-pin_user_pagesrst-add-a-case-5.patch
* vhost-convert-get_user_pages-pin_user_pages.patch
* h8300-remove-usage-of-__arch_use_5level_hack.patch
* arm-add-support-for-folded-p4d-page-tables.patch
* arm-add-support-for-folded-p4d-page-tables-fix.patch
* arm64-add-support-for-folded-p4d-page-tables.patch
* arm64-add-support-for-folded-p4d-page-tables-fix.patch
* hexagon-remove-__arch_use_5level_hack.patch
* ia64-add-support-for-folded-p4d-page-tables.patch
* nios2-add-support-for-folded-p4d-page-tables.patch
* openrisc-add-support-for-folded-p4d-page-tables.patch
* powerpc-add-support-for-folded-p4d-page-tables.patch
* powerpc-add-support-for-folded-p4d-page-tables-fix-2.patch
* sh-fault-modernize-printing-of-kernel-messages.patch
* sh-drop-__pxd_offset-macros-that-duplicate-pxd_index-ones.patch
* sh-add-support-for-folded-p4d-page-tables.patch
* unicore32-remove-__arch_use_5level_hack.patch
* asm-generic-remove-pgtable-nop4d-hackh.patch
* mm-remove-__arch_has_5level_hack-and-include-asm-generic-5level-fixuph.patch
* net-zerocopy-use-vm_insert_pages-for-tcp-rcv-zerocopy.patch
* mm-mmapc-add-more-sanity-checks-to-get_unmapped_area.patch
* mm-mmapc-do-not-allow-mappings-outside-of-allowed-limits.patch
* mm-dont-include-asm-pgtableh-if-linux-mmh-is-already-included.patch
* mm-introduce-include-linux-pgtableh.patch
* mm-reorder-includes-after-introduction-of-linux-pgtableh.patch
* csky-replace-definitions-of-__pxd_offset-with-pxd_index.patch
* m68k-mm-motorola-move-comment-about-page-table-allocation-funcitons.patch
* m68k-mm-move-cachenocahe_page-definitions-close-to-their-user.patch
* x86-mm-simplify-init_trampoline-and-surrounding-logic.patch
* x86-mm-simplify-init_trampoline-and-surrounding-logic-fix.patch
* mm-pgtable-add-shortcuts-for-accessing-kernel-pmd-and-pte.patch
* mm-pgtable-add-shortcuts-for-accessing-kernel-pmd-and-pte-fix.patch
* mm-consolidate-pte_index-and-pte_offset_-definitions.patch
* mm-consolidate-pmd_index-and-pmd_offset-definitions.patch
* mm-consolidate-pud_index-and-pud_offset-definitions.patch
* mm-consolidate-pgd_index-and-pgd_offset_k-definitions.patch
* mm-consolidate-pgd_index-and-pgd_offset_k-definitions-fix.patch
* arm-fix-the-flush_icache_range-arguments-in-set_fiq_handler.patch
* nds32-unexport-flush_icache_page.patch
* powerpc-unexport-flush_icache_user_range.patch
* unicore32-remove-flush_cache_user_range.patch
* asm-generic-fix-the-inclusion-guards-for-cacheflushh.patch
* asm-generic-dont-include-linux-mmh-in-cacheflushh.patch
* asm-generic-dont-include-linux-mmh-in-cacheflushh-fix.patch
* asm-generic-improve-the-flush_dcache_page-stub.patch
* alpha-use-asm-generic-cacheflushh.patch
* arm64-use-asm-generic-cacheflushh.patch
* c6x-use-asm-generic-cacheflushh.patch
* hexagon-use-asm-generic-cacheflushh.patch
* ia64-use-asm-generic-cacheflushh.patch
* microblaze-use-asm-generic-cacheflushh.patch
* m68knommu-use-asm-generic-cacheflushh.patch
* openrisc-use-asm-generic-cacheflushh.patch
* powerpc-use-asm-generic-cacheflushh.patch
* riscv-use-asm-generic-cacheflushh.patch
* armsparcunicore32-remove-flush_icache_user_range.patch
* mm-rename-flush_icache_user_range-to-flush_icache_user_page.patch
* asm-generic-add-a-flush_icache_user_range-stub.patch
* sh-implement-flush_icache_user_range.patch
* xtensa-implement-flush_icache_user_range.patch
* xtensa-implement-flush_icache_user_range-fix.patch
* arm-rename-flush_cache_user_range-to-flush_icache_user_range.patch
* m68k-implement-flush_icache_user_range.patch
* exec-only-build-read_code-when-needed.patch
* exec-use-flush_icache_user_range-in-read_code.patch
* binfmt_flat-use-flush_icache_user_range.patch
* nommu-use-flush_icache_user_range-in-brk-and-mmap.patch
* module-move-the-set_fs-hack-for-flush_icache_range-to-m68k.patch
* mmap-locking-api-initial-implementation-as-rwsem-wrappers.patch
* mmu-notifier-use-the-new-mmap-locking-api.patch
* dma-reservations-use-the-new-mmap-locking-api.patch
* mmap-locking-api-use-coccinelle-to-convert-mmap_sem-rwsem-call-sites.patch
* mmap-locking-api-convert-mmap_sem-call-sites-missed-by-coccinelle.patch
* mmap-locking-api-convert-mmap_sem-call-sites-missed-by-coccinelle-fix.patch
* mmap-locking-api-convert-mmap_sem-call-sites-missed-by-coccinelle-fix-fix.patch
* mmap-locking-api-convert-mmap_sem-call-sites-missed-by-coccinelle-fix-fix-fix.patch
* mmap-locking-api-convert-nested-write-lock-sites.patch
* mmap-locking-api-add-mmap_read_trylock_non_owner.patch
* mmap-locking-api-add-mmap_lock_initializer.patch
* mmap-locking-api-add-mmap_assert_locked-and-mmap_assert_write_locked.patch
* mmap-locking-api-rename-mmap_sem-to-mmap_lock.patch
* mmap-locking-api-rename-mmap_sem-to-mmap_lock-fix.patch
* mmap-locking-api-convert-mmap_sem-api-comments.patch
* mmap-locking-api-convert-mmap_sem-comments.patch
* mmap-locking-api-convert-mmap_sem-comments-fix.patch
* mmap-locking-api-convert-mmap_sem-comments-fix-fix.patch
* mmap-locking-api-convert-mmap_sem-comments-fix-fix-fix.patch
* mm-pass-task-and-mm-to-do_madvise.patch
* mm-introduce-external-memory-hinting-api.patch
* mm-introduce-external-memory-hinting-api-fix.patch
* mm-introduce-external-memory-hinting-api-fix-2.patch
* mm-introduce-external-memory-hinting-api-fix-2-fix.patch
* mm-check-fatal-signal-pending-of-target-process.patch
* pid-move-pidfd_get_pid-function-to-pidc.patch
* mm-support-both-pid-and-pidfd-for-process_madvise.patch
* mm-madvise-allow-ksm-hints-for-remote-api.patch
* mm-support-vector-address-ranges-for-process_madvise.patch
* mm-support-vector-address-ranges-for-process_madvise-fix.patch
* mm-support-vector-address-ranges-for-process_madvise-fix-fix.patch
* mm-support-vector-address-ranges-for-process_madvise-fix-fix-fix.patch
* mm-support-vector-address-ranges-for-process_madvise-fix-fix-fix-fix.patch
* mm-support-vector-address-ranges-for-process_madvise-fix-fix-fix-fix-fix.patch
* mm-use-only-pidfd-for-process_madvise-syscall.patch
* mm-use-only-pidfd-for-process_madvise-syscall-fix.patch
* mm-remove-duplicated-include-from-madvisec.patch
* maccess-unexport-probe_kernel_write-and-probe_user_write.patch
* maccess-unexport-probe_kernel_write-and-probe_user_write-fix.patch
* maccess-remove-various-unused-weak-aliases.patch
* maccess-remove-duplicate-kerneldoc-comments.patch
* maccess-clarify-kerneldoc-comments.patch
* maccess-update-the-top-of-file-comment.patch
* maccess-rename-strncpy_from_unsafe_user-to-strncpy_from_user_nofault.patch
* maccess-rename-strncpy_from_unsafe_strict-to-strncpy_from_kernel_nofault.patch
* maccess-rename-strnlen_unsafe_user-to-strnlen_user_nofault.patch
* maccess-remove-probe_read_common-and-probe_write_common.patch
* maccess-unify-the-probe-kernel-arch-hooks.patch
* maccess-unify-the-probe-kernel-arch-hooks-fix.patch
* bpf-factor-out-a-bpf_trace_copy_string-helper.patch
* bpf-handle-the-compat-string-in-bpf_trace_copy_string-better.patch
* bpf-bpf_seq_printf-handle-potentially-unsafe-format-string-better.patch
* bpf-rework-the-compat-kernel-probe-handling.patch
* tracing-kprobes-handle-mixed-kernel-userspace-probes-better.patch
* maccess-remove-strncpy_from_unsafe.patch
* maccess-always-use-strict-semantics-for-probe_kernel_read.patch
* maccess-always-use-strict-semantics-for-probe_kernel_read-fix.patch
* maccess-move-user-access-routines-together.patch
* maccess-allow-architectures-to-provide-kernel-probing-directly.patch
* x86-use-non-set_fs-based-maccess-routines.patch
* x86-use-non-set_fs-based-maccess-routines-checkpatch-fixes.patch
* maccess-return-erange-when-copy_from_kernel_nofault_allowed-fails.patch
* mm-expand-documentation-over-__read_mostly.patch
* doc-cgroup-update-note-about-conditions-when-oom-killer-is-invoked.patch
* doc-cgroup-update-note-about-conditions-when-oom-killer-is-invoked-fix.patch
* sh-sh4a-bring-back-tmu3_device-early-device.patch
* arch-sh-vmlinuxscr-align-rodata.patch
* include-asm-generic-vmlinuxldsh-align-ro_after_init.patch
* sh-clkfwk-remove-r8-r16-r32.patch
* sh-remove-call-to-memset-after-dma_alloc_coherent.patch
* sh-use-generic-strncpy.patch
* sh-convert-ins-outs-macros-to-inline-functions.patch
* sh-convert-ins-outs-macros-to-inline-functions-checkpatch-fixes.patch
* sh-convert-iounmap-macros-to-inline-functions.patch
* sh-add-missing-export_symbol-for-__delay.patch
make-sure-nobodys-leaking-resources.patch
releasing-resources-with-children.patch
mutex-subsystem-synchro-test-module.patch
kernel-forkc-export-kernel_thread-to-modules.patch
workaround-for-a-pci-restoring-bug.patch
* incoming @ 2022-04-27 19:41 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2022-04-27 19:41 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits, patches

2 patches, based on d615b5416f8a1afeb82d13b238f8152c572d59c0.

Subsystems affected by this patch series: mm/kasan mm/debug

Subsystem: mm/kasan

Zqiang <qiang1.zhang@intel.com>:
  kasan: prevent cpu_quarantine corruption when CPU offline and cache shrink occur at same time

Subsystem: mm/debug

Akira Yokosawa <akiyks@gmail.com>:
  docs: vm/page_owner: use literal blocks for param description

 Documentation/vm/page_owner.rst | 5 +++--
 mm/kasan/quarantine.c           | 7 +++++++
 2 files changed, 10 insertions(+), 2 deletions(-)

^ permalink raw reply	[flat|nested] 349+ messages in thread
* incoming @ 2022-04-21 23:35 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2022-04-21 23:35 UTC (permalink / raw)
To: Linus Torvalds; +Cc: mm-commits, linux-mm, patches

13 patches, based on b253435746d9a4a701b5f09211b9c14d3370d0da.

Subsystems affected by this patch series: mm/memory-failure mm/memcg mm/userfaultfd mm/hugetlbfs mm/mremap mm/oom-kill mm/kasan kcov mm/hmm

Subsystem: mm/memory-failure

Naoya Horiguchi <naoya.horiguchi@nec.com>:
  mm/hwpoison: fix race between hugetlb free/demotion and memory_failure_hugetlb()

Xu Yu <xuyu@linux.alibaba.com>:
  mm/memory-failure.c: skip huge_zero_page in memory_failure()

Subsystem: mm/memcg

Shakeel Butt <shakeelb@google.com>:
  memcg: sync flush only if periodic flush is delayed

Subsystem: mm/userfaultfd

Nadav Amit <namit@vmware.com>:
  userfaultfd: mark uffd_wp regardless of VM_WRITE flag

Subsystem: mm/hugetlbfs

Christophe Leroy <christophe.leroy@csgroup.eu>:
  mm, hugetlb: allow for "high" userspace addresses

Subsystem: mm/mremap

Sidhartha Kumar <sidhartha.kumar@oracle.com>:
  selftest/vm: verify mmap addr in mremap_test
  selftest/vm: verify remap destination address in mremap_test
  selftest/vm: support xfail in mremap_test
  selftest/vm: add skip support to mremap_test

Subsystem: mm/oom-kill

Nico Pache <npache@redhat.com>:
  oom_kill.c: futex: delay the OOM reaper to allow time for proper futex cleanup

Subsystem: mm/kasan

Vincenzo Frascino <vincenzo.frascino@arm.com>:
  MAINTAINERS: add Vincenzo Frascino to KASAN reviewers

Subsystem: kcov

Aleksandr Nogikh <nogikh@google.com>:
  kcov: don't generate a warning on vm_insert_page()'s failure

Subsystem: mm/hmm

Alistair Popple <apopple@nvidia.com>:
  mm/mmu_notifier.c: fix race in mmu_interval_notifier_remove()

 MAINTAINERS                               | 1
 fs/hugetlbfs/inode.c                      | 9 -
 include/linux/hugetlb.h                   | 6 +
 include/linux/memcontrol.h                | 5
 include/linux/mm.h                        | 8 +
 include/linux/sched.h                     | 1
 include/linux/sched/mm.h                  | 8 +
 kernel/kcov.c                             | 7 -
 mm/hugetlb.c                              | 10 +
 mm/memcontrol.c                           | 12 ++
 mm/memory-failure.c                       | 158 ++++++++++++++++++++++--------
 mm/mmap.c                                 | 8 -
 mm/mmu_notifier.c                         | 14 ++
 mm/oom_kill.c                             | 54 +++++++---
 mm/userfaultfd.c                          | 15 +-
 mm/workingset.c                           | 2
 tools/testing/selftests/vm/mremap_test.c  | 85 +++++++++++++++-
 tools/testing/selftests/vm/run_vmtests.sh | 11 +-
 18 files changed, 327 insertions(+), 87 deletions(-)

^ permalink raw reply	[flat|nested] 349+ messages in thread
* incoming @ 2022-04-15 2:12 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2022-04-15 2:12 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits, patches

14 patches, based on 115acbb56978941bb7537a97dfc303da286106c1.

Subsystems affected by this patch series: MAINTAINERS mm/tmpfs mm/secretmem mm/kasan mm/kfence mm/pagealloc mm/zram mm/compaction mm/hugetlb binfmt mm/vmalloc mm/kmemleak

Subsystem: MAINTAINERS

Joe Perches <joe@perches.com>:
  MAINTAINERS: Broadcom internal lists aren't maintainers

Subsystem: mm/tmpfs

Hugh Dickins <hughd@google.com>:
  tmpfs: fix regressions from wider use of ZERO_PAGE

Subsystem: mm/secretmem

Axel Rasmussen <axelrasmussen@google.com>:
  mm/secretmem: fix panic when growing a memfd_secret

Subsystem: mm/kasan

Zqiang <qiang1.zhang@intel.com>:
  irq_work: use kasan_record_aux_stack_noalloc() record callstack

Vincenzo Frascino <vincenzo.frascino@arm.com>:
  kasan: fix hw tags enablement when KUNIT tests are disabled

Subsystem: mm/kfence

Marco Elver <elver@google.com>:
  mm, kfence: support kmem_dump_obj() for KFENCE objects

Subsystem: mm/pagealloc

Juergen Gross <jgross@suse.com>:
  mm, page_alloc: fix build_zonerefs_node()

Subsystem: mm/zram

Minchan Kim <minchan@kernel.org>:
  mm: fix unexpected zeroed page mapping with zram swap

Subsystem: mm/compaction

Charan Teja Kalla <quic_charante@quicinc.com>:
  mm: compaction: fix compiler warning when CONFIG_COMPACTION=n

Subsystem: mm/hugetlb

Mike Kravetz <mike.kravetz@oracle.com>:
  hugetlb: do not demote poisoned hugetlb pages

Subsystem: binfmt

Andrew Morton <akpm@linux-foundation.org>:
  revert "fs/binfmt_elf: fix PT_LOAD p_align values for loaders"
  revert "fs/binfmt_elf: use PT_LOAD p_align values for static PIE"

Subsystem: mm/vmalloc

Omar Sandoval <osandov@fb.com>:
  mm/vmalloc: fix spinning drain_vmap_work after reading from /proc/vmcore

Subsystem: mm/kmemleak

Patrick Wang <patrick.wang.shcn@gmail.com>:
  mm: kmemleak: take a full lowmem check in kmemleak_*_phys()
 MAINTAINERS                     | 64 ++++++++++++++++++++--------------------
 arch/x86/include/asm/io.h       | 2 -
 arch/x86/kernel/crash_dump_64.c | 1
 fs/binfmt_elf.c                 | 6 +--
 include/linux/kfence.h          | 24 +++++++++++++++
 kernel/irq_work.c               | 2 -
 mm/compaction.c                 | 10 +++---
 mm/filemap.c                    | 6 ---
 mm/hugetlb.c                    | 17 ++++++----
 mm/kasan/hw_tags.c              | 5 +--
 mm/kasan/kasan.h                | 10 +++---
 mm/kfence/core.c                | 21 -------------
 mm/kfence/kfence.h              | 21 +++++++++++++
 mm/kfence/report.c              | 47 +++++++++++++++++++++++++++++
 mm/kmemleak.c                   | 8 ++---
 mm/page_alloc.c                 | 2 -
 mm/page_io.c                    | 54 ---------------------------------
 mm/secretmem.c                  | 17 ++++++++++
 mm/shmem.c                      | 31 ++++++++++++-------
 mm/slab.c                       | 2 -
 mm/slab.h                       | 2 -
 mm/slab_common.c                | 9 +++++
 mm/slob.c                       | 2 -
 mm/slub.c                       | 2 -
 mm/vmalloc.c                    | 11 ------
 25 files changed, 207 insertions(+), 169 deletions(-)

^ permalink raw reply	[flat|nested] 349+ messages in thread
* incoming @ 2022-04-08 20:08 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2022-04-08 20:08 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits, patches

9 patches, based on d00c50b35101b862c3db270ffeba53a63a1063d9.

Subsystems affected by this patch series: mm/migration mm/highmem lz4 mm/sparsemem mm/mremap mm/mempolicy mailmap mm/memcg MAINTAINERS

Subsystem: mm/migration

Zi Yan <ziy@nvidia.com>:
  mm: migrate: use thp_order instead of HPAGE_PMD_ORDER for new page allocation.

Subsystem: mm/highmem

Max Filippov <jcmvbkbc@gmail.com>:
  highmem: fix checks in __kmap_local_sched_{in,out}

Subsystem: lz4

Guo Xuenan <guoxuenan@huawei.com>:
  lz4: fix LZ4_decompress_safe_partial read out of bound

Subsystem: mm/sparsemem

Waiman Long <longman@redhat.com>:
  mm/sparsemem: fix 'mem_section' will never be NULL gcc 12 warning

Subsystem: mm/mremap

Paolo Bonzini <pbonzini@redhat.com>:
  mm/mremap.c: avoid pointless invalidate_range_start/end on mremap(old_size=0)

Subsystem: mm/mempolicy

Miaohe Lin <linmiaohe@huawei.com>:
  mm/mempolicy: fix mpol_new leak in shared_policy_replace

Subsystem: mailmap

Vasily Averin <vasily.averin@linux.dev>:
  mailmap: update Vasily Averin's email address

Subsystem: mm/memcg

Andrew Morton <akpm@linux-foundation.org>:
  mm/list_lru.c: revert "mm/list_lru: optimize memcg_reparent_list_lru_node()"

Subsystem: MAINTAINERS

Tom Rix <trix@redhat.com>:
  MAINTAINERS: add Tom as clang reviewer

 .mailmap                 | 4 ++++
 MAINTAINERS              | 1 +
 include/linux/mmzone.h   | 11 +++++++----
 lib/lz4/lz4_decompress.c | 8 ++++++--
 mm/highmem.c             | 4 ++--
 mm/list_lru.c            | 6 ------
 mm/mempolicy.c           | 3 ++-
 mm/migrate.c             | 2 +-
 mm/mremap.c              | 3 +++
 9 files changed, 26 insertions(+), 16 deletions(-)

^ permalink raw reply	[flat|nested] 349+ messages in thread
* incoming @ 2022-04-01 18:27 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2022-04-01 18:27 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits, patches

16 patches, based on e8b767f5e04097aaedcd6e06e2270f9fe5282696.

Subsystems affected by this patch series: mm/madvise ocfs2 nilfs2 mm/mlock mm/kfence mailmap mm/memory-failure mm/kasan mm/debug mm/kmemleak mm/damon

Subsystem: mm/madvise

Charan Teja Kalla <quic_charante@quicinc.com>:
  Revert "mm: madvise: skip unmapped vma holes passed to process_madvise"

Subsystem: ocfs2

Joseph Qi <joseph.qi@linux.alibaba.com>:
  ocfs2: fix crash when mount with quota enabled

Subsystem: nilfs2

Ryusuke Konishi <konishi.ryusuke@gmail.com>:
  Patch series "nilfs2 lockdep warning fixes":
  nilfs2: fix lockdep warnings in page operations for btree nodes
  nilfs2: fix lockdep warnings during disk space reclamation
  nilfs2: get rid of nilfs_mapping_init()

Subsystem: mm/mlock

Hugh Dickins <hughd@google.com>:
  mm/munlock: add lru_add_drain() to fix memcg_stat_test
  mm/munlock: update Documentation/vm/unevictable-lru.rst

Sebastian Andrzej Siewior <bigeasy@linutronix.de>:
  mm/munlock: protect the per-CPU pagevec by a local_lock_t

Subsystem: mm/kfence

Muchun Song <songmuchun@bytedance.com>:
  mm: kfence: fix objcgs vector allocation

Subsystem: mailmap

Kirill Tkhai <kirill.tkhai@openvz.org>:
  mailmap: update Kirill's email

Subsystem: mm/memory-failure

Rik van Riel <riel@surriel.com>:
  mm,hwpoison: unmap poisoned page before invalidation

Subsystem: mm/kasan

Andrey Konovalov <andreyknvl@google.com>:
  mm, kasan: fix __GFP_BITS_SHIFT definition breaking LOCKDEP

Subsystem: mm/debug

Yinan Zhang <zhangyinan2019@email.szu.edu.cn>:
  tools/vm/page_owner_sort.c: remove -c option
  doc/vm/page_owner.rst: remove content related to -c option

Subsystem: mm/kmemleak

Kuan-Ying Lee <Kuan-Ying.Lee@mediatek.com>:
  mm/kmemleak: reset tag when compare object pointer

Subsystem: mm/damon

Jonghyeon Kim <tome01@ajou.ac.kr>:
  mm/damon: prevent
activated scheme from sleeping by deactivated schemes

 .mailmap                             | 1
 Documentation/vm/page_owner.rst      | 1
 Documentation/vm/unevictable-lru.rst | 473 +++++++++++++++--------------------
 fs/nilfs2/btnode.c                   | 23 +
 fs/nilfs2/btnode.h                   | 1
 fs/nilfs2/btree.c                    | 27 +
 fs/nilfs2/dat.c                      | 4
 fs/nilfs2/gcinode.c                  | 7
 fs/nilfs2/inode.c                    | 167 +++++++++++-
 fs/nilfs2/mdt.c                      | 45 ++-
 fs/nilfs2/mdt.h                      | 6
 fs/nilfs2/nilfs.h                    | 16 -
 fs/nilfs2/page.c                     | 16 -
 fs/nilfs2/page.h                     | 1
 fs/nilfs2/segment.c                  | 9
 fs/nilfs2/super.c                    | 5
 fs/ocfs2/quota_global.c              | 23 -
 fs/ocfs2/quota_local.c               | 2
 include/linux/gfp.h                  | 4
 mm/damon/core.c                      | 5
 mm/gup.c                             | 10
 mm/internal.h                        | 6
 mm/kfence/core.c                     | 11
 mm/kfence/kfence.h                   | 3
 mm/kmemleak.c                        | 9
 mm/madvise.c                         | 9
 mm/memory.c                          | 12
 mm/migrate.c                         | 2
 mm/mlock.c                           | 46 ++-
 mm/page_alloc.c                      | 1
 mm/rmap.c                            | 4
 mm/swap.c                            | 4
 tools/vm/page_owner_sort.c           | 6
 33 files changed, 560 insertions(+), 399 deletions(-)

^ permalink raw reply	[flat|nested] 349+ messages in thread
* incoming @ 2022-04-01 18:20 Andrew Morton
  2022-04-01 18:27 ` incoming Andrew Morton
  0 siblings, 1 reply; 349+ messages in thread
From: Andrew Morton @ 2022-04-01 18:20 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits, patches

16 patches, based on e8b767f5e04097aaedcd6e06e2270f9fe5282696.

Subsystems affected by this patch series: mm/madvise ocfs2 nilfs2 mm/mlock mm/kfence mailmap mm/memory-failure mm/kasan mm/debug mm/kmemleak mm/damon

Subsystem: mm/madvise

Charan Teja Kalla <quic_charante@quicinc.com>:
  Revert "mm: madvise: skip unmapped vma holes passed to process_madvise"

Subsystem: ocfs2

Joseph Qi <joseph.qi@linux.alibaba.com>:
  ocfs2: fix crash when mount with quota enabled

Subsystem: nilfs2

Ryusuke Konishi <konishi.ryusuke@gmail.com>:
  Patch series "nilfs2 lockdep warning fixes":
  nilfs2: fix lockdep warnings in page operations for btree nodes
  nilfs2: fix lockdep warnings during disk space reclamation
  nilfs2: get rid of nilfs_mapping_init()

Subsystem: mm/mlock

Hugh Dickins <hughd@google.com>:
  mm/munlock: add lru_add_drain() to fix memcg_stat_test
  mm/munlock: update Documentation/vm/unevictable-lru.rst

Sebastian Andrzej Siewior <bigeasy@linutronix.de>:
  mm/munlock: protect the per-CPU pagevec by a local_lock_t

Subsystem: mm/kfence

Muchun Song <songmuchun@bytedance.com>:
  mm: kfence: fix objcgs vector allocation

Subsystem: mailmap

Kirill Tkhai <kirill.tkhai@openvz.org>:
  mailmap: update Kirill's email

Subsystem: mm/memory-failure

Rik van Riel <riel@surriel.com>:
  mm,hwpoison: unmap poisoned page before invalidation

Subsystem: mm/kasan

Andrey Konovalov <andreyknvl@google.com>:
  mm, kasan: fix __GFP_BITS_SHIFT definition breaking LOCKDEP

Subsystem: mm/debug

Yinan Zhang <zhangyinan2019@email.szu.edu.cn>:
  tools/vm/page_owner_sort.c: remove -c option
  doc/vm/page_owner.rst: remove content related to -c option

Subsystem: mm/kmemleak

Kuan-Ying Lee <Kuan-Ying.Lee@mediatek.com>:
  mm/kmemleak: reset tag when compare object pointer

Subsystem: mm/damon

Jonghyeon Kim
<tome01@ajou.ac.kr>:
  mm/damon: prevent activated scheme from sleeping by deactivated schemes

 .mailmap                             | 1
 Documentation/vm/page_owner.rst      | 1
 Documentation/vm/unevictable-lru.rst | 473 +++++++++++++++--------------------
 fs/nilfs2/btnode.c                   | 23 +
 fs/nilfs2/btnode.h                   | 1
 fs/nilfs2/btree.c                    | 27 +
 fs/nilfs2/dat.c                      | 4
 fs/nilfs2/gcinode.c                  | 7
 fs/nilfs2/inode.c                    | 167 +++++++++++-
 fs/nilfs2/mdt.c                      | 45 ++-
 fs/nilfs2/mdt.h                      | 6
 fs/nilfs2/nilfs.h                    | 16 -
 fs/nilfs2/page.c                     | 16 -
 fs/nilfs2/page.h                     | 1
 fs/nilfs2/segment.c                  | 9
 fs/nilfs2/super.c                    | 5
 fs/ocfs2/quota_global.c              | 23 -
 fs/ocfs2/quota_local.c               | 2
 include/linux/gfp.h                  | 4
 mm/damon/core.c                      | 5
 mm/gup.c                             | 10
 mm/internal.h                        | 6
 mm/kfence/core.c                     | 11
 mm/kfence/kfence.h                   | 3
 mm/kmemleak.c                        | 9
 mm/madvise.c                         | 9
 mm/memory.c                          | 12
 mm/migrate.c                         | 2
 mm/mlock.c                           | 46 ++-
 mm/page_alloc.c                      | 1
 mm/rmap.c                            | 4
 mm/swap.c                            | 4
 tools/vm/page_owner_sort.c           | 6
 33 files changed, 560 insertions(+), 399 deletions(-)

^ permalink raw reply	[flat|nested] 349+ messages in thread
* Re: incoming
  2022-04-01 18:20 incoming Andrew Morton
@ 2022-04-01 18:27 ` Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2022-04-01 18:27 UTC (permalink / raw)
To: Linus Torvalds, linux-mm, mm-commits, patches

Argh, messed up in-reply-to. Let me redo...

^ permalink raw reply	[flat|nested] 349+ messages in thread
* incoming @ 2022-03-25 1:07 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2022-03-25 1:07 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm, patches This is the material which was staged after willystuff in linux-next. Everything applied seamlessly on your latest, all looks well. 114 patches, based on 52deda9551a01879b3562e7b41748e85c591f14c. Subsystems affected by this patch series: mm/debug mm/selftests mm/pagecache mm/thp mm/rmap mm/migration mm/kasan mm/hugetlb mm/pagemap mm/madvise selftests Subsystem: mm/debug Sean Anderson <seanga2@gmail.com>: tools/vm/page_owner_sort.c: sort by stacktrace before culling tools/vm/page_owner_sort.c: support sorting by stack trace Yinan Zhang <zhangyinan2019@email.szu.edu.cn>: tools/vm/page_owner_sort.c: add switch between culling by stacktrace and txt Chongxi Zhao <zhaochongxi2019@email.szu.edu.cn>: tools/vm/page_owner_sort.c: support sorting pid and time Shenghong Han <hanshenghong2019@email.szu.edu.cn>: tools/vm/page_owner_sort.c: two trivial fixes Yixuan Cao <caoyixuan2019@email.szu.edu.cn>: tools/vm/page_owner_sort.c: delete invalid duplicate code Shenghong Han <hanshenghong2019@email.szu.edu.cn>: Documentation/vm/page_owner.rst: update the documentation Shuah Khan <skhan@linuxfoundation.org>: Documentation/vm/page_owner.rst: fix unexpected indentation warns Waiman Long <longman@redhat.com>: Patch series "mm/page_owner: Extend page_owner to show memcg information", v4: lib/vsprintf: avoid redundant work with 0 size mm/page_owner: use scnprintf() to avoid excessive buffer overrun check mm/page_owner: print memcg information mm/page_owner: record task command name Yixuan Cao <caoyixuan2019@email.szu.edu.cn>: mm/page_owner.c: record tgid tools/vm/page_owner_sort.c: fix the instructions for use Jiajian Ye <yejiajian2018@email.szu.edu.cn>: tools/vm/page_owner_sort.c: fix comments tools/vm/page_owner_sort.c: add a security check tools/vm/page_owner_sort.c: support sorting 
by tgid and update documentation tools/vm/page_owner_sort: fix three trivial places tools/vm/page_owner_sort: support for sorting by task command name tools/vm/page_owner_sort.c: support for selecting by PID, TGID or task command name tools/vm/page_owner_sort.c: support for user-defined culling rules Christoph Hellwig <hch@lst.de>: mm: unexport page_init_poison Subsystem: mm/selftests "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>: selftest/vm: add util.h and move helper functions there Mike Rapoport <rppt@kernel.org>: selftest/vm: add helpers to detect PAGE_SIZE and PAGE_SHIFT Subsystem: mm/pagecache Hugh Dickins <hughd@google.com>: mm: delete __ClearPageWaiters() mm: filemap_unaccount_folio() large skip mapcount fixup Subsystem: mm/thp Hugh Dickins <hughd@google.com>: mm/thp: fix NR_FILE_MAPPED accounting in page_*_file_rmap() Subsystem: mm/rmap Subsystem: mm/migration Anshuman Khandual <anshuman.khandual@arm.com>: Patch series "mm/migration: Add trace events", v3: mm/migration: add trace events for THP migrations mm/migration: add trace events for base page and HugeTLB migrations Subsystem: mm/kasan Andrey Konovalov <andreyknvl@google.com>: Patch series "kasan, vmalloc, arm64: add vmalloc tagging support for SW/HW_TAGS", v6: kasan, page_alloc: deduplicate should_skip_kasan_poison kasan, page_alloc: move tag_clear_highpage out of kernel_init_free_pages kasan, page_alloc: merge kasan_free_pages into free_pages_prepare kasan, page_alloc: simplify kasan_poison_pages call site kasan, page_alloc: init memory of skipped pages on free kasan: drop skip_kasan_poison variable in free_pages_prepare mm: clarify __GFP_ZEROTAGS comment kasan: only apply __GFP_ZEROTAGS when memory is zeroed kasan, page_alloc: refactor init checks in post_alloc_hook kasan, page_alloc: merge kasan_alloc_pages into post_alloc_hook kasan, page_alloc: combine tag_clear_highpage calls in post_alloc_hook kasan, page_alloc: move SetPageSkipKASanPoison in post_alloc_hook kasan, page_alloc: move
kernel_init_free_pages in post_alloc_hook kasan, page_alloc: rework kasan_unpoison_pages call site kasan: clean up metadata byte definitions kasan: define KASAN_VMALLOC_INVALID for SW_TAGS kasan, x86, arm64, s390: rename functions for modules shadow kasan, vmalloc: drop outdated VM_KASAN comment kasan: reorder vmalloc hooks kasan: add wrappers for vmalloc hooks kasan, vmalloc: reset tags in vmalloc functions kasan, fork: reset pointer tags of vmapped stacks kasan, arm64: reset pointer tags of vmapped stacks kasan, vmalloc: add vmalloc tagging for SW_TAGS kasan, vmalloc, arm64: mark vmalloc mappings as pgprot_tagged kasan, vmalloc: unpoison VM_ALLOC pages after mapping kasan, mm: only define ___GFP_SKIP_KASAN_POISON with HW_TAGS kasan, page_alloc: allow skipping unpoisoning for HW_TAGS kasan, page_alloc: allow skipping memory init for HW_TAGS kasan, vmalloc: add vmalloc tagging for HW_TAGS kasan, vmalloc: only tag normal vmalloc allocations kasan, arm64: don't tag executable vmalloc allocations kasan: mark kasan_arg_stacktrace as __initdata kasan: clean up feature flags for HW_TAGS mode kasan: add kasan.vmalloc command line flag kasan: allow enabling KASAN_VMALLOC and SW/HW_TAGS arm64: select KASAN_VMALLOC for SW/HW_TAGS modes kasan: documentation updates kasan: improve vmalloc tests kasan: test: support async (again) and asymm modes for HW_TAGS tangmeng <tangmeng@uniontech.com>: mm/kasan: remove unnecessary CONFIG_KASAN option Peter Collingbourne <pcc@google.com>: kasan: update function name in comments Andrey Konovalov <andreyknvl@google.com>: kasan: print virtual mapping info in reports Patch series "kasan: report clean-ups and improvements": kasan: drop addr check from describe_object_addr kasan: more line breaks in reports kasan: rearrange stack frame info in reports kasan: improve stack frame info in reports kasan: print basic stack frame info for SW_TAGS kasan: simplify async check in end_report() kasan: simplify kasan_update_kunit_status() and call sites 
kasan: check CONFIG_KASAN_KUNIT_TEST instead of CONFIG_KUNIT kasan: move update_kunit_status to start_report kasan: move disable_trace_on_warning to start_report kasan: split out print_report from __kasan_report kasan: simplify kasan_find_first_bad_addr call sites kasan: restructure kasan_report kasan: merge __kasan_report into kasan_report kasan: call print_report from kasan_report_invalid_free kasan: move and simplify kasan_report_async kasan: rename kasan_access_info to kasan_report_info kasan: add comment about UACCESS regions to kasan_report kasan: respect KASAN_BIT_REPORTED in all reporting routines kasan: reorder reporting functions kasan: move and hide kasan_save_enable/restore_multi_shot kasan: disable LOCKDEP when printing reports Subsystem: mm/hugetlb Mike Kravetz <mike.kravetz@oracle.com>: Patch series "Add hugetlb MADV_DONTNEED support", v3: mm: enable MADV_DONTNEED for hugetlb mappings selftests/vm: add hugetlb madvise MADV_DONTNEED MADV_REMOVE test userfaultfd/selftests: enable hugetlb remap and remove event testing Miaohe Lin <linmiaohe@huawei.com>: mm/huge_memory: make is_transparent_hugepage() static Subsystem: mm/pagemap David Hildenbrand <david@redhat.com>: Patch series "mm: COW fixes part 1: fix the COW security issue for THP and swap", v3: mm: optimize do_wp_page() for exclusive pages in the swapcache mm: optimize do_wp_page() for fresh pages in local LRU pagevecs mm: slightly clarify KSM logic in do_swap_page() mm: streamline COW logic in do_swap_page() mm/huge_memory: streamline COW logic in do_huge_pmd_wp_page() mm/khugepaged: remove reuse_swap_page() usage mm/swapfile: remove stale reuse_swap_page() mm/huge_memory: remove stale page_trans_huge_mapcount() mm/huge_memory: remove stale locking logic from __split_huge_pmd() Hugh Dickins <hughd@google.com>: mm: warn on deleting redirtied only if accounted mm: unmap_mapping_range_tree() with i_mmap_rwsem shared Anshuman Khandual <anshuman.khandual@arm.com>: mm: generalize ARCH_HAS_FILTER_PGPROT 
Subsystem: mm/madvise Mauricio Faria de Oliveira <mfo@canonical.com>: mm: fix race between MADV_FREE reclaim and blkdev direct IO read Johannes Weiner <hannes@cmpxchg.org>: mm: madvise: MADV_DONTNEED_LOCKED Subsystem: selftests Muhammad Usama Anjum <usama.anjum@collabora.com>: selftests: vm: remove dependency from internal kernel macros Kees Cook <keescook@chromium.org>: selftests: kselftest framework: provide "finished" helper

 Documentation/dev-tools/kasan.rst      | 17
 Documentation/vm/page_owner.rst        | 72 ++
 arch/alpha/include/uapi/asm/mman.h     | 2
 arch/arm64/Kconfig                     | 2
 arch/arm64/include/asm/vmalloc.h       | 6
 arch/arm64/include/asm/vmap_stack.h    | 5
 arch/arm64/kernel/module.c             | 5
 arch/arm64/mm/pageattr.c               | 2
 arch/arm64/net/bpf_jit_comp.c          | 3
 arch/mips/include/uapi/asm/mman.h      | 2
 arch/parisc/include/uapi/asm/mman.h    | 2
 arch/powerpc/mm/book3s64/trace.c       | 1
 arch/s390/kernel/module.c              | 2
 arch/x86/Kconfig                       | 3
 arch/x86/kernel/module.c               | 2
 arch/x86/mm/init.c                     | 1
 arch/xtensa/include/uapi/asm/mman.h    | 2
 include/linux/gfp.h                    | 53 +-
 include/linux/huge_mm.h                | 6
 include/linux/kasan.h                  | 136 +++--
 include/linux/mm.h                     | 5
 include/linux/page-flags.h             | 2
 include/linux/pagemap.h                | 3
 include/linux/swap.h                   | 4
 include/linux/vmalloc.h                | 18
 include/trace/events/huge_memory.h     | 1
 include/trace/events/migrate.h         | 31 +
 include/trace/events/mmflags.h         | 18
 include/trace/events/thp.h             | 27 +
 include/uapi/asm-generic/mman-common.h | 2
 kernel/fork.c                          | 13
 kernel/scs.c                           | 16
 lib/Kconfig.kasan                      | 18
 lib/test_kasan.c                       | 239 ++++++++-
 lib/vsprintf.c                         | 8
 mm/Kconfig                             | 3
 mm/debug.c                             | 1
 mm/filemap.c                           | 63 +-
 mm/huge_memory.c                       | 109 ----
 mm/kasan/Makefile                      | 2
 mm/kasan/common.c                      | 4
 mm/kasan/hw_tags.c                     | 243 +++++++---
 mm/kasan/kasan.h                       | 76 ++-
 mm/kasan/report.c                      | 516 +++++++++++----------
 mm/kasan/report_generic.c              | 34 -
 mm/kasan/report_hw_tags.c              | 1
 mm/kasan/report_sw_tags.c              | 16
 mm/kasan/report_tags.c                 | 2
 mm/kasan/shadow.c                      | 76 +--
 mm/khugepaged.c                        | 11
 mm/madvise.c                           | 57 +-
 mm/memory.c                            | 129 +++--
 mm/memremap.c                          | 2
 mm/migrate.c                                  | 4
 mm/page-writeback.c                           | 18
 mm/page_alloc.c                               | 270 ++++++-----
 mm/page_owner.c                               | 86 ++-
 mm/rmap.c                                     | 62 +-
 mm/swap.c                                     | 4
 mm/swapfile.c                                 | 104 ----
 mm/vmalloc.c                                  | 167 ++++--
 tools/testing/selftests/kselftest.h           | 10
 tools/testing/selftests/vm/.gitignore         | 1
 tools/testing/selftests/vm/Makefile           | 1
 tools/testing/selftests/vm/gup_test.c         | 3
 tools/testing/selftests/vm/hugetlb-madvise.c  | 410 ++++++++++++++++
 tools/testing/selftests/vm/ksm_tests.c        | 38 -
 tools/testing/selftests/vm/memfd_secret.c     | 2
 tools/testing/selftests/vm/run_vmtests.sh     | 15
 tools/testing/selftests/vm/transhuge-stress.c | 41 -
 tools/testing/selftests/vm/userfaultfd.c      | 72 +-
 tools/testing/selftests/vm/util.h             | 75 ++-
 tools/vm/page_owner_sort.c                    | 628 +++++++++++++++++++++-----
 73 files changed, 2797 insertions(+), 1288 deletions(-)

^ permalink raw reply	[flat|nested] 349+ messages in thread
* incoming @ 2022-03-23 23:04 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2022-03-23 23:04 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm, patches Various misc subsystems, before getting into the post-linux-next material. This is all based on v5.17. I tested applying and compiling against today's 1bc191051dca28fa6. One patch required an extra whack, all looks good. 41 patches, based on f443e374ae131c168a065ea1748feac6b2e76613. Subsystems affected by this patch series: procfs misc core-kernel lib checkpatch init pipe minix fat cgroups kexec kdump taskstats panic kcov resource ubsan Subsystem: procfs Hao Lee <haolee.swjtu@gmail.com>: proc: alloc PATH_MAX bytes for /proc/${pid}/fd/ symlinks David Hildenbrand <david@redhat.com>: proc/vmcore: fix possible deadlock on concurrent mmap and read Yang Li <yang.lee@linux.alibaba.com>: proc/vmcore: fix vmcore_alloc_buf() kernel-doc comment Subsystem: misc Bjorn Helgaas <bhelgaas@google.com>: linux/types.h: remove unnecessary __bitwise__ Documentation/sparse: add hints about __CHECKER__ Subsystem: core-kernel Miaohe Lin <linmiaohe@huawei.com>: kernel/ksysfs.c: use helper macro __ATTR_RW Subsystem: lib Kees Cook <keescook@chromium.org>: Kconfig.debug: make DEBUG_INFO selectable from a choice Rasmus Villemoes <linux@rasmusvillemoes.dk>: include: drop pointless __compiler_offsetof indirection Christophe Leroy <christophe.leroy@csgroup.eu>: ilog2: force inlining of __ilog2_u32() and __ilog2_u64() Andy Shevchenko <andriy.shevchenko@linux.intel.com>: bitfield: add explicit inclusions to the example Feng Tang <feng.tang@intel.com>: lib/Kconfig.debug: add ARCH dependency for FUNCTION_ALIGN option Randy Dunlap <rdunlap@infradead.org>: lib: bitmap: fix many kernel-doc warnings Subsystem: checkpatch Joe Perches <joe@perches.com>: checkpatch: prefer MODULE_LICENSE("GPL") over MODULE_LICENSE("GPL v2") checkpatch: add --fix option for some TRAILING_STATEMENTS checkpatch: add 
early_param exception to blank line after struct/function test Sagar Patel <sagarmp@cs.unc.edu>: checkpatch: use python3 to find codespell dictionary Subsystem: init Mark-PK Tsai <mark-pk.tsai@mediatek.com>: init: use ktime_us_delta() to make initcall_debug log more precise Randy Dunlap <rdunlap@infradead.org>: init.h: improve __setup and early_param documentation init/main.c: return 1 from handled __setup() functions Subsystem: pipe Andrei Vagin <avagin@gmail.com>: fs/pipe: use kvcalloc to allocate a pipe_buffer array fs/pipe.c: local vars have to match types of proper pipe_inode_info fields Subsystem: minix Qinghua Jin <qhjin.dev@gmail.com>: minix: fix bug when opening a file with O_DIRECT Subsystem: fat Helge Deller <deller@gmx.de>: fat: use pointer to simple type in put_user() Subsystem: cgroups Sebastian Andrzej Siewior <bigeasy@linutronix.de>: cgroup: use irqsave in cgroup_rstat_flush_locked(). cgroup: add a comment to cgroup_rstat_flush_locked(). Subsystem: kexec Jisheng Zhang <jszhang@kernel.org>: Patch series "kexec: use IS_ENABLED(CONFIG_KEXEC_CORE) instead of #ifdef", v2: kexec: make crashk_res, crashk_low_res and crash_notes symbols always visible riscv: mm: init: use IS_ENABLED(CONFIG_KEXEC_CORE) instead of #ifdef x86/setup: use IS_ENABLED(CONFIG_KEXEC_CORE) instead of #ifdef arm64: mm: use IS_ENABLED(CONFIG_KEXEC_CORE) instead of #ifdef Subsystem: kdump Tiezhu Yang <yangtiezhu@loongson.cn>: Patch series "Update doc and fix some issues about kdump", v2: docs: kdump: update description about sysfs file system support docs: kdump: add scp example to write out the dump file panic: unset panic_on_warn inside panic() ubsan: no need to unset panic_on_warn in ubsan_epilogue() kasan: no need to unset panic_on_warn in end_report() Subsystem: taskstats Lukas Bulwahn <lukas.bulwahn@gmail.com>: taskstats: remove unneeded dead assignment Subsystem: panic "Guilherme G. 
Piccoli" <gpiccoli@igalia.com>: Patch series "Some improvements on panic_print": docs: sysctl/kernel: add missing bit to panic_print panic: add option to dump all CPUs backtraces in panic_print panic: move panic_print before kmsg dumpers Subsystem: kcov Aleksandr Nogikh <nogikh@google.com>: Patch series "kcov: improve mmap processing", v3: kcov: split ioctl handling into locked and unlocked parts kcov: properly handle subsequent mmap calls Subsystem: resource Miaohe Lin <linmiaohe@huawei.com>: kernel/resource: fix kfree() of bootmem memory again Subsystem: ubsan Marco Elver <elver@google.com>: Revert "ubsan, kcsan: Don't combine sanitizer with kcov on clang"

 Documentation/admin-guide/kdump/kdump.rst       | 10 +
 Documentation/admin-guide/kernel-parameters.txt | 5
 Documentation/admin-guide/sysctl/kernel.rst     | 2
 Documentation/dev-tools/sparse.rst              | 2
 arch/arm64/mm/init.c                            | 9 -
 arch/riscv/mm/init.c                            | 6 -
 arch/x86/kernel/setup.c                         | 10 -
 fs/fat/dir.c                                    | 2
 fs/minix/inode.c                                | 3
 fs/pipe.c                                       | 13 +-
 fs/proc/base.c                                  | 8 -
 fs/proc/vmcore.c                                | 43 +++----
 include/linux/bitfield.h                        | 3
 include/linux/compiler_types.h                  | 3
 include/linux/init.h                            | 11 +
 include/linux/kexec.h                           | 12 +-
 include/linux/log2.h                            | 4
 include/linux/stddef.h                          | 6 -
 include/uapi/linux/types.h                      | 6 -
 init/main.c                                     | 14 +-
 kernel/cgroup/rstat.c                           | 13 +-
 kernel/kcov.c                                   | 102 ++++++++--------
 kernel/ksysfs.c                                 | 3
 kernel/panic.c                                  | 37 ++++--
 kernel/resource.c                               | 41 +----
 kernel/taskstats.c                              | 5
 lib/Kconfig.debug                               | 142 ++++++++++++------------
 lib/Kconfig.kcsan                               | 11 -
 lib/Kconfig.ubsan                               | 12 --
 lib/bitmap.c                                    | 24 ++--
 lib/ubsan.c                                     | 10 -
 mm/kasan/report.c                               | 10 -
 scripts/checkpatch.pl                           | 31 ++++-
 tools/include/linux/types.h                     | 5
 34 files changed, 313 insertions(+), 305 deletions(-)

^ permalink raw reply	[flat|nested] 349+ messages in thread
* incoming @ 2022-03-22 21:38 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2022-03-22 21:38 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits, patches - A few misc subsystems - There is a lot of MM material in Willy's tree. Folio work and non-folio patches which depended on that work. Here I send almost all the MM patches which precede the patches in Willy's tree. The remaining ~100 MM patches are staged on Willy's tree and I'll send those along once Willy is merged up. I tried this batch against your current tree (as of 51912904076680281) and a couple need some extra persuasion to apply, but all looks OK otherwise. 227 patches, based on f443e374ae131c168a065ea1748feac6b2e76613 Subsystems affected by this patch series: kthread scripts ntfs ocfs2 block vfs mm/kasan mm/pagecache mm/gup mm/swap mm/shmem mm/memcg mm/selftests mm/pagemap mm/mremap mm/sparsemem mm/vmalloc mm/pagealloc mm/memory-failure mm/mlock mm/hugetlb mm/userfaultfd mm/vmscan mm/compaction mm/mempolicy mm/oom-kill mm/migration mm/thp mm/cma mm/autonuma mm/psi mm/ksm mm/page-poison mm/madvise mm/memory-hotplug mm/rmap mm/zswap mm/uaccess mm/ioremap mm/highmem mm/cleanups mm/kfence mm/hmm mm/damon Subsystem: kthread Rasmus Villemoes <linux@rasmusvillemoes.dk>: linux/kthread.h: remove unused macros Subsystem: scripts Colin Ian King <colin.i.king@gmail.com>: scripts/spelling.txt: add more spellings to spelling.txt Subsystem: ntfs Dongliang Mu <mudongliangabcd@gmail.com>: ntfs: add sanity check on allocation size Subsystem: ocfs2 Joseph Qi <joseph.qi@linux.alibaba.com>: ocfs2: cleanup some return variables hongnanli <hongnan.li@linux.alibaba.com>: fs/ocfs2: fix comments mentioning i_mutex Subsystem: block NeilBrown <neilb@suse.de>: Patch series "Remove remaining parts of congestion tracking code", v2: doc: convert 'subsection' to 'section' in gfp.h mm: document and polish read-ahead code mm: improve cleanup when ->readpages doesn't process all pages 
fuse: remove reliance on bdi congestion nfs: remove reliance on bdi congestion ceph: remove reliance on bdi congestion remove inode_congested() remove bdi_congested() and wb_congested() and related functions f2fs: replace congestion_wait() calls with io_schedule_timeout() block/bfq-iosched.c: use "false" rather than "BLK_RW_ASYNC" remove congestion tracking framework Subsystem: vfs Anthony Iliopoulos <ailiop@suse.com>: mount: warn only once about timestamp range expiration Subsystem: mm/kasan Miaohe Lin <linmiaohe@huawei.com>: mm/memremap: avoid calling kasan_remove_zero_shadow() for device private memory Subsystem: mm/pagecache Miaohe Lin <linmiaohe@huawei.com>: filemap: remove find_get_pages() mm/writeback: minor clean up for highmem_dirtyable_memory Minchan Kim <minchan@kernel.org>: mm: fs: fix lru_cache_disabled race in bh_lru Subsystem: mm/gup Peter Xu <peterx@redhat.com>: Patch series "mm/gup: some cleanups", v5: mm: fix invalid page pointer returned with FOLL_PIN gups John Hubbard <jhubbard@nvidia.com>: mm/gup: follow_pfn_pte(): -EEXIST cleanup mm/gup: remove unused pin_user_pages_locked() mm: change lookup_node() to use get_user_pages_fast() mm/gup: remove unused get_user_pages_locked() Subsystem: mm/swap Bang Li <libang.linuxer@gmail.com>: mm/swap: fix confusing comment in folio_mark_accessed Subsystem: mm/shmem Xavier Roche <xavier.roche@algolia.com>: tmpfs: support for file creation time Hugh Dickins <hughd@google.com>: shmem: mapping_set_exiting() to help mapped resilience tmpfs: do not allocate pages on read Miaohe Lin <linmiaohe@huawei.com>: mm: shmem: use helper macro __ATTR_RW Subsystem: mm/memcg Shakeel Butt <shakeelb@google.com>: memcg: replace in_interrupt() with !in_task() Yosry Ahmed <yosryahmed@google.com>: memcg: add per-memcg total kernel memory stat Wei Yang <richard.weiyang@gmail.com>: mm/memcg: mem_cgroup_per_node is already set to 0 on allocation mm/memcg: retrieve parent memcg from css.parent Shakeel Butt <shakeelb@google.com>: Patch 
series "memcg: robust enforcement of memory.high", v2: memcg: refactor mem_cgroup_oom memcg: unify force charging conditions selftests: memcg: test high limit for single entry allocation memcg: synchronously enforce memory.high for large overcharges Randy Dunlap <rdunlap@infradead.org>: mm/memcontrol: return 1 from cgroup.memory __setup() handler Michal Hocko <mhocko@suse.com>: Patch series "mm/memcg: Address PREEMPT_RT problems instead of disabling it", v5: mm/memcg: revert ("mm/memcg: optimize user context object stock access") Sebastian Andrzej Siewior <bigeasy@linutronix.de>: mm/memcg: disable threshold event handlers on PREEMPT_RT mm/memcg: protect per-CPU counter by disabling preemption on PREEMPT_RT where needed. Johannes Weiner <hannes@cmpxchg.org>: mm/memcg: opencode the inner part of obj_cgroup_uncharge_pages() in drain_obj_stock() Sebastian Andrzej Siewior <bigeasy@linutronix.de>: mm/memcg: protect memcg_stock with a local_lock_t mm/memcg: disable migration instead of preemption in drain_all_stock(). 
Muchun Song <songmuchun@bytedance.com>: Patch series "Optimize list lru memory consumption", v6: mm: list_lru: transpose the array of per-node per-memcg lru lists mm: introduce kmem_cache_alloc_lru fs: introduce alloc_inode_sb() to allocate filesystems specific inode fs: allocate inode by using alloc_inode_sb() f2fs: allocate inode by using alloc_inode_sb() mm: dcache: use kmem_cache_alloc_lru() to allocate dentry xarray: use kmem_cache_alloc_lru to allocate xa_node mm: memcontrol: move memcg_online_kmem() to mem_cgroup_css_online() mm: list_lru: allocate list_lru_one only when needed mm: list_lru: rename memcg_drain_all_list_lrus to memcg_reparent_list_lrus mm: list_lru: replace linear array with xarray mm: memcontrol: reuse memory cgroup ID for kmem ID mm: memcontrol: fix cannot alloc the maximum memcg ID mm: list_lru: rename list_lru_per_memcg to list_lru_memcg mm: memcontrol: rename memcg_cache_id to memcg_kmem_id Vasily Averin <vvs@virtuozzo.com>: memcg: enable accounting for tty-related objects Subsystem: mm/selftests Guillaume Tucker <guillaume.tucker@collabora.com>: selftests, x86: fix how check_cc.sh is being invoked Subsystem: mm/pagemap Anshuman Khandual <anshuman.khandual@arm.com>: mm: merge pte_mkhuge() call into arch_make_huge_pte() Stafford Horne <shorne@gmail.com>: mm: remove mmu_gathers storage from remaining architectures Muchun Song <songmuchun@bytedance.com>: Patch series "Fix some cache flush bugs", v5: mm: thp: fix wrong cache flush in remove_migration_pmd() mm: fix missing cache flush for all tail pages of compound page mm: hugetlb: fix missing cache flush in copy_huge_page_from_user() mm: hugetlb: fix missing cache flush in hugetlb_mcopy_atomic_pte() mm: shmem: fix missing cache flush in shmem_mfill_atomic_pte() mm: userfaultfd: fix missing cache flush in mcopy_atomic_pte() and __mcopy_atomic() mm: replace multiple dcache flush with flush_dcache_folio() Peter Xu <peterx@redhat.com>: Patch series "mm: Rework zap ptes on swap entries", v5: mm: 
don't skip swap entry even if zap_details specified mm: rename zap_skip_check_mapping() to should_zap_page() mm: change zap_details.zap_mapping into even_cows mm: rework swap handling of zap_pte_range Randy Dunlap <rdunlap@infradead.org>: mm/mmap: return 1 from stack_guard_gap __setup() handler Miaohe Lin <linmiaohe@huawei.com>: mm/memory.c: use helper function range_in_vma() mm/memory.c: use helper macro min and max in unmap_mapping_range_tree() Hugh Dickins <hughd@google.com>: mm: _install_special_mapping() apply VM_LOCKED_CLEAR_MASK Miaohe Lin <linmiaohe@huawei.com>: mm/mmap: remove obsolete comment in ksys_mmap_pgoff Subsystem: mm/mremap Miaohe Lin <linmiaohe@huawei.com>: mm/mremap:: use vma_lookup() instead of find_vma() Subsystem: mm/sparsemem Miaohe Lin <linmiaohe@huawei.com>: mm/sparse: make mminit_validate_memmodel_limits() static Subsystem: mm/vmalloc Miaohe Lin <linmiaohe@huawei.com>: mm/vmalloc: remove unneeded function forward declaration "Uladzislau Rezki (Sony)" <urezki@gmail.com>: mm/vmalloc: Move draining areas out of caller context Uladzislau Rezki <uladzislau.rezki@sony.com>: mm/vmalloc: add adjust_search_size parameter "Uladzislau Rezki (Sony)" <urezki@gmail.com>: mm/vmalloc: eliminate an extra orig_gfp_mask Jiapeng Chong <jiapeng.chong@linux.alibaba.com>: mm/vmalloc.c: fix "unused function" warning Bang Li <libang.linuxer@gmail.com>: mm/vmalloc: fix comments about vmap_area struct Subsystem: mm/pagealloc Zi Yan <ziy@nvidia.com>: mm: page_alloc: avoid merging non-fallbackable pageblocks with others Peter Collingbourne <pcc@google.com>: mm/mmzone.c: use try_cmpxchg() in page_cpupid_xchg_last() Miaohe Lin <linmiaohe@huawei.com>: mm/mmzone.h: remove unused macros Nicolas Saenz Julienne <nsaenzju@redhat.com>: mm/page_alloc: don't pass pfn to free_unref_page_commit() David Hildenbrand <david@redhat.com>: Patch series "mm: enforce pageblock_order < MAX_ORDER": cma: factor out minimum alignment requirement mm: enforce pageblock_order < MAX_ORDER Nathan 
Chancellor <nathan@kernel.org>: mm/page_alloc: mark pagesets as __maybe_unused Alistair Popple <apopple@nvidia.com>: mm/pages_alloc.c: don't create ZONE_MOVABLE beyond the end of a node Mel Gorman <mgorman@techsingularity.net>: Patch series "Follow-up on high-order PCP caching", v2: mm/page_alloc: fetch the correct pcp buddy during bulk free mm/page_alloc: track range of active PCP lists during bulk free mm/page_alloc: simplify how many pages are selected per pcp list during bulk free mm/page_alloc: drain the requested list first during bulk free mm/page_alloc: free pages in a single pass during bulk free mm/page_alloc: limit number of high-order pages on PCP during bulk free mm/page_alloc: do not prefetch buddies during bulk free Oscar Salvador <osalvador@suse.de>: arch/x86/mm/numa: Do not initialize nodes twice Suren Baghdasaryan <surenb@google.com>: mm: count time in drain_all_pages during direct reclaim as memory pressure Eric Dumazet <edumazet@google.com>: mm/page_alloc: call check_new_pages() while zone spinlock is not held Mel Gorman <mgorman@techsingularity.net>: mm/page_alloc: check high-order pages for corruption during PCP operations Subsystem: mm/memory-failure Naoya Horiguchi <naoya.horiguchi@nec.com>: mm/memory-failure.c: remove obsolete comment mm/hwpoison: fix error page recovered but reported "not recovered" Rik van Riel <riel@surriel.com>: mm: invalidate hwpoison page cache page in fault path Miaohe Lin <linmiaohe@huawei.com>: Patch series "A few cleanup and fixup patches for memory failure", v3: mm/memory-failure.c: minor clean up for memory_failure_dev_pagemap mm/memory-failure.c: catch unexpected -EFAULT from vma_address() mm/memory-failure.c: rework the signaling logic in kill_proc mm/memory-failure.c: fix race with changing page more robustly mm/memory-failure.c: remove PageSlab check in hwpoison_filter_dev mm/memory-failure.c: rework the try_to_unmap logic in hwpoison_user_mappings() mm/memory-failure.c: remove obsolete comment in 
__soft_offline_page mm/memory-failure.c: remove unnecessary PageTransTail check mm/hwpoison-inject: support injecting hwpoison to free page luofei <luofei@unicloud.com>: mm/hwpoison: avoid the impact of hwpoison_filter() return value on mce handler mm/hwpoison: add in-use hugepage hwpoison filter judgement Miaohe Lin <linmiaohe@huawei.com>: Patch series "A few fixup patches for memory failure", v2: mm/memory-failure.c: fix race with changing page compound again mm/memory-failure.c: avoid calling invalidate_inode_page() with unexpected pages mm/memory-failure.c: make non-LRU movable pages unhandlable Vlastimil Babka <vbabka@suse.cz>: mm, fault-injection: declare should_fail_alloc_page() Subsystem: mm/mlock Miaohe Lin <linmiaohe@huawei.com>: mm/mlock: fix potential imbalanced rlimit ucounts adjustment Subsystem: mm/hugetlb Muchun Song <songmuchun@bytedance.com>: Patch series "Free the 2nd vmemmap page associated with each HugeTLB page", v7: mm: hugetlb: free the 2nd vmemmap page associated with each HugeTLB page mm: hugetlb: replace hugetlb_free_vmemmap_enabled with a static_key mm: sparsemem: use page table lock to protect kernel pmd operations selftests: vm: add a hugetlb test case mm: sparsemem: move vmemmap related to HugeTLB to CONFIG_HUGETLB_PAGE_FREE_VMEMMAP Anshuman Khandual <anshuman.khandual@arm.com>: mm/hugetlb: generalize ARCH_WANT_GENERAL_HUGETLB Mike Kravetz <mike.kravetz@oracle.com>: hugetlb: clean up potential spectre issue warnings Miaohe Lin <linmiaohe@huawei.com>: mm/hugetlb: use helper macro __ATTR_RW David Howells <dhowells@redhat.com>: mm/hugetlb.c: export PageHeadHuge() Miaohe Lin <linmiaohe@huawei.com>: mm: remove unneeded local variable follflags Subsystem: mm/userfaultfd Nadav Amit <namit@vmware.com>: userfaultfd: provide unmasked address on page-fault Guo Zhengkui <guozhengkui@vivo.com>: userfaultfd/selftests: fix uninitialized_var.cocci warning Subsystem: mm/vmscan Hugh Dickins <hughd@google.com>: mm/fs: delete PF_SWAPWRITE mm: 
__isolate_lru_page_prepare() in isolate_migratepages_block() Waiman Long <longman@redhat.com>: mm/list_lru: optimize memcg_reparent_list_lru_node() Marcelo Tosatti <mtosatti@redhat.com>: mm: lru_cache_disable: replace work queue synchronization with synchronize_rcu Sebastian Andrzej Siewior <bigeasy@linutronix.de>: mm: workingset: replace IRQ-off check with a lockdep assert. Charan Teja Kalla <quic_charante@quicinc.com>: mm: vmscan: fix documentation for page_check_references() Subsystem: mm/compaction Baolin Wang <baolin.wang@linux.alibaba.com>: mm: compaction: cleanup the compaction trace events Subsystem: mm/mempolicy Hugh Dickins <hughd@google.com>: mempolicy: mbind_range() set_policy() after vma_merge() Subsystem: mm/oom-kill Miaohe Lin <linmiaohe@huawei.com>: mm/oom_kill: remove unneeded is_memcg_oom check Subsystem: mm/migration Huang Ying <ying.huang@intel.com>: mm,migrate: fix establishing demotion target "andrew.yang" <andrew.yang@mediatek.com>: mm/migrate: fix race between lock page and clear PG_Isolated Subsystem: mm/thp Hugh Dickins <hughd@google.com>: mm/thp: refix __split_huge_pmd_locked() for migration PMD Subsystem: mm/cma Hari Bathini <hbathini@linux.ibm.com>: Patch series "powerpc/fadump: handle CMA activation failure appropriately", v3: mm/cma: provide option to opt out from exposing pages on activation failure powerpc/fadump: opt out from freeing pages on cma activation failure Subsystem: mm/autonuma Huang Ying <ying.huang@intel.com>: Patch series "NUMA balancing: optimize memory placement for memory tiering system", v13: NUMA Balancing: add page promotion counter NUMA balancing: optimize page placement for memory tiering system memory tiering: skip to scan fast memory Subsystem: mm/psi Johannes Weiner <hannes@cmpxchg.org>: mm: page_io: fix psi memory pressure error on cold swapins Subsystem: mm/ksm Yang Yang <yang.yang29@zte.com.cn>: mm/vmstat: add event for ksm swapping in copy Miaohe Lin <linmiaohe@huawei.com>: mm/ksm: use helper macro 
__ATTR_RW Subsystem: mm/page-poison "Matthew Wilcox (Oracle)" <willy@infradead.org>: mm/hwpoison: check the subpage, not the head page Subsystem: mm/madvise Miaohe Lin <linmiaohe@huawei.com>: mm/madvise: use vma_lookup() instead of find_vma() Charan Teja Kalla <quic_charante@quicinc.com>: Patch series "mm: madvise: return correct bytes processed with: mm: madvise: return correct bytes advised with process_madvise mm: madvise: skip unmapped vma holes passed to process_madvise Subsystem: mm/memory-hotplug Michal Hocko <mhocko@suse.com>: Patch series "mm, memory_hotplug: handle unitialized numa node gracefully": mm, memory_hotplug: make arch_alloc_nodedata independent on CONFIG_MEMORY_HOTPLUG mm: handle uninitialized numa nodes gracefully mm, memory_hotplug: drop arch_free_nodedata mm, memory_hotplug: reorganize new pgdat initialization mm: make free_area_init_node aware of memory less nodes Wei Yang <richard.weiyang@gmail.com>: memcg: do not tweak node in alloc_mem_cgroup_per_node_info David Hildenbrand <david@redhat.com>: drivers/base/memory: add memory block to memory group after registration succeeded drivers/base/node: consolidate node device subsystem initialization in node_dev_init() Miaohe Lin <linmiaohe@huawei.com>: Patch series "A few cleanup patches around memory_hotplug": mm/memory_hotplug: remove obsolete comment of __add_pages mm/memory_hotplug: avoid calling zone_intersects() for ZONE_NORMAL mm/memory_hotplug: clean up try_offline_node mm/memory_hotplug: fix misplaced comment in offline_pages David Hildenbrand <david@redhat.com>: Patch series "drivers/base/memory: determine and store zone for single-zone memory blocks", v2: drivers/base/node: rename link_mem_sections() to register_memory_block_under_node() drivers/base/memory: determine and store zone for single-zone memory blocks drivers/base/memory: clarify adding and removing of memory blocks Oscar Salvador <osalvador@suse.de>: mm: only re-generate demotion targets when a numa node changes its N_CPU 
state Subsystem: mm/rmap Hugh Dickins <hughd@google.com>: mm/thp: ClearPageDoubleMap in first page_add_file_rmap() Subsystem: mm/zswap "Maciej S. Szmigiero" <maciej.szmigiero@oracle.com>: mm/zswap.c: allow handling just same-value filled pages Subsystem: mm/uaccess Christophe Leroy <christophe.leroy@csgroup.eu>: mm: remove usercopy_warn() mm: uninline copy_overflow() Randy Dunlap <rdunlap@infradead.org>: mm/usercopy: return 1 from hardened_usercopy __setup() handler Subsystem: mm/ioremap Vlastimil Babka <vbabka@suse.cz>: mm/early_ioremap: declare early_memremap_pgprot_adjust() Subsystem: mm/highmem Ira Weiny <ira.weiny@intel.com>: highmem: document kunmap_local() Miaohe Lin <linmiaohe@huawei.com>: mm/highmem: remove unnecessary done label Subsystem: mm/cleanups "Dr. David Alan Gilbert" <linux@treblig.org>: mm/page_table_check.c: use strtobool for param parsing Subsystem: mm/kfence tangmeng <tangmeng@uniontech.com>: mm/kfence: remove unnecessary CONFIG_KFENCE option Tianchen Ding <dtcccc@linux.alibaba.com>: Patch series "provide the flexibility to enable KFENCE", v3: kfence: allow re-enabling KFENCE after system startup kfence: alloc kfence_pool after system startup Peng Liu <liupeng256@huawei.com>: Patch series "kunit: fix a UAF bug and do some optimization", v2: kunit: fix UAF when run kfence test case test_gfpzero kunit: make kunit_test_timeout compatible with comment kfence: test: try to avoid test_gfpzero trigger rcu_stall Marco Elver <elver@google.com>: kfence: allow use of a deferrable timer Subsystem: mm/hmm Miaohe Lin <linmiaohe@huawei.com>: mm/hmm.c: remove unneeded local variable ret Subsystem: mm/damon SeongJae Park <sj@kernel.org>: Patch series "Remove the type-unclear target id concept": mm/damon/dbgfs/init_regions: use target index instead of target id Docs/admin-guide/mm/damon/usage: update for changed initail_regions file input mm/damon/core: move damon_set_targets() into dbgfs mm/damon: remove the target id concept Baolin Wang 
<baolin.wang@linux.alibaba.com>: mm/damon: remove redundant page validation SeongJae Park <sj@kernel.org>: Patch series "Allow DAMON user code independent of monitoring primitives": mm/damon: rename damon_primitives to damon_operations mm/damon: let monitoring operations can be registered and selected mm/damon/paddr,vaddr: register themselves to DAMON in subsys_initcall mm/damon/reclaim: use damon_select_ops() instead of damon_{v,p}a_set_operations() mm/damon/dbgfs: use damon_select_ops() instead of damon_{v,p}a_set_operations() mm/damon/dbgfs: use operations id for knowing if the target has pid mm/damon/dbgfs-test: fix is_target_id() change mm/damon/paddr,vaddr: remove damon_{p,v}a_{target_valid,set_operations}() tangmeng <tangmeng@uniontech.com>: mm/damon: remove unnecessary CONFIG_DAMON option SeongJae Park <sj@kernel.org>: Patch series "Docs/damon: Update documents for better consistency": Docs/vm/damon: call low level monitoring primitives the operations Docs/vm/damon/design: update DAMON-Idle Page Tracking interference handling Docs/damon: update outdated term 'regions update interval' Patch series "Introduce DAMON sysfs interface", v3: mm/damon/core: allow non-exclusive DAMON start/stop mm/damon/core: add number of each enum type values mm/damon: implement a minimal stub for sysfs-based DAMON interface mm/damon/sysfs: link DAMON for virtual address spaces monitoring mm/damon/sysfs: support the physical address space monitoring mm/damon/sysfs: support DAMON-based Operation Schemes mm/damon/sysfs: support DAMOS quotas mm/damon/sysfs: support schemes prioritization mm/damon/sysfs: support DAMOS watermarks mm/damon/sysfs: support DAMOS stats selftests/damon: add a test for DAMON sysfs interface Docs/admin-guide/mm/damon/usage: document DAMON sysfs interface Docs/ABI/testing: add DAMON sysfs interface ABI document Xin Hao <xhao@linux.alibaba.com>: mm/damon/sysfs: remove repeat container_of() in damon_sysfs_kdamond_release() 
Documentation/ABI/testing/sysfs-kernel-mm-damon | 274 ++ Documentation/admin-guide/cgroup-v1/memory.rst | 2 Documentation/admin-guide/cgroup-v2.rst | 5 Documentation/admin-guide/kernel-parameters.txt | 2 Documentation/admin-guide/mm/damon/usage.rst | 380 +++ Documentation/admin-guide/mm/zswap.rst | 22 Documentation/admin-guide/sysctl/kernel.rst | 31 Documentation/core-api/mm-api.rst | 19 Documentation/dev-tools/kfence.rst | 12 Documentation/filesystems/porting.rst | 6 Documentation/filesystems/vfs.rst | 16 Documentation/vm/damon/design.rst | 43 Documentation/vm/damon/faq.rst | 2 MAINTAINERS | 1 arch/arm/Kconfig | 4 arch/arm64/kernel/setup.c | 3 arch/arm64/mm/hugetlbpage.c | 1 arch/hexagon/mm/init.c | 2 arch/ia64/kernel/topology.c | 10 arch/ia64/mm/discontig.c | 11 arch/mips/kernel/topology.c | 5 arch/nds32/mm/init.c | 1 arch/openrisc/mm/init.c | 2 arch/powerpc/include/asm/fadump-internal.h | 5 arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h | 4 arch/powerpc/kernel/fadump.c | 8 arch/powerpc/kernel/sysfs.c | 17 arch/riscv/Kconfig | 4 arch/riscv/kernel/setup.c | 3 arch/s390/kernel/numa.c | 7 arch/sh/kernel/topology.c | 5 arch/sparc/kernel/sysfs.c | 12 arch/sparc/mm/hugetlbpage.c | 1 arch/x86/Kconfig | 4 arch/x86/kernel/cpu/mce/core.c | 8 arch/x86/kernel/topology.c | 5 arch/x86/mm/numa.c | 33 block/bdev.c | 2 block/bfq-iosched.c | 2 drivers/base/init.c | 1 drivers/base/memory.c | 149 + drivers/base/node.c | 48 drivers/block/drbd/drbd_int.h | 3 drivers/block/drbd/drbd_req.c | 3 drivers/dax/super.c | 2 drivers/of/of_reserved_mem.c | 9 drivers/tty/tty_io.c | 2 drivers/virtio/virtio_mem.c | 9 fs/9p/vfs_inode.c | 2 fs/adfs/super.c | 2 fs/affs/super.c | 2 fs/afs/super.c | 2 fs/befs/linuxvfs.c | 2 fs/bfs/inode.c | 2 fs/btrfs/inode.c | 2 fs/buffer.c | 8 fs/ceph/addr.c | 22 fs/ceph/inode.c | 2 fs/ceph/super.c | 1 fs/ceph/super.h | 1 fs/cifs/cifsfs.c | 2 fs/coda/inode.c | 2 fs/dcache.c | 3 fs/ecryptfs/super.c | 2 fs/efs/super.c | 2 fs/erofs/super.c | 2 fs/exfat/super.c | 2 
fs/ext2/ialloc.c | 5 fs/ext2/super.c | 2 fs/ext4/super.c | 2 fs/f2fs/compress.c | 4 fs/f2fs/data.c | 3 fs/f2fs/f2fs.h | 6 fs/f2fs/segment.c | 8 fs/f2fs/super.c | 14 fs/fat/inode.c | 2 fs/freevxfs/vxfs_super.c | 2 fs/fs-writeback.c | 40 fs/fuse/control.c | 17 fs/fuse/dev.c | 8 fs/fuse/file.c | 17 fs/fuse/inode.c | 2 fs/gfs2/super.c | 2 fs/hfs/super.c | 2 fs/hfsplus/super.c | 2 fs/hostfs/hostfs_kern.c | 2 fs/hpfs/super.c | 2 fs/hugetlbfs/inode.c | 2 fs/inode.c | 2 fs/isofs/inode.c | 2 fs/jffs2/super.c | 2 fs/jfs/super.c | 2 fs/minix/inode.c | 2 fs/namespace.c | 2 fs/nfs/inode.c | 2 fs/nfs/write.c | 14 fs/nilfs2/segbuf.c | 16 fs/nilfs2/super.c | 2 fs/ntfs/inode.c | 6 fs/ntfs3/super.c | 2 fs/ocfs2/alloc.c | 2 fs/ocfs2/aops.c | 2 fs/ocfs2/cluster/nodemanager.c | 2 fs/ocfs2/dir.c | 4 fs/ocfs2/dlmfs/dlmfs.c | 2 fs/ocfs2/file.c | 13 fs/ocfs2/inode.c | 2 fs/ocfs2/localalloc.c | 6 fs/ocfs2/namei.c | 2 fs/ocfs2/ocfs2.h | 4 fs/ocfs2/quota_global.c | 2 fs/ocfs2/stack_user.c | 18 fs/ocfs2/super.c | 2 fs/ocfs2/xattr.c | 2 fs/openpromfs/inode.c | 2 fs/orangefs/super.c | 2 fs/overlayfs/super.c | 2 fs/proc/inode.c | 2 fs/qnx4/inode.c | 2 fs/qnx6/inode.c | 2 fs/reiserfs/super.c | 2 fs/romfs/super.c | 2 fs/squashfs/super.c | 2 fs/sysv/inode.c | 2 fs/ubifs/super.c | 2 fs/udf/super.c | 2 fs/ufs/super.c | 2 fs/userfaultfd.c | 5 fs/vboxsf/super.c | 2 fs/xfs/libxfs/xfs_btree.c | 2 fs/xfs/xfs_buf.c | 3 fs/xfs/xfs_icache.c | 2 fs/zonefs/super.c | 2 include/linux/backing-dev-defs.h | 8 include/linux/backing-dev.h | 50 include/linux/cma.h | 14 include/linux/damon.h | 95 include/linux/fault-inject.h | 2 include/linux/fs.h | 21 include/linux/gfp.h | 10 include/linux/highmem-internal.h | 10 include/linux/hugetlb.h | 8 include/linux/kthread.h | 22 include/linux/list_lru.h | 45 include/linux/memcontrol.h | 46 include/linux/memory.h | 12 include/linux/memory_hotplug.h | 132 - include/linux/migrate.h | 8 include/linux/mm.h | 11 include/linux/mmzone.h | 22 include/linux/nfs_fs_sb.h | 1 
include/linux/node.h | 25 include/linux/page-flags.h | 96 include/linux/pageblock-flags.h | 7 include/linux/pagemap.h | 7 include/linux/sched.h | 1 include/linux/sched/sysctl.h | 10 include/linux/shmem_fs.h | 1 include/linux/slab.h | 3 include/linux/swap.h | 6 include/linux/thread_info.h | 5 include/linux/uaccess.h | 2 include/linux/vm_event_item.h | 3 include/linux/vmalloc.h | 4 include/linux/xarray.h | 9 include/ras/ras_event.h | 1 include/trace/events/compaction.h | 26 include/trace/events/writeback.h | 28 include/uapi/linux/userfaultfd.h | 8 ipc/mqueue.c | 2 kernel/dma/contiguous.c | 4 kernel/sched/core.c | 21 kernel/sysctl.c | 2 lib/Kconfig.kfence | 12 lib/kunit/try-catch.c | 3 lib/xarray.c | 10 mm/Kconfig | 6 mm/backing-dev.c | 57 mm/cma.c | 31 mm/cma.h | 1 mm/compaction.c | 60 mm/damon/Kconfig | 19 mm/damon/Makefile | 7 mm/damon/core-test.h | 23 mm/damon/core.c | 190 + mm/damon/dbgfs-test.h | 103 mm/damon/dbgfs.c | 264 +- mm/damon/ops-common.c | 133 + mm/damon/ops-common.h | 16 mm/damon/paddr.c | 62 mm/damon/prmtv-common.c | 133 - mm/damon/prmtv-common.h | 16 mm/damon/reclaim.c | 11 mm/damon/sysfs.c | 2632 ++++++++++++++++++++++- mm/damon/vaddr-test.h | 8 mm/damon/vaddr.c | 67 mm/early_ioremap.c | 1 mm/fadvise.c | 5 mm/filemap.c | 17 mm/gup.c | 103 mm/highmem.c | 9 mm/hmm.c | 3 mm/huge_memory.c | 41 mm/hugetlb.c | 23 mm/hugetlb_vmemmap.c | 74 mm/hwpoison-inject.c | 7 mm/internal.h | 19 mm/kfence/Makefile | 2 mm/kfence/core.c | 147 + mm/kfence/kfence_test.c | 3 mm/ksm.c | 6 mm/list_lru.c | 690 ++---- mm/maccess.c | 6 mm/madvise.c | 18 mm/memcontrol.c | 549 ++-- mm/memory-failure.c | 148 - mm/memory.c | 116 - mm/memory_hotplug.c | 136 - mm/mempolicy.c | 29 mm/memremap.c | 3 mm/migrate.c | 128 - mm/mlock.c | 1 mm/mmap.c | 5 mm/mmzone.c | 7 mm/mprotect.c | 13 mm/mremap.c | 4 mm/oom_kill.c | 3 mm/page-writeback.c | 12 mm/page_alloc.c | 429 +-- mm/page_io.c | 7 mm/page_table_check.c | 10 mm/ptdump.c | 16 mm/readahead.c | 124 + mm/rmap.c | 15 mm/shmem.c | 46 
mm/slab.c | 39 mm/slab.h | 25 mm/slob.c | 6 mm/slub.c | 42 mm/sparse-vmemmap.c | 70 mm/sparse.c | 2 mm/swap.c | 25 mm/swapfile.c | 1 mm/usercopy.c | 16 mm/userfaultfd.c | 3 mm/vmalloc.c | 102 mm/vmscan.c | 138 - mm/vmstat.c | 19 mm/workingset.c | 7 mm/zswap.c | 15 net/socket.c | 2 net/sunrpc/rpc_pipe.c | 2 scripts/spelling.txt | 16 tools/testing/selftests/cgroup/cgroup_util.c | 15 tools/testing/selftests/cgroup/cgroup_util.h | 1 tools/testing/selftests/cgroup/test_memcontrol.c | 78 tools/testing/selftests/damon/Makefile | 1 tools/testing/selftests/damon/sysfs.sh | 306 ++ tools/testing/selftests/vm/.gitignore | 1 tools/testing/selftests/vm/Makefile | 7 tools/testing/selftests/vm/hugepage-vmemmap.c | 144 + tools/testing/selftests/vm/run_vmtests.sh | 11 tools/testing/selftests/vm/userfaultfd.c | 2 tools/testing/selftests/x86/Makefile | 6 264 files changed, 7205 insertions(+), 3090 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming
@ 2022-03-16 23:14 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2022-03-16 23:14 UTC (permalink / raw)
To: Linus Torvalds; +Cc: mm-commits, linux-mm, patches

4 patches, based on 56e337f2cf1326323844927a04e9dbce9a244835.

Subsystems affected by this patch series:

  mm/swap kconfig ocfs2 selftests

Subsystem: mm/swap

    Guo Ziliang <guo.ziliang@zte.com.cn>:
      mm: swap: get rid of deadloop in swapin readahead

Subsystem: kconfig

    Qian Cai <quic_qiancai@quicinc.com>:
      configs/debug: restore DEBUG_INFO=y for overriding

Subsystem: ocfs2

    Joseph Qi <joseph.qi@linux.alibaba.com>:
      ocfs2: fix crash when initialize filecheck kobj fails

Subsystem: selftests

    Yosry Ahmed <yosryahmed@google.com>:
      selftests: vm: fix clang build error multiple output files

 fs/ocfs2/super.c                    | 22 +++++++++++-----------
 kernel/configs/debug.config         |  1 +
 mm/swap_state.c                     |  2 +-
 tools/testing/selftests/vm/Makefile |  6 ++----
 4 files changed, 15 insertions(+), 16 deletions(-)
* incoming
@ 2022-03-05  4:28 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2022-03-05 4:28 UTC (permalink / raw)
To: Linus Torvalds; +Cc: mm-commits, linux-mm, patches

8 patches, based on 07ebd38a0da24d2534da57b4841346379db9f354.

Subsystems affected by this patch series:

  mm/hugetlb mm/pagemap memfd selftests mm/userfaultfd kconfig

Subsystem: mm/hugetlb

    Mike Kravetz <mike.kravetz@oracle.com>:
      selftests/vm: cleanup hugetlb file after mremap test

Subsystem: mm/pagemap

    Suren Baghdasaryan <surenb@google.com>:
      mm: refactor vm_area_struct::anon_vma_name usage code
      mm: prevent vm_area_struct::anon_name refcount saturation
      mm: fix use-after-free when anon vma name is used after vma is freed

Subsystem: memfd

    Hugh Dickins <hughd@google.com>:
      memfd: fix F_SEAL_WRITE after shmem huge page allocated

Subsystem: selftests

    Chengming Zhou <zhouchengming@bytedance.com>:
      kselftest/vm: fix tests build with old libc

Subsystem: mm/userfaultfd

    Yun Zhou <yun.zhou@windriver.com>:
      proc: fix documentation and description of pagemap

Subsystem: kconfig

    Qian Cai <quic_qiancai@quicinc.com>:
      configs/debug: set CONFIG_DEBUG_INFO=y properly

 Documentation/admin-guide/mm/pagemap.rst     |   2
 fs/proc/task_mmu.c                           |   9 +-
 fs/userfaultfd.c                             |   6 -
 include/linux/mm.h                           |   7 +
 include/linux/mm_inline.h                    | 105 ++++++++++++++++++---------
 include/linux/mm_types.h                     |   5 +
 kernel/configs/debug.config                  |   2
 kernel/fork.c                                |   4 -
 kernel/sys.c                                 |  19 +++-
 mm/madvise.c                                 |  98 +++++++++----------------
 mm/memfd.c                                   |  40 +++++++---
 mm/mempolicy.c                               |   2
 mm/mlock.c                                   |   2
 mm/mmap.c                                    |  12 +--
 mm/mprotect.c                                |   2
 tools/testing/selftests/vm/hugepage-mremap.c |  26 ++++--
 tools/testing/selftests/vm/run_vmtests.sh    |   3
 tools/testing/selftests/vm/userfaultfd.c     |   1
 18 files changed, 201 insertions(+), 144 deletions(-)
* incoming
@ 2022-02-26  3:10 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2022-02-26 3:10 UTC (permalink / raw)
To: Linus Torvalds; +Cc: mm-commits, linux-mm, patches

12 patches, based on c47658311d60be064b839f329c0e4d34f5f0735b.

Subsystems affected by this patch series:

  MAINTAINERS mm/hugetlb mm/kasan mm/hugetlbfs mm/pagemap mm/selftests
  mm/memcg m/slab mailmap memfd

Subsystem: MAINTAINERS

    Luis Chamberlain <mcgrof@kernel.org>:
      MAINTAINERS: add sysctl-next git tree

Subsystem: mm/hugetlb

    "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>:
      mm/hugetlb: fix kernel crash with hugetlb mremap

Subsystem: mm/kasan

    Andrey Konovalov <andreyknvl@google.com>:
      kasan: test: prevent cache merging in kmem_cache_double_destroy

Subsystem: mm/hugetlbfs

    Liu Yuntao <liuyuntao10@huawei.com>:
      hugetlbfs: fix a truncation issue in hugepages parameter

Subsystem: mm/pagemap

    Suren Baghdasaryan <surenb@google.com>:
      mm: fix use-after-free bug when mm->mmap is reused after being freed

Subsystem: mm/selftests

    "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>:
      selftest/vm: fix map_fixed_noreplace test failure

Subsystem: mm/memcg

    Roman Gushchin <roman.gushchin@linux.dev>:
      MAINTAINERS: add Roman as a memcg co-maintainer

    Vladimir Davydov <vdavydov.dev@gmail.com>:
      MAINTAINERS: remove Vladimir from memcg maintainers

    Shakeel Butt <shakeelb@google.com>:
      MAINTAINERS: add Shakeel as a memcg co-maintainer

Subsystem: m/slab

    Vlastimil Babka <vbabka@suse.cz>:
      MAINTAINERS, SLAB: add Roman as reviewer, git tree

Subsystem: mailmap

    Roman Gushchin <roman.gushchin@linux.dev>:
      mailmap: update Roman Gushchin's email

Subsystem: memfd

    Mike Kravetz <mike.kravetz@oracle.com>:
      selftests/memfd: clean up mapping in mfd_fail_write

 .mailmap                                         |  3 +
 MAINTAINERS                                      |  6 ++
 lib/test_kasan.c                                 |  5 +-
 mm/hugetlb.c                                     | 11 ++---
 mm/mmap.c                                        |  1
 tools/testing/selftests/memfd/memfd_test.c       |  1
 tools/testing/selftests/vm/map_fixed_noreplace.c | 49 +++++++++++++++++------
 7 files changed, 56 insertions(+), 20 deletions(-)
* incoming
@ 2022-02-12  0:27 Andrew Morton
  2022-02-12  2:02 ` incoming Linus Torvalds
  0 siblings, 1 reply; 349+ messages in thread
From: Andrew Morton @ 2022-02-12 0:27 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits, patches

5 patches, based on f1baf68e1383f6ed93eb9cff2866d46562607a43.

Subsystems affected by this patch series:

  binfmt procfs mm/vmscan mm/memcg mm/kfence

Subsystem: binfmt

    Mike Rapoport <rppt@linux.ibm.com>:
      fs/binfmt_elf: fix PT_LOAD p_align values for loaders

Subsystem: procfs

    Yang Shi <shy828301@gmail.com>:
      fs/proc: task_mmu.c: don't read mapcount for migration entry

Subsystem: mm/vmscan

    Mel Gorman <mgorman@suse.de>:
      mm: vmscan: remove deadlock due to throttling failing to make progress

Subsystem: mm/memcg

    Roman Gushchin <guro@fb.com>:
      mm: memcg: synchronize objcg lists with a dedicated spinlock

Subsystem: mm/kfence

    Peng Liu <liupeng256@huawei.com>:
      kfence: make test case compatible with run time set sample interval

 fs/binfmt_elf.c            |  2 +-
 fs/proc/task_mmu.c         | 40 +++++++++++++++++++++++++++++++---------
 include/linux/kfence.h     |  2 ++
 include/linux/memcontrol.h |  5 +++--
 mm/kfence/core.c           |  3 ++-
 mm/kfence/kfence_test.c    |  8 ++++----
 mm/memcontrol.c            | 10 +++++-----
 mm/vmscan.c                |  4 +++-
 8 files changed, 51 insertions(+), 23 deletions(-)
* Re: incoming
  2022-02-12  0:27 incoming Andrew Morton
@ 2022-02-12  2:02 ` Linus Torvalds
  2022-02-12  5:24   ` incoming Andrew Morton
  0 siblings, 1 reply; 349+ messages in thread

From: Linus Torvalds @ 2022-02-12 2:02 UTC (permalink / raw)
To: Andrew Morton; +Cc: Linux-MM, mm-commits, patches

On Fri, Feb 11, 2022 at 4:27 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> 5 patches, based on f1baf68e1383f6ed93eb9cff2866d46562607a43.

So this *completely* flummoxed 'b4', because you first sent the wrong
series, and then sent the right one in the same thread.

I fetched the emails manually, but honestly, this was confusing even
then, with two "[PATCH x/5]" series where the only way to tell the
right one was basically by date of email. They did arrive in the same
order in my mailbox, but even that wouldn't have been guaranteed if
there had been some mailer delays somewhere..

So next time when you mess up, resend it all as a completely new
series and completely new threading - so with a new header email too.
Please?

And since I'm here, let me just verify that yes, the series you
actually want me to apply is this one (as described by the head
email):

  Subject: [patch 1/5] fs/binfmt_elf: fix PT_LOAD p_align values ..
  Subject: [patch 2/5] fs/proc: task_mmu.c: don't read mapcount f..
  Subject: [patch 3/5] mm: vmscan: remove deadlock due to throttl..
  Subject: [patch 4/5] mm: memcg: synchronize objcg lists with a ..
  Subject: [patch 5/5] kfence: make test case compatible with run..

and not the other one with GUP patches?

               Linus
* Re: incoming
  2022-02-12  2:02 ` incoming Linus Torvalds
@ 2022-02-12  5:24   ` Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2022-02-12 5:24 UTC (permalink / raw)
To: Linus Torvalds; +Cc: Linux-MM, mm-commits, patches

On Fri, 11 Feb 2022 18:02:53 -0800 Linus Torvalds <torvalds@linux-foundation.org> wrote:

> On Fri, Feb 11, 2022 at 4:27 PM Andrew Morton <akpm@linux-foundation.org> wrote:
> >
> > 5 patches, based on f1baf68e1383f6ed93eb9cff2866d46562607a43.
>
> So this *completely* flummoxed 'b4', because you first sent the wrong
> series, and then sent the right one in the same thread.
>
> I fetched the emails manually, but honestly, this was confusing even
> then, with two "[PATCH x/5]" series where the only way to tell the
> right one was basically by date of email. They did arrive in the same
> order in my mailbox, but even that wouldn't have been guaranteed if
> there had been some mailer delays somewhere..

Yes, I wondered. Sorry bout that.

> So next time when you mess up, resend it all as a completely new
> series and completely new threading - so with a new header email too.
> Please?

Wilco.

> And since I'm here, let me just verify that yes, the series you
> actually want me to apply is this one (as described by the head
> email):
>
> Subject: [patch 1/5] fs/binfmt_elf: fix PT_LOAD p_align values ..
> Subject: [patch 2/5] fs/proc: task_mmu.c: don't read mapcount f..
> Subject: [patch 3/5] mm: vmscan: remove deadlock due to throttl..
> Subject: [patch 4/5] mm: memcg: synchronize objcg lists with a ..
> Subject: [patch 5/5] kfence: make test case compatible with run..
>
> and not the other one with GUP patches?

Those are the ones. Five fixes, three with cc:stable.
* incoming
@ 2022-02-04  4:48 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2022-02-04 4:48 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits

10 patches, based on 1f2cfdd349b7647f438c1e552dc1b983da86d830.

Subsystems affected by this patch series: mm/vmscan mm/debug mm/pagemap ipc
mm/kmemleak MAINTAINERS mm/selftests

Subsystem: mm/vmscan

  Chen Wandun <chenwandun@huawei.com>:
    Revert "mm/page_isolation: unset migratetype directly for non Buddy page"

Subsystem: mm/debug

  Pasha Tatashin <pasha.tatashin@soleen.com>:
    Patch series "page table check fixes and cleanups", v5:
      mm/debug_vm_pgtable: remove pte entry from the page table
      mm/page_table_check: use unsigned long for page counters and cleanup
      mm/khugepaged: unify collapse pmd clear, flush and free
      mm/page_table_check: check entries at pmd levels

Subsystem: mm/pagemap

  Mike Rapoport <rppt@linux.ibm.com>:
    mm/pgtable: define pte_index so that preprocessor could recognize it

Subsystem: ipc

  Minghao Chi <chi.minghao@zte.com.cn>:
    ipc/sem: do not sleep with a spin lock held

Subsystem: mm/kmemleak

  Lang Yu <lang.yu@amd.com>:
    mm/kmemleak: avoid scanning potential huge holes

Subsystem: MAINTAINERS

  Mike Rapoport <rppt@linux.ibm.com>:
    MAINTAINERS: update rppt's email

Subsystem: mm/selftests

  Shuah Khan <skhan@linuxfoundation.org>:
    kselftest/vm: revert "tools/testing/selftests/vm/userfaultfd.c: use swap() to make code cleaner"

 MAINTAINERS                              |  2 -
 include/linux/page_table_check.h         | 19 ++++++++++
 include/linux/pgtable.h                  |  1
 ipc/sem.c                                |  4 +-
 mm/debug_vm_pgtable.c                    |  2 +
 mm/khugepaged.c                          | 37 +++++++++++---------
 mm/kmemleak.c                            | 13 +++----
 mm/page_isolation.c                      |  2 -
 mm/page_table_check.c                    | 55 +++++++++++++++----------------
 tools/testing/selftests/vm/userfaultfd.c | 11 ++++--
 10 files changed, 89 insertions(+), 57 deletions(-)
* incoming @ 2022-01-29 21:40 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2022-01-29 21:40 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits 12 patches, based on f8c7e4ede46fe63ff10000669652648aab09d112. Subsystems affected by this patch series: sysctl binfmt ia64 mm/memory-failure mm/folios selftests mm/kasan mm/psi ocfs2 Subsystem: sysctl Andrew Morton <akpm@linux-foundation.org>: include/linux/sysctl.h: fix register_sysctl_mount_point() return type Subsystem: binfmt Tong Zhang <ztong0001@gmail.com>: binfmt_misc: fix crash when load/unload module Subsystem: ia64 Randy Dunlap <rdunlap@infradead.org>: ia64: make IA64_MCA_RECOVERY bool instead of tristate Subsystem: mm/memory-failure Joao Martins <joao.m.martins@oracle.com>: memory-failure: fetch compound_head after pgmap_pfn_valid() Subsystem: mm/folios Wei Yang <richard.weiyang@gmail.com>: mm: page->mapping folio->mapping should have the same offset Subsystem: selftests Maor Gottlieb <maorg@nvidia.com>: tools/testing/scatterlist: add missing defines Subsystem: mm/kasan Marco Elver <elver@google.com>: kasan: test: fix compatibility with FORTIFY_SOURCE Peter Collingbourne <pcc@google.com>: mm, kasan: use compare-exchange operation to set KASAN page tag Subsystem: mm/psi Suren Baghdasaryan <surenb@google.com>: psi: fix "no previous prototype" warnings when CONFIG_CGROUPS=n psi: fix "defined but not used" warnings when CONFIG_PROC_FS=n Subsystem: ocfs2 Joseph Qi <joseph.qi@linux.alibaba.com>: Patch series "ocfs2: fix a deadlock case": jbd2: export jbd2_journal_[grab|put]_journal_head ocfs2: fix a deadlock when commit trans arch/ia64/Kconfig | 2 fs/binfmt_misc.c | 8 +-- fs/jbd2/journal.c | 2 fs/ocfs2/suballoc.c | 25 ++++------- include/linux/mm.h | 17 +++++-- include/linux/mm_types.h | 1 include/linux/psi.h | 11 ++-- include/linux/sysctl.h | 2 kernel/sched/psi.c | 79 ++++++++++++++++++----------------- lib/test_kasan.c | 5 ++ mm/memory-failure.c | 6 ++ 
tools/testing/scatterlist/linux/mm.h | 3 - 12 files changed, 91 insertions(+), 70 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
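[Editorial aside: one item in the series above, "mm, kasan: use compare-exchange operation to set KASAN page tag", replaces a plain read-modify-write of the tag bits in `page->flags` with a compare-exchange loop, so a concurrent update to the other flag bits cannot be silently lost. A rough userspace illustration of that pattern follows — the field layout, width, and names here are invented for the sketch and are not the kernel's:]

```c
#include <stdatomic.h>
#include <stdint.h>

/* Pretend the top byte of a 64-bit "flags" word holds a KASAN-style
 * memory tag, with unrelated flag bits packed below it. */
#define TAG_SHIFT 56
#define TAG_MASK  (0xffUL << TAG_SHIFT)

static _Atomic uint64_t flags;

/* Install a new tag with a CAS loop: recompute the desired word from
 * the freshly observed value on every retry, so concurrent writers of
 * the non-tag bits are never overwritten with stale data. */
static void set_tag(uint8_t tag)
{
	uint64_t old = atomic_load(&flags);
	uint64_t new;

	do {
		new = (old & ~TAG_MASK) | ((uint64_t)tag << TAG_SHIFT);
		/* On failure, 'old' is reloaded with the current value. */
	} while (!atomic_compare_exchange_weak(&flags, &old, new));
}
```

[The non-atomic version (`flags = (flags & ~TAG_MASK) | ...`) is a lost-update race if any other path modifies `flags` concurrently; the loop above is the standard fix when no wider lock protects the word.]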
* incoming @ 2022-01-29 2:13 Andrew Morton 2022-01-29 4:25 ` incoming Matthew Wilcox 0 siblings, 1 reply; 349+ messages in thread From: Andrew Morton @ 2022-01-29 2:13 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm 12 patches, based on 169387e2aa291a4e3cb856053730fe99d6cec06f. Subsystems affected by this patch series: sysctl binfmt ia64 mm/memory-failure mm/folios selftests mm/kasan mm/psi ocfs2 Subsystem: sysctl Andrew Morton <akpm@linux-foundation.org>: include/linux/sysctl.h: fix register_sysctl_mount_point() return type Subsystem: binfmt Tong Zhang <ztong0001@gmail.com>: binfmt_misc: fix crash when load/unload module Subsystem: ia64 Randy Dunlap <rdunlap@infradead.org>: ia64: make IA64_MCA_RECOVERY bool instead of tristate Subsystem: mm/memory-failure Joao Martins <joao.m.martins@oracle.com>: memory-failure: fetch compound_head after pgmap_pfn_valid() Subsystem: mm/folios Wei Yang <richard.weiyang@gmail.com>: mm: page->mapping folio->mapping should have the same offset Subsystem: selftests Maor Gottlieb <maorg@nvidia.com>: tools/testing/scatterlist: add missing defines Subsystem: mm/kasan Marco Elver <elver@google.com>: kasan: test: fix compatibility with FORTIFY_SOURCE Peter Collingbourne <pcc@google.com>: mm, kasan: use compare-exchange operation to set KASAN page tag Subsystem: mm/psi Suren Baghdasaryan <surenb@google.com>: psi: fix "no previous prototype" warnings when CONFIG_CGROUPS=n psi: fix "defined but not used" warnings when CONFIG_PROC_FS=n Subsystem: ocfs2 Joseph Qi <joseph.qi@linux.alibaba.com>: Patch series "ocfs2: fix a deadlock case": jbd2: export jbd2_journal_[grab|put]_journal_head ocfs2: fix a deadlock when commit trans arch/ia64/Kconfig | 2 fs/binfmt_misc.c | 8 +-- fs/jbd2/journal.c | 2 fs/ocfs2/suballoc.c | 25 ++++------- include/linux/mm.h | 17 +++++-- include/linux/mm_types.h | 1 include/linux/psi.h | 11 ++-- include/linux/sysctl.h | 2 kernel/sched/psi.c | 79 ++++++++++++++++++----------------- lib/test_kasan.c | 5 ++ 
mm/memory-failure.c | 6 ++ tools/testing/scatterlist/linux/mm.h | 3 - 12 files changed, 91 insertions(+), 70 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming
  2022-01-29  2:13 incoming Andrew Morton
@ 2022-01-29  4:25 ` Matthew Wilcox
  2022-01-29  6:23   ` incoming Andrew Morton
  0 siblings, 1 reply; 349+ messages in thread

From: Matthew Wilcox @ 2022-01-29 4:25 UTC (permalink / raw)
To: Andrew Morton; +Cc: Linus Torvalds, mm-commits, linux-mm

On Fri, Jan 28, 2022 at 06:13:41PM -0800, Andrew Morton wrote:
> 12 patches, based on 169387e2aa291a4e3cb856053730fe99d6cec06f.
  ^^

I see 7?
* Re: incoming
  2022-01-29  4:25 ` incoming Matthew Wilcox
@ 2022-01-29  6:23   ` Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2022-01-29 6:23 UTC (permalink / raw)
To: Matthew Wilcox; +Cc: Linus Torvalds, mm-commits, linux-mm

On Sat, 29 Jan 2022 04:25:33 +0000 Matthew Wilcox <willy@infradead.org> wrote:

> On Fri, Jan 28, 2022 at 06:13:41PM -0800, Andrew Morton wrote:
> > 12 patches, based on 169387e2aa291a4e3cb856053730fe99d6cec06f.
>   ^^
>
> I see 7?

Crap, sorry, ignore all this, shall redo tomorrow.

(It wasn't a good day over here. The thing with disk drives is that
the bigger they are, the harder they fall).
* incoming @ 2022-01-22 6:10 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2022-01-22 6:10 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits This is the post-linux-next queue. Material which was based on or dependent upon material which was in -next. 69 patches, based on 9b57f458985742bd1c585f4c7f36d04634ce1143. Subsystems affected by this patch series: mm/migration sysctl mm/zsmalloc proc lib Subsystem: mm/migration Alistair Popple <apopple@nvidia.com>: mm/migrate.c: rework migration_entry_wait() to not take a pageref Subsystem: sysctl Xiaoming Ni <nixiaoming@huawei.com>: Patch series "sysctl: first set of kernel/sysctl cleanups", v2: sysctl: add a new register_sysctl_init() interface sysctl: move some boundary constants from sysctl.c to sysctl_vals hung_task: move hung_task sysctl interface to hung_task.c watchdog: move watchdog sysctl interface to watchdog.c Stephen Kitt <steve@sk2.org>: sysctl: make ngroups_max const Xiaoming Ni <nixiaoming@huawei.com>: sysctl: use const for typically used max/min proc sysctls sysctl: use SYSCTL_ZERO to replace some static int zero uses aio: move aio sysctl to aio.c dnotify: move dnotify sysctl to dnotify.c Luis Chamberlain <mcgrof@kernel.org>: Patch series "sysctl: second set of kernel/sysctl cleanups", v2: hpet: simplify subdirectory registration with register_sysctl() i915: simplify subdirectory registration with register_sysctl() macintosh/mac_hid.c: simplify subdirectory registration with register_sysctl() ocfs2: simplify subdirectory registration with register_sysctl() test_sysctl: simplify subdirectory registration with register_sysctl() Xiaoming Ni <nixiaoming@huawei.com>: inotify: simplify subdirectory registration with register_sysctl() Luis Chamberlain <mcgrof@kernel.org>: cdrom: simplify subdirectory registration with register_sysctl() Xiaoming Ni <nixiaoming@huawei.com>: eventpoll: simplify sysctl declaration with register_sysctl() Patch series "sysctl: 3rd set 
of kernel/sysctl cleanups", v2: firmware_loader: move firmware sysctl to its own files random: move the random sysctl declarations to its own file Luis Chamberlain <mcgrof@kernel.org>: sysctl: add helper to register a sysctl mount point fs: move binfmt_misc sysctl to its own file Xiaoming Ni <nixiaoming@huawei.com>: printk: move printk sysctl to printk/sysctl.c scsi/sg: move sg-big-buff sysctl to scsi/sg.c stackleak: move stack_erasing sysctl to stackleak.c Luis Chamberlain <mcgrof@kernel.org>: sysctl: share unsigned long const values Patch series "sysctl: 4th set of kernel/sysctl cleanups": fs: move inode sysctls to its own file fs: move fs stat sysctls to file_table.c fs: move dcache sysctls to its own file sysctl: move maxolduid as a sysctl specific const fs: move shared sysctls to fs/sysctls.c fs: move locking sysctls where they are used fs: move namei sysctls to its own file fs: move fs/exec.c sysctls into its own file fs: move pipe sysctls to is own file Patch series "sysctl: add and use base directory declarer and registration helper": sysctl: add and use base directory declarer and registration helper fs: move namespace sysctls and declare fs base directory kernel/sysctl.c: rename sysctl_init() to sysctl_init_bases() Xiaoming Ni <nixiaoming@huawei.com>: printk: fix build warning when CONFIG_PRINTK=n fs/coredump: move coredump sysctls into its own file kprobe: move sysctl_kprobes_optimization to kprobes.c Colin Ian King <colin.i.king@gmail.com>: kernel/sysctl.c: remove unused variable ten_thousand Baokun Li <libaokun1@huawei.com>: sysctl: returns -EINVAL when a negative value is passed to proc_doulongvec_minmax Subsystem: mm/zsmalloc Minchan Kim <minchan@kernel.org>: Patch series "zsmalloc: remove bit_spin_lock", v2: zsmalloc: introduce some helper functions zsmalloc: rename zs_stat_type to class_stat_type zsmalloc: decouple class actions from zspage works zsmalloc: introduce obj_allocated zsmalloc: move huge compressed obj from page to zspage zsmalloc: 
remove zspage isolation for migration locking/rwlocks: introduce write_lock_nested zsmalloc: replace per zpage lock with pool->migrate_lock Mike Galbraith <umgwanakikbuti@gmail.com>: zsmalloc: replace get_cpu_var with local_lock Subsystem: proc Muchun Song <songmuchun@bytedance.com>: fs: proc: store PDE()->data into inode->i_private proc: remove PDE_DATA() completely Subsystem: lib Vlastimil Babka <vbabka@suse.cz>: lib/stackdepot: allow optional init and stack_table allocation by kvmalloc() lib/stackdepot: fix spelling mistake and grammar in pr_err message lib/stackdepot: allow optional init and stack_table allocation by kvmalloc() - fixup lib/stackdepot: allow optional init and stack_table allocation by kvmalloc() - fixup3 lib/stackdepot: allow optional init and stack_table allocation by kvmalloc() - fixup4 Marco Elver <elver@google.com>: lib/stackdepot: always do filter_irq_stacks() in stack_depot_save() Christoph Hellwig <hch@lst.de>: Patch series "remove Xen tmem leftovers": mm: remove cleancache frontswap: remove frontswap_writethrough frontswap: remove frontswap_tmem_exclusive_gets frontswap: remove frontswap_shrink frontswap: remove frontswap_curr_pages frontswap: simplify frontswap_init frontswap: remove the frontswap exports mm: simplify try_to_unuse frontswap: remove frontswap_test frontswap: simplify frontswap_register_ops mm: mark swap_lock and swap_active_head static frontswap: remove support for multiple ops mm: hide the FRONTSWAP Kconfig symbol Documentation/vm/cleancache.rst | 296 ------ Documentation/vm/frontswap.rst | 31 Documentation/vm/index.rst | 1 MAINTAINERS | 7 arch/alpha/kernel/srm_env.c | 4 arch/arm/configs/bcm2835_defconfig | 1 arch/arm/configs/qcom_defconfig | 1 arch/arm/kernel/atags_proc.c | 2 arch/arm/mm/alignment.c | 2 arch/ia64/kernel/salinfo.c | 10 arch/m68k/configs/amiga_defconfig | 1 arch/m68k/configs/apollo_defconfig | 1 arch/m68k/configs/atari_defconfig | 1 arch/m68k/configs/bvme6000_defconfig | 1 
arch/m68k/configs/hp300_defconfig | 1 arch/m68k/configs/mac_defconfig | 1 arch/m68k/configs/multi_defconfig | 1 arch/m68k/configs/mvme147_defconfig | 1 arch/m68k/configs/mvme16x_defconfig | 1 arch/m68k/configs/q40_defconfig | 1 arch/m68k/configs/sun3_defconfig | 1 arch/m68k/configs/sun3x_defconfig | 1 arch/powerpc/kernel/proc_powerpc.c | 4 arch/s390/configs/debug_defconfig | 1 arch/s390/configs/defconfig | 1 arch/sh/mm/alignment.c | 4 arch/xtensa/platforms/iss/simdisk.c | 4 block/bdev.c | 5 drivers/acpi/proc.c | 2 drivers/base/firmware_loader/fallback.c | 7 drivers/base/firmware_loader/fallback.h | 11 drivers/base/firmware_loader/fallback_table.c | 25 drivers/cdrom/cdrom.c | 23 drivers/char/hpet.c | 22 drivers/char/random.c | 14 drivers/gpu/drm/drm_dp_mst_topology.c | 1 drivers/gpu/drm/drm_mm.c | 4 drivers/gpu/drm/drm_modeset_lock.c | 9 drivers/gpu/drm/i915/i915_perf.c | 22 drivers/gpu/drm/i915/intel_runtime_pm.c | 3 drivers/hwmon/dell-smm-hwmon.c | 4 drivers/macintosh/mac_hid.c | 24 drivers/net/bonding/bond_procfs.c | 8 drivers/net/wireless/cisco/airo.c | 22 drivers/net/wireless/intersil/hostap/hostap_ap.c | 16 drivers/net/wireless/intersil/hostap/hostap_download.c | 2 drivers/net/wireless/intersil/hostap/hostap_proc.c | 24 drivers/net/wireless/ray_cs.c | 2 drivers/nubus/proc.c | 36 drivers/parisc/led.c | 4 drivers/pci/proc.c | 10 drivers/platform/x86/thinkpad_acpi.c | 4 drivers/platform/x86/toshiba_acpi.c | 16 drivers/pnp/isapnp/proc.c | 2 drivers/pnp/pnpbios/proc.c | 4 drivers/scsi/scsi_proc.c | 4 drivers/scsi/sg.c | 35 drivers/usb/gadget/function/rndis.c | 4 drivers/zorro/proc.c | 2 fs/Makefile | 4 fs/afs/proc.c | 6 fs/aio.c | 31 fs/binfmt_misc.c | 6 fs/btrfs/extent_io.c | 10 fs/btrfs/super.c | 2 fs/coredump.c | 66 + fs/dcache.c | 37 fs/eventpoll.c | 10 fs/exec.c | 145 +-- fs/ext4/mballoc.c | 14 fs/ext4/readpage.c | 6 fs/ext4/super.c | 3 fs/f2fs/data.c | 13 fs/file_table.c | 47 - fs/inode.c | 39 fs/jbd2/journal.c | 2 fs/locks.c | 34 fs/mpage.c | 7 fs/namei.c | 
58 + fs/namespace.c | 24 fs/notify/dnotify/dnotify.c | 21 fs/notify/fanotify/fanotify_user.c | 10 fs/notify/inotify/inotify_user.c | 11 fs/ntfs3/ntfs_fs.h | 1 fs/ocfs2/stackglue.c | 25 fs/ocfs2/super.c | 2 fs/pipe.c | 64 + fs/proc/generic.c | 6 fs/proc/inode.c | 1 fs/proc/internal.h | 5 fs/proc/proc_net.c | 8 fs/proc/proc_sysctl.c | 67 + fs/super.c | 3 fs/sysctls.c | 47 - include/linux/aio.h | 4 include/linux/cleancache.h | 124 -- include/linux/coredump.h | 10 include/linux/dcache.h | 10 include/linux/dnotify.h | 1 include/linux/fanotify.h | 2 include/linux/frontswap.h | 35 include/linux/fs.h | 18 include/linux/inotify.h | 3 include/linux/kprobes.h | 6 include/linux/migrate.h | 2 include/linux/mount.h | 3 include/linux/pipe_fs_i.h | 4 include/linux/poll.h | 2 include/linux/printk.h | 4 include/linux/proc_fs.h | 17 include/linux/ref_tracker.h | 2 include/linux/rwlock.h | 6 include/linux/rwlock_api_smp.h | 8 include/linux/rwlock_rt.h | 10 include/linux/sched/sysctl.h | 14 include/linux/seq_file.h | 2 include/linux/shmem_fs.h | 3 include/linux/spinlock_api_up.h | 1 include/linux/stackdepot.h | 25 include/linux/stackleak.h | 5 include/linux/swapfile.h | 3 include/linux/sysctl.h | 67 + include/scsi/sg.h | 4 init/main.c | 9 ipc/util.c | 2 kernel/hung_task.c | 81 + kernel/irq/proc.c | 8 kernel/kprobes.c | 30 kernel/locking/spinlock.c | 10 kernel/locking/spinlock_rt.c | 12 kernel/printk/Makefile | 5 kernel/printk/internal.h | 8 kernel/printk/printk.c | 4 kernel/printk/sysctl.c | 85 + kernel/resource.c | 4 kernel/stackleak.c | 26 kernel/sysctl.c | 790 +---------------- kernel/watchdog.c | 101 ++ lib/Kconfig | 4 lib/Kconfig.kasan | 2 lib/stackdepot.c | 46 lib/test_sysctl.c | 22 mm/Kconfig | 40 mm/Makefile | 1 mm/cleancache.c | 315 ------ mm/filemap.c | 102 +- mm/frontswap.c | 259 ----- mm/kasan/common.c | 1 mm/migrate.c | 38 mm/page_owner.c | 2 mm/shmem.c | 33 mm/swapfile.c | 90 - mm/truncate.c | 15 mm/zsmalloc.c | 557 ++++------- mm/zswap.c | 8 net/atm/proc.c | 4 
net/bluetooth/af_bluetooth.c | 8 net/can/bcm.c | 2 net/can/proc.c | 2 net/core/neighbour.c | 6 net/core/pktgen.c | 6 net/ipv4/netfilter/ipt_CLUSTERIP.c | 6 net/ipv4/raw.c | 8 net/ipv4/tcp_ipv4.c | 2 net/ipv4/udp.c | 6 net/netfilter/x_tables.c | 10 net/netfilter/xt_hashlimit.c | 18 net/netfilter/xt_recent.c | 4 net/sunrpc/auth_gss/svcauth_gss.c | 4 net/sunrpc/cache.c | 24 net/sunrpc/stats.c | 2 sound/core/info.c | 4 172 files changed, 1877 insertions(+), 2931 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2022-01-20 2:07 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2022-01-20 2:07 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits 55 patches, based on df0cc57e057f18e44dac8e6c18aba47ab53202f9 ("Linux 5.16") Subsystems affected by this patch series: percpu procfs sysctl misc core-kernel get_maintainer lib checkpatch binfmt nilfs2 hfs fat adfs panic delayacct kconfig kcov ubsan Subsystem: percpu Kefeng Wang <wangkefeng.wang@huawei.com>: Patch series "mm: percpu: Cleanup percpu first chunk function": mm: percpu: generalize percpu related config mm: percpu: add pcpu_fc_cpu_to_node_fn_t typedef mm: percpu: add generic pcpu_fc_alloc/free funciton mm: percpu: add generic pcpu_populate_pte() function Subsystem: procfs David Hildenbrand <david@redhat.com>: proc/vmcore: don't fake reading zeroes on surprise vmcore_cb unregistration Hans de Goede <hdegoede@redhat.com>: proc: make the proc_create[_data]() stubs static inlines Qi Zheng <zhengqi.arch@bytedance.com>: proc: convert the return type of proc_fd_access_allowed() to be boolean Subsystem: sysctl Geert Uytterhoeven <geert+renesas@glider.be>: sysctl: fix duplicate path separator in printed entries luo penghao <luo.penghao@zte.com.cn>: sysctl: remove redundant ret assignment Subsystem: misc Andy Shevchenko <andriy.shevchenko@linux.intel.com>: include/linux/unaligned: replace kernel.h with the necessary inclusions kernel.h: include a note to discourage people from including it in headers Subsystem: core-kernel Yafang Shao <laoar.shao@gmail.com>: Patch series "task comm cleanups", v2: fs/exec: replace strlcpy with strscpy_pad in __set_task_comm fs/exec: replace strncpy with strscpy_pad in __get_task_comm drivers/infiniband: replace open-coded string copy with get_task_comm fs/binfmt_elf: replace open-coded string copy with get_task_comm samples/bpf/test_overhead_kprobe_kern: replace bpf_probe_read_kernel with bpf_probe_read_kernel_str to get task comm 
tools/bpf/bpftool/skeleton: replace bpf_probe_read_kernel with bpf_probe_read_kernel_str to get task comm tools/testing/selftests/bpf: replace open-coded 16 with TASK_COMM_LEN kthread: dynamically allocate memory to store kthread's full name Davidlohr Bueso <dave@stgolabs.net>: kernel/sys.c: only take tasklist_lock for get/setpriority(PRIO_PGRP) Subsystem: get_maintainer Randy Dunlap <rdunlap@infradead.org>: get_maintainer: don't remind about no git repo when --nogit is used Subsystem: lib Alexey Dobriyan <adobriyan@gmail.com>: kstrtox: uninline everything Andy Shevchenko <andriy.shevchenko@linux.intel.com>: list: introduce list_is_head() helper and re-use it in list.h Zhen Lei <thunder.leizhen@huawei.com>: lib/list_debug.c: print more list debugging context in __list_del_entry_valid() Isabella Basso <isabbasso@riseup.net>: Patch series "test_hash.c: refactor into KUnit", v3: hash.h: remove unused define directive test_hash.c: split test_int_hash into arch-specific functions test_hash.c: split test_hash_init lib/Kconfig.debug: properly split hash test kernel entries test_hash.c: refactor into kunit Andy Shevchenko <andriy.shevchenko@linux.intel.com>: kunit: replace kernel.h with the necessary inclusions uuid: discourage people from using UAPI header in new code uuid: remove licence boilerplate text from the header Andrey Konovalov <andreyknvl@google.com>: lib/test_meminit: destroy cache in kmem_cache_alloc_bulk() test Subsystem: checkpatch Jerome Forissier <jerome@forissier.org>: checkpatch: relax regexp for COMMIT_LOG_LONG_LINE Joe Perches <joe@perches.com>: checkpatch: improve Kconfig help test Rikard Falkeborn <rikard.falkeborn@gmail.com>: const_structs.checkpatch: add frequently used ops structs Subsystem: binfmt "H.J. 
Lu" <hjl.tools@gmail.com>: fs/binfmt_elf: use PT_LOAD p_align values for static PIE Subsystem: nilfs2 Colin Ian King <colin.i.king@gmail.com>: nilfs2: remove redundant pointer sbufs Subsystem: hfs Kees Cook <keescook@chromium.org>: hfsplus: use struct_group_attr() for memcpy() region Subsystem: fat "NeilBrown" <neilb@suse.de>: FAT: use io_schedule_timeout() instead of congestion_wait() Subsystem: adfs Minghao Chi <chi.minghao@zte.com.cn>: fs/adfs: remove unneeded variable make code cleaner Subsystem: panic Marco Elver <elver@google.com>: panic: use error_report_end tracepoint on warnings Sebastian Andrzej Siewior <bigeasy@linutronix.de>: panic: remove oops_id Subsystem: delayacct Yang Yang <yang.yang29@zte.com.cn>: delayacct: support swapin delay accounting for swapping without blkio delayacct: fix incomplete disable operation when switch enable to disable delayacct: cleanup flags in struct task_delay_info and functions use it wangyong <wang.yong12@zte.com.cn>: Documentation/accounting/delay-accounting.rst: add thrashing page cache and direct compact delayacct: track delays from memory compact Subsystem: kconfig Qian Cai <quic_qiancai@quicinc.com>: configs: introduce debug.config for CI-like setup Nathan Chancellor <nathan@kernel.org>: Patch series "Fix CONFIG_TEST_KMOD with 256kB page size": arch/Kconfig: split PAGE_SIZE_LESS_THAN_256KB from PAGE_SIZE_LESS_THAN_64KB btrfs: use generic Kconfig option for 256kB page size limit lib/Kconfig.debug: make TEST_KMOD depend on PAGE_SIZE_LESS_THAN_256KB Subsystem: kcov Marco Elver <elver@google.com>: kcov: fix generic Kconfig dependencies if ARCH_WANTS_NO_INSTR Subsystem: ubsan Kees Cook <keescook@chromium.org>: ubsan: remove CONFIG_UBSAN_OBJECT_SIZE Colin Ian King <colin.i.king@gmail.com>: lib: remove redundant assignment to variable ret Documentation/accounting/delay-accounting.rst | 63 +- arch/Kconfig | 4 arch/arm64/Kconfig | 20 arch/ia64/Kconfig | 9 arch/mips/Kconfig | 10 arch/mips/mm/init.c | 28 - arch/powerpc/Kconfig 
| 17 arch/powerpc/kernel/setup_64.c | 113 ---- arch/riscv/Kconfig | 10 arch/sparc/Kconfig | 12 arch/sparc/kernel/led.c | 8 arch/sparc/kernel/smp_64.c | 119 ----- arch/x86/Kconfig | 19 arch/x86/kernel/setup_percpu.c | 82 --- drivers/base/arch_numa.c | 78 --- drivers/infiniband/hw/qib/qib.h | 2 drivers/infiniband/hw/qib/qib_file_ops.c | 2 drivers/infiniband/sw/rxe/rxe_qp.c | 3 drivers/net/wireless/broadcom/brcm80211/brcmfmac/xtlv.c | 2 fs/adfs/inode.c | 4 fs/binfmt_elf.c | 6 fs/btrfs/Kconfig | 3 fs/exec.c | 5 fs/fat/file.c | 5 fs/hfsplus/hfsplus_raw.h | 12 fs/hfsplus/xattr.c | 4 fs/nilfs2/page.c | 4 fs/proc/array.c | 3 fs/proc/base.c | 4 fs/proc/proc_sysctl.c | 9 fs/proc/vmcore.c | 10 include/kunit/assert.h | 2 include/linux/delayacct.h | 107 ++-- include/linux/elfcore-compat.h | 5 include/linux/elfcore.h | 5 include/linux/hash.h | 5 include/linux/kernel.h | 9 include/linux/kthread.h | 1 include/linux/list.h | 36 - include/linux/percpu.h | 21 include/linux/proc_fs.h | 12 include/linux/sched.h | 9 include/linux/unaligned/packed_struct.h | 2 include/trace/events/error_report.h | 8 include/uapi/linux/taskstats.h | 6 include/uapi/linux/uuid.h | 10 kernel/configs/debug.config | 105 ++++ kernel/delayacct.c | 49 +- kernel/kthread.c | 32 + kernel/panic.c | 21 kernel/sys.c | 16 lib/Kconfig.debug | 45 + lib/Kconfig.ubsan | 13 lib/Makefile | 5 lib/asn1_encoder.c | 2 lib/kstrtox.c | 12 lib/list_debug.c | 8 lib/lz4/lz4defs.h | 2 lib/test_hash.c | 375 +++++++--------- lib/test_meminit.c | 1 lib/test_ubsan.c | 22 mm/Kconfig | 12 mm/memory.c | 4 mm/page_alloc.c | 3 mm/page_io.c | 3 mm/percpu.c | 168 +++++-- samples/bpf/offwaketime_kern.c | 4 samples/bpf/test_overhead_kprobe_kern.c | 11 samples/bpf/test_overhead_tp_kern.c | 5 scripts/Makefile.ubsan | 1 scripts/checkpatch.pl | 54 +- scripts/const_structs.checkpatch | 23 scripts/get_maintainer.pl | 2 tools/accounting/getdelays.c | 8 tools/bpf/bpftool/skeleton/pid_iter.bpf.c | 4 tools/include/linux/hash.h | 5 
tools/testing/selftests/bpf/progs/test_stacktrace_map.c | 6 tools/testing/selftests/bpf/progs/test_tracepoint.c | 6 78 files changed, 943 insertions(+), 992 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
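[Editorial aside: the series above includes "fs/exec: replace strlcpy with strscpy_pad in __set_task_comm". For readers unfamiliar with the helper, this is a userspace model of `strscpy_pad()`'s documented behavior — the real implementation lives in lib/string.c and is word-at-a-time; the `-1` return here stands in for the kernel's `-E2BIG`:]

```c
#include <string.h>

/* Model of strscpy_pad(): copy at most size-1 bytes of src, always
 * NUL-terminate, zero-fill the remainder of dest, and report either
 * the number of characters copied or -1 on truncation.  Unlike
 * strlcpy(), it never reads src beyond the destination size, and the
 * zero padding avoids leaking stale bytes from a fixed-size buffer
 * such as task->comm. */
static long strscpy_pad_model(char *dest, const char *src, size_t size)
{
	size_t len;

	if (size == 0)
		return -1;

	len = strnlen(src, size);
	if (len == size) {			/* src does not fit: truncate */
		memcpy(dest, src, size - 1);
		dest[size - 1] = '\0';
		return -1;
	}

	memcpy(dest, src, len + 1);		/* includes the NUL */
	memset(dest + len + 1, 0, size - len - 1);
	return (long)len;
}
```

[Contrast with `strlcpy()`, which returns `strlen(src)` — so it must walk the whole source even past `size` — and leaves whatever followed the old string in the buffer untouched.]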
* incoming @ 2022-01-14 22:02 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2022-01-14 22:02 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits 146 patches, based on df0cc57e057f18e44dac8e6c18aba47ab53202f9 ("Linux 5.16") Subsystems affected by this patch series: kthread ia64 scripts ntfs squashfs ocfs2 vfs mm/slab-generic mm/slab mm/kmemleak mm/dax mm/kasan mm/debug mm/pagecache mm/gup mm/shmem mm/frontswap mm/memremap mm/memcg mm/selftests mm/pagemap mm/dma mm/vmalloc mm/memory-failure mm/hugetlb mm/userfaultfd mm/vmscan mm/mempolicy mm/oom-kill mm/hugetlbfs mm/migration mm/thp mm/ksm mm/page-poison mm/percpu mm/rmap mm/zswap mm/zram mm/cleanups mm/hmm mm/damon Subsystem: kthread Cai Huoqing <caihuoqing@baidu.com>: kthread: add the helper function kthread_run_on_cpu() RDMA/siw: make use of the helper function kthread_run_on_cpu() ring-buffer: make use of the helper function kthread_run_on_cpu() rcutorture: make use of the helper function kthread_run_on_cpu() trace/osnoise: make use of the helper function kthread_run_on_cpu() trace/hwlat: make use of the helper function kthread_run_on_cpu() Subsystem: ia64 Yang Guang <yang.guang5@zte.com.cn>: ia64: module: use swap() to make code cleaner arch/ia64/kernel/setup.c: use swap() to make code cleaner Jason Wang <wangborong@cdjrlc.com>: ia64: fix typo in a comment Greg Kroah-Hartman <gregkh@linuxfoundation.org>: ia64: topology: use default_groups in kobj_type Subsystem: scripts Drew Fustini <dfustini@baylibre.com>: scripts/spelling.txt: add "oveflow" Subsystem: ntfs Yang Li <yang.lee@linux.alibaba.com>: fs/ntfs/attrib.c: fix one kernel-doc comment Subsystem: squashfs Zheng Liang <zhengliang6@huawei.com>: squashfs: provide backing_dev_info in order to disable read-ahead Subsystem: ocfs2 Zhang Mingyu <zhang.mingyu@zte.com.cn>: ocfs2: use BUG_ON instead of if condition followed by BUG. 
Joseph Qi <joseph.qi@linux.alibaba.com>: ocfs2: clearly handle ocfs2_grab_pages_for_write() return value Greg Kroah-Hartman <gregkh@linuxfoundation.org>: ocfs2: use default_groups in kobj_type Colin Ian King <colin.i.king@gmail.com>: ocfs2: remove redundant assignment to pointer root_bh Greg Kroah-Hartman <gregkh@linuxfoundation.org>: ocfs2: cluster: use default_groups in kobj_type Colin Ian King <colin.i.king@gmail.com>: ocfs2: remove redundant assignment to variable free_space Subsystem: vfs Amit Daniel Kachhap <amit.kachhap@arm.com>: fs/ioctl: remove unnecessary __user annotation Subsystem: mm/slab-generic Marco Elver <elver@google.com>: mm/slab_common: use WARN() if cache still has objects on destroy Subsystem: mm/slab Muchun Song <songmuchun@bytedance.com>: mm: slab: make slab iterator functions static Subsystem: mm/kmemleak Kuan-Ying Lee <Kuan-Ying.Lee@mediatek.com>: kmemleak: fix kmemleak false positive report with HW tag-based kasan enable Calvin Zhang <calvinzhang.cool@gmail.com>: mm: kmemleak: alloc gray object for reserved region with direct map Kefeng Wang <wangkefeng.wang@huawei.com>: mm: defer kmemleak object creation of module_alloc() Subsystem: mm/dax Joao Martins <joao.m.martins@oracle.com>: Patch series "mm, device-dax: Introduce compound pages in devmap", v7: mm/page_alloc: split prep_compound_page into head and tail subparts mm/page_alloc: refactor memmap_init_zone_device() page init mm/memremap: add ZONE_DEVICE support for compound pages device-dax: use ALIGN() for determining pgoff device-dax: use struct_size() device-dax: ensure dev_dax->pgmap is valid for dynamic devices device-dax: factor out page mapping initialization device-dax: set mapping prior to vmf_insert_pfn{,_pmd,pud}() device-dax: remove pfn from __dev_dax_{pte,pmd,pud}_fault() device-dax: compound devmap support Subsystem: mm/kasan Marco Elver <elver@google.com>: kasan: test: add globals left-out-of-bounds test kasan: add ability to detect double-kmem_cache_destroy() kasan: 
test: add test case for double-kmem_cache_destroy() Andrey Konovalov <andreyknvl@google.com>: kasan: fix quarantine conflicting with init_on_free Subsystem: mm/debug "Matthew Wilcox (Oracle)" <willy@infradead.org>: mm,fs: split dump_mapping() out from dump_page() Anshuman Khandual <anshuman.khandual@arm.com>: mm/debug_vm_pgtable: update comments regarding migration swap entries Subsystem: mm/pagecache chiminghao <chi.minghao@zte.com.cn>: mm/truncate.c: remove unneeded variable Subsystem: mm/gup Christophe Leroy <christophe.leroy@csgroup.eu>: gup: avoid multiple user access locking/unlocking in fault_in_{read/write}able Li Xinhai <lixinhai.lxh@gmail.com>: mm/gup.c: stricter check on THP migration entry during follow_pmd_mask Subsystem: mm/shmem Yang Shi <shy828301@gmail.com>: mm: shmem: don't truncate page if memory failure happens Gang Li <ligang.bdlg@bytedance.com>: shmem: fix a race between shmem_unused_huge_shrink and shmem_evict_inode Subsystem: mm/frontswap Christophe JAILLET <christophe.jaillet@wanadoo.fr>: mm/frontswap.c: use non-atomic '__set_bit()' when possible Subsystem: mm/memremap Subsystem: mm/memcg Muchun Song <songmuchun@bytedance.com>: mm: memcontrol: make cgroup_memory_nokmem static Donghai Qiao <dqiao@redhat.com>: mm/page_counter: remove an incorrect call to propagate_protected_usage() Dan Schatzberg <schatzberg.dan@gmail.com>: mm/memcg: add oom_group_kill memory event Shakeel Butt <shakeelb@google.com>: memcg: better bounds on the memcg stats updates Wang Weiyang <wangweiyang2@huawei.com>: mm/memcg: use struct_size() helper in kzalloc() Shakeel Butt <shakeelb@google.com>: memcg: add per-memcg vmalloc stat Subsystem: mm/selftests chiminghao <chi.minghao@zte.com.cn>: tools/testing/selftests/vm/userfaultfd.c: use swap() to make code cleaner Subsystem: mm/pagemap Qi Zheng <zhengqi.arch@bytedance.com>: mm: remove redundant check about FAULT_FLAG_ALLOW_RETRY bit Colin Cross <ccross@google.com>: Patch series "mm: rearrange madvise code to allow for 
reuse", v11: mm: rearrange madvise code to allow for reuse mm: add a field to store names for private anonymous memory Suren Baghdasaryan <surenb@google.com>: mm: add anonymous vma name refcounting Arnd Bergmann <arnd@arndb.de>: mm: move anon_vma declarations to linux/mm_inline.h mm: move tlb_flush_pending inline helpers to mm_inline.h Suren Baghdasaryan <surenb@google.com>: mm: protect free_pgtables with mmap_lock write lock in exit_mmap mm: document locking restrictions for vm_operations_struct::close mm/oom_kill: allow process_mrelease to run under mmap_lock protection Shuah Khan <skhan@linuxfoundation.org>: docs/vm: add vmalloced-kernel-stacks document Pasha Tatashin <pasha.tatashin@soleen.com>: Patch series "page table check", v3: mm: change page type prior to adding page table entry mm: ptep_clear() page table helper mm: page table check x86: mm: add x86_64 support for page table check "Matthew Wilcox (Oracle)" <willy@infradead.org>: mm: remove last argument of reuse_swap_page() mm: remove the total_mapcount argument from page_trans_huge_map_swapcount() mm: remove the total_mapcount argument from page_trans_huge_mapcount() Subsystem: mm/dma Christian König <christian.koenig@amd.com>: mm/dmapool.c: revert "make dma pool to use kmalloc_node" Subsystem: mm/vmalloc Michal Hocko <mhocko@suse.com>: Patch series "extend vmalloc support for constrained allocations", v2: mm/vmalloc: alloc GFP_NO{FS,IO} for vmalloc mm/vmalloc: add support for __GFP_NOFAIL mm/vmalloc: be more explicit about supported gfp flags. 
mm: allow !GFP_KERNEL allocations for kvmalloc mm: make slab and vmalloc allocators __GFP_NOLOCKDEP aware "NeilBrown" <neilb@suse.de>: mm: introduce memalloc_retry_wait() Suren Baghdasaryan <surenb@google.com>: mm/pagealloc: sysctl: change watermark_scale_factor max limit to 30% Changcheng Deng <deng.changcheng@zte.com.cn>: mm: fix boolreturn.cocci warning Xiongwei Song <sxwjean@gmail.com>: mm: page_alloc: fix building error on -Werror=array-compare Michal Hocko <mhocko@suse.com>: mm: drop node from alloc_pages_vma Miles Chen <miles.chen@mediatek.com>: include/linux/gfp.h: further document GFP_DMA32 Anshuman Khandual <anshuman.khandual@arm.com>: mm/page_alloc.c: modify the comment section for alloc_contig_pages() Baoquan He <bhe@redhat.com>: Patch series "Handle warning of allocation failure on DMA zone w/o managed pages", v4: mm_zone: add function to check if managed dma zone exists dma/pool: create dma atomic pool only if dma zone has managed pages mm/page_alloc.c: do not warn allocation failure on zone DMA if no managed pages Subsystem: mm/memory-failure Subsystem: mm/hugetlb Mina Almasry <almasrymina@google.com>: hugetlb: add hugetlb.*.numa_stat file Yosry Ahmed <yosryahmed@google.com>: mm, hugepages: make memory size variable in hugepage-mremap selftest Yang Yang <yang.yang29@zte.com.cn>: mm/vmstat: add events for THP max_ptes_* exceeds Waiman Long <longman@redhat.com>: selftests/vm: make charge_reserved_hugetlb.sh work with existing cgroup setting Subsystem: mm/userfaultfd Peter Xu <peterx@redhat.com>: selftests/uffd: allow EINTR/EAGAIN Mike Kravetz <mike.kravetz@oracle.com>: userfaultfd/selftests: clean up hugetlb allocation code Subsystem: mm/vmscan Gang Li <ligang.bdlg@bytedance.com>: vmscan: make drop_slab_node static Chen Wandun <chenwandun@huawei.com>: mm/page_isolation: unset migratetype directly for non Buddy page Subsystem: mm/mempolicy "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>: Patch series "mm: add new syscall set_mempolicy_home_node", v6: 
mm/mempolicy: use policy_node helper with MPOL_PREFERRED_MANY mm/mempolicy: add set_mempolicy_home_node syscall mm/mempolicy: wire up syscall set_mempolicy_home_node Randy Dunlap <rdunlap@infradead.org>: mm/mempolicy: fix all kernel-doc warnings Subsystem: mm/oom-kill Jann Horn <jannh@google.com>: mm, oom: OOM sysrq should always kill a process Subsystem: mm/hugetlbfs Sean Christopherson <seanjc@google.com>: hugetlbfs: fix off-by-one error in hugetlb_vmdelete_list() Subsystem: mm/migration Baolin Wang <baolin.wang@linux.alibaba.com>: Patch series "Improve the migration stats": mm: migrate: fix the return value of migrate_pages() mm: migrate: correct the hugetlb migration stats mm: compaction: fix the migration stats in trace_mm_compaction_migratepages() mm: migrate: support multiple target nodes demotion mm: migrate: add more comments for selecting target node randomly Huang Ying <ying.huang@intel.com>: mm/migrate: move node demotion code to near its user Colin Ian King <colin.i.king@gmail.com>: mm/migrate: remove redundant variables used in a for-loop Subsystem: mm/thp Anshuman Khandual <anshuman.khandual@arm.com>: mm/thp: drop unused trace events hugepage_[invalidate|splitting] Subsystem: mm/ksm Nanyong Sun <sunnanyong@huawei.com>: mm: ksm: fix use-after-free kasan report in ksm_might_need_to_copy Subsystem: mm/page-poison Naoya Horiguchi <naoya.horiguchi@nec.com>: Patch series "mm/hwpoison: fix unpoison_memory()", v4: mm/hwpoison: mf_mutex for soft offline and unpoison mm/hwpoison: remove MF_MSG_BUDDY_2ND and MF_MSG_POISONED_HUGE mm/hwpoison: fix unpoison_memory() Subsystem: mm/percpu Qi Zheng <zhengqi.arch@bytedance.com>: mm: memcg/percpu: account extra objcg space to memory cgroups Subsystem: mm/rmap Huang Ying <ying.huang@intel.com>: mm/rmap: fix potential batched TLB flush race Subsystem: mm/zswap Zhaoyu Liu <zackary.liu.pro@gmail.com>: zpool: remove the list of pools_head Subsystem: mm/zram Luis Chamberlain <mcgrof@kernel.org>: zram: use ATTRIBUTE_GROUPS 
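Two of the patches above ("mm/memcg: use struct_size() helper in kzalloc()" and "device-dax: use struct_size()") replace open-coded `sizeof(*p) + n * sizeof(elem)` arithmetic with the kernel's `struct_size()` helper, which computes the allocation size of a struct ending in a flexible array member. A minimal userspace sketch of the idea — the `STRUCT_SIZE` macro and `struct numa_stat` here are simplified stand-ins, not the kernel definitions, and the real helper additionally saturates to `SIZE_MAX` on multiplication overflow:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct numa_stat {
    size_t nr_entries;
    unsigned long entries[];   /* flexible array member */
};

/* Simplified stand-in for the kernel's struct_size(p, member, n):
 * header size plus n array elements, with the element type taken
 * from the struct definition itself. */
#define STRUCT_SIZE(p, member, n) \
    (sizeof(*(p)) + (n) * sizeof((p)->member[0]))

static struct numa_stat *numa_stat_alloc(size_t n)
{
    struct numa_stat *s = NULL;

    /* One allocation covers the header and all n array entries. */
    s = calloc(1, STRUCT_SIZE(s, entries, n));
    if (s)
        s->nr_entries = n;
    return s;
}
```

Compared with hand-writing `sizeof(struct numa_stat) + n * sizeof(unsigned long)`, the helper keeps the element size tied to the struct member, so the computation stays correct if the member's type ever changes.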
Subsystem: mm/cleanups Quanfa Fu <fuqf0919@gmail.com>: mm: fix some comment errors Ting Liu <liuting.0x7c00@bytedance.com>: mm: make some vars and functions static or __init Subsystem: mm/hmm Alistair Popple <apopple@nvidia.com>: mm/hmm.c: allow VM_MIXEDMAP to work with hmm_range_fault Subsystem: mm/damon Xin Hao <xhao@linux.alibaba.com>: Patch series "mm/damon: Do some small changes", v4: mm/damon: unified access_check function naming rules mm/damon: add 'age' of region tracepoint support mm/damon/core: use abs() instead of diff_of() mm/damon: remove some unneeded function definitions in damon.h Yihao Han <hanyihao@vivo.com>: mm/damon/vaddr: remove swap_ranges() and replace it with swap() Xin Hao <xhao@linux.alibaba.com>: mm/damon/schemes: add the validity judgment of thresholds mm/damon: move damon_rand() definition into damon.h mm/damon: modify damon_rand() macro to static inline function SeongJae Park <sj@kernel.org>: Patch series "mm/damon: Misc cleanups": mm/damon: convert macro functions to static inline functions Docs/admin-guide/mm/damon/usage: update for scheme quotas and watermarks Docs/admin-guide/mm/damon/usage: remove redundant information Docs/admin-guide/mm/damon/usage: mention tracepoint at the beginning Docs/admin-guide/mm/damon/usage: update for kdamond_pid and (mk|rm)_contexts mm/damon: remove a mistakenly added comment for a future feature Patch series "mm/damon/schemes: Extend stats for better online analysis and tuning": mm/damon/schemes: account scheme actions that successfully applied mm/damon/schemes: account how many times quota limit has exceeded mm/damon/reclaim: provide reclamation statistics Docs/admin-guide/mm/damon/reclaim: document statistics parameters mm/damon/dbgfs: support all DAMOS stats Docs/admin-guide/mm/damon/usage: update for schemes statistics Baolin Wang <baolin.wang@linux.alibaba.com>: mm/damon: add access checking for hugetlb pages Guoqing Jiang <guoqing.jiang@linux.dev>: mm/damon: move the implementation of 
damon_insert_region to damon.h SeongJae Park <sj@kernel.org>: Patch series "mm/damon: Hide unnecessary information disclosures": mm/damon/dbgfs: remove an unnecessary variable mm/damon/vaddr: use pr_debug() for damon_va_three_regions() failure logging mm/damon/vaddr: hide kernel pointer from damon_va_three_regions() failure log mm/damon: hide kernel pointer from tracepoint event Documentation/admin-guide/cgroup-v1/hugetlb.rst | 4 Documentation/admin-guide/cgroup-v2.rst | 11 Documentation/admin-guide/mm/damon/reclaim.rst | 25 Documentation/admin-guide/mm/damon/usage.rst | 235 +++++-- Documentation/admin-guide/mm/numa_memory_policy.rst | 16 Documentation/admin-guide/sysctl/vm.rst | 2 Documentation/filesystems/proc.rst | 6 Documentation/vm/arch_pgtable_helpers.rst | 20 Documentation/vm/index.rst | 2 Documentation/vm/page_migration.rst | 12 Documentation/vm/page_table_check.rst | 56 + Documentation/vm/vmalloced-kernel-stacks.rst | 153 ++++ MAINTAINERS | 9 arch/Kconfig | 3 arch/alpha/kernel/syscalls/syscall.tbl | 1 arch/alpha/mm/fault.c | 16 arch/arc/mm/fault.c | 3 arch/arm/mm/fault.c | 2 arch/arm/tools/syscall.tbl | 1 arch/arm64/include/asm/unistd.h | 2 arch/arm64/include/asm/unistd32.h | 2 arch/arm64/kernel/module.c | 4 arch/arm64/mm/fault.c | 6 arch/hexagon/mm/vm_fault.c | 8 arch/ia64/kernel/module.c | 6 arch/ia64/kernel/setup.c | 5 arch/ia64/kernel/syscalls/syscall.tbl | 1 arch/ia64/kernel/topology.c | 3 arch/ia64/kernel/uncached.c | 2 arch/ia64/mm/fault.c | 16 arch/m68k/kernel/syscalls/syscall.tbl | 1 arch/m68k/mm/fault.c | 18 arch/microblaze/kernel/syscalls/syscall.tbl | 1 arch/microblaze/mm/fault.c | 18 arch/mips/kernel/syscalls/syscall_n32.tbl | 1 arch/mips/kernel/syscalls/syscall_n64.tbl | 1 arch/mips/kernel/syscalls/syscall_o32.tbl | 1 arch/mips/mm/fault.c | 19 arch/nds32/mm/fault.c | 16 arch/nios2/mm/fault.c | 18 arch/openrisc/mm/fault.c | 18 arch/parisc/kernel/syscalls/syscall.tbl | 1 arch/parisc/mm/fault.c | 18 arch/powerpc/kernel/syscalls/syscall.tbl | 1 
arch/powerpc/mm/fault.c | 6 arch/riscv/mm/fault.c | 2 arch/s390/kernel/module.c | 5 arch/s390/kernel/syscalls/syscall.tbl | 1 arch/s390/mm/fault.c | 28 arch/sh/kernel/syscalls/syscall.tbl | 1 arch/sh/mm/fault.c | 18 arch/sparc/kernel/syscalls/syscall.tbl | 1 arch/sparc/mm/fault_32.c | 16 arch/sparc/mm/fault_64.c | 16 arch/um/kernel/trap.c | 8 arch/x86/Kconfig | 1 arch/x86/entry/syscalls/syscall_32.tbl | 1 arch/x86/entry/syscalls/syscall_64.tbl | 1 arch/x86/include/asm/pgtable.h | 31 - arch/x86/kernel/module.c | 7 arch/x86/mm/fault.c | 3 arch/xtensa/kernel/syscalls/syscall.tbl | 1 arch/xtensa/mm/fault.c | 17 drivers/block/zram/zram_drv.c | 11 drivers/dax/bus.c | 32 + drivers/dax/bus.h | 1 drivers/dax/device.c | 140 ++-- drivers/infiniband/sw/siw/siw_main.c | 7 drivers/of/fdt.c | 6 fs/ext4/extents.c | 8 fs/ext4/inline.c | 5 fs/ext4/page-io.c | 9 fs/f2fs/data.c | 4 fs/f2fs/gc.c | 5 fs/f2fs/inode.c | 4 fs/f2fs/node.c | 4 fs/f2fs/recovery.c | 6 fs/f2fs/segment.c | 9 fs/f2fs/super.c | 5 fs/hugetlbfs/inode.c | 7 fs/inode.c | 49 + fs/ioctl.c | 2 fs/ntfs/attrib.c | 2 fs/ocfs2/alloc.c | 2 fs/ocfs2/aops.c | 26 fs/ocfs2/cluster/masklog.c | 11 fs/ocfs2/dir.c | 2 fs/ocfs2/filecheck.c | 3 fs/ocfs2/journal.c | 6 fs/proc/task_mmu.c | 13 fs/squashfs/super.c | 33 + fs/userfaultfd.c | 8 fs/xfs/kmem.c | 3 fs/xfs/xfs_buf.c | 2 include/linux/ceph/libceph.h | 1 include/linux/damon.h | 93 +-- include/linux/fs.h | 1 include/linux/gfp.h | 12 include/linux/hugetlb.h | 4 include/linux/hugetlb_cgroup.h | 7 include/linux/kasan.h | 4 include/linux/kthread.h | 25 include/linux/memcontrol.h | 22 include/linux/mempolicy.h | 1 include/linux/memremap.h | 11 include/linux/mm.h | 76 -- include/linux/mm_inline.h | 136 ++++ include/linux/mm_types.h | 252 +++----- include/linux/mmzone.h | 9 include/linux/page-flags.h | 6 include/linux/page_idle.h | 1 include/linux/page_table_check.h | 147 ++++ include/linux/pgtable.h | 8 include/linux/sched/mm.h | 26 include/linux/swap.h | 8 include/linux/syscalls.h | 3 
include/linux/vm_event_item.h | 3 include/linux/vmalloc.h | 7 include/ras/ras_event.h | 2 include/trace/events/compaction.h | 24 include/trace/events/damon.h | 15 include/trace/events/thp.h | 35 - include/uapi/asm-generic/unistd.h | 5 include/uapi/linux/prctl.h | 3 kernel/dma/pool.c | 4 kernel/fork.c | 3 kernel/kthread.c | 1 kernel/rcu/rcutorture.c | 7 kernel/sys.c | 63 ++ kernel/sys_ni.c | 1 kernel/sysctl.c | 3 kernel/trace/ring_buffer.c | 7 kernel/trace/trace_hwlat.c | 6 kernel/trace/trace_osnoise.c | 3 lib/test_hmm.c | 24 lib/test_kasan.c | 30 mm/Kconfig | 14 mm/Kconfig.debug | 24 mm/Makefile | 1 mm/compaction.c | 7 mm/damon/core.c | 45 - mm/damon/dbgfs.c | 20 mm/damon/paddr.c | 24 mm/damon/prmtv-common.h | 4 mm/damon/reclaim.c | 46 + mm/damon/vaddr.c | 186 ++++-- mm/debug.c | 52 - mm/debug_vm_pgtable.c | 6 mm/dmapool.c | 2 mm/frontswap.c | 4 mm/gup.c | 31 - mm/hmm.c | 5 mm/huge_memory.c | 32 - mm/hugetlb.c | 6 mm/hugetlb_cgroup.c | 133 +++- mm/internal.h | 7 mm/kasan/quarantine.c | 11 mm/kasan/shadow.c | 9 mm/khugepaged.c | 23 mm/kmemleak.c | 21 mm/ksm.c | 5 mm/madvise.c | 510 ++++++++++------ mm/mapping_dirty_helpers.c | 1 mm/memcontrol.c | 44 - mm/memory-failure.c | 189 +++--- mm/memory.c | 12 mm/mempolicy.c | 95 ++- mm/memremap.c | 18 mm/migrate.c | 527 ++++++++++------- mm/mlock.c | 2 mm/mmap.c | 55 + mm/mmu_gather.c | 1 mm/mprotect.c | 2 mm/oom_kill.c | 30 mm/page_alloc.c | 198 ++++-- mm/page_counter.c | 1 mm/page_ext.c | 8 mm/page_isolation.c | 2 mm/page_owner.c | 4 mm/page_table_check.c | 270 ++++++++ mm/percpu-internal.h | 18 mm/percpu.c | 10 mm/pgtable-generic.c | 1 mm/rmap.c | 43 + mm/shmem.c | 91 ++ mm/slab.h | 5 mm/slab_common.c | 34 - mm/swap.c | 2 mm/swapfile.c | 46 - mm/truncate.c | 5 mm/userfaultfd.c | 5 mm/util.c | 15 mm/vmalloc.c | 75 +- mm/vmscan.c | 2 mm/vmstat.c | 3 mm/zpool.c | 12 net/ceph/buffer.c | 4 net/ceph/ceph_common.c | 27 net/ceph/crypto.c | 2 net/ceph/messenger.c | 2 net/ceph/messenger_v2.c | 2 net/ceph/osdmap.c | 12 
net/sunrpc/svc_xprt.c | 3 scripts/spelling.txt | 1 tools/testing/selftests/vm/charge_reserved_hugetlb.sh | 34 - tools/testing/selftests/vm/hmm-tests.c | 42 + tools/testing/selftests/vm/hugepage-mremap.c | 46 - tools/testing/selftests/vm/hugetlb_reparenting_test.sh | 21 tools/testing/selftests/vm/run_vmtests.sh | 2 tools/testing/selftests/vm/userfaultfd.c | 33 - tools/testing/selftests/vm/write_hugetlb_memory.sh | 2 211 files changed, 3980 insertions(+), 1759 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2021-12-31  4:12 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2021-12-31 4:12 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: mm-commits, linux-mm

2 patches, based on 4f3d93c6eaff6b84e43b63e0d7a119c5920e1020.

Subsystems affected by this patch series:
  mm/userfaultfd mm/damon

Subsystem: mm/userfaultfd

    Mike Kravetz <mike.kravetz@oracle.com>:
      userfaultfd/selftests: fix hugetlb area allocations

Subsystem: mm/damon

    SeongJae Park <sj@kernel.org>:
      mm/damon/dbgfs: fix 'struct pid' leaks in 'dbgfs_target_ids_write()'

 mm/damon/dbgfs.c                         |    9 +++++++--
 tools/testing/selftests/vm/userfaultfd.c |   16 ++++++++++------
 2 files changed, 17 insertions(+), 8 deletions(-)

^ permalink raw reply	[flat|nested] 349+ messages in thread
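The damon fix above ("fix 'struct pid' leaks in 'dbgfs_target_ids_write()'") addresses a common bug class: a reference taken with a get-style helper is not released on every exit path. The general shape, sketched with a toy reference counter — the `obj_get`/`obj_put` names are illustrative stand-ins, not the kernel's `get_pid()`/`put_pid()` API:

```c
#include <assert.h>

/* Toy refcounted object standing in for struct pid. */
struct obj { int refs; };

static void obj_get(struct obj *o) { o->refs++; }
static void obj_put(struct obj *o) { o->refs--; }

/* Buggy shape: the early-error return skips the put that pairs with
 * the get above, so the reference count stays elevated forever. */
static int use_obj_leaky(struct obj *o, int fail)
{
    obj_get(o);
    if (fail)
        return -1;          /* leak: missing obj_put(o) */
    obj_put(o);
    return 0;
}

/* Fixed shape: every exit path drops the reference it took. */
static int use_obj_fixed(struct obj *o, int fail)
{
    int ret = 0;

    obj_get(o);
    if (fail)
        ret = -1;
    obj_put(o);
    return ret;
}
```

In kernel code the fixed shape is usually expressed with a `goto out_put;` error path so the single put statement serves all returns.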
* incoming @ 2021-12-25  5:11 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2021-12-25 5:11 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: mm-commits, linux-mm

9 patches, based on bc491fb12513e79702c6f936c838f792b5389129.

Subsystems affected by this patch series:
  mm/kfence mm/mempolicy core-kernel MAINTAINERS mm/memory-failure
  mm/pagemap mm/pagealloc mm/damon mm/memory-failure

Subsystem: mm/kfence

    Baokun Li <libaokun1@huawei.com>:
      kfence: fix memory leak when cat kfence objects

Subsystem: mm/mempolicy

    Andrey Ryabinin <arbn@yandex-team.com>:
      mm: mempolicy: fix THP allocations escaping mempolicy restrictions

Subsystem: core-kernel

    Philipp Rudo <prudo@redhat.com>:
      kernel/crash_core: suppress unknown crashkernel parameter warning

Subsystem: MAINTAINERS

    Randy Dunlap <rdunlap@infradead.org>:
      MAINTAINERS: mark more list instances as moderated

Subsystem: mm/memory-failure

    Naoya Horiguchi <naoya.horiguchi@nec.com>:
      mm, hwpoison: fix condition in free hugetlb page path

Subsystem: mm/pagemap

    Hugh Dickins <hughd@google.com>:
      mm: delete unsafe BUG from page_cache_add_speculative()

Subsystem: mm/pagealloc

    Thibaut Sautereau <thibaut.sautereau@ssi.gouv.fr>:
      mm/page_alloc: fix __alloc_size attribute for alloc_pages_exact_nid

Subsystem: mm/damon

    SeongJae Park <sj@kernel.org>:
      mm/damon/dbgfs: protect targets destructions with kdamond_lock

Subsystem: mm/memory-failure

    Liu Shixin <liushixin2@huawei.com>:
      mm/hwpoison: clear MF_COUNT_INCREASED before retrying get_any_page()

 MAINTAINERS             |    4 ++--
 include/linux/gfp.h     |    2 +-
 include/linux/pagemap.h |    1 -
 kernel/crash_core.c     |   11 +++++++++++
 mm/damon/dbgfs.c        |    2 ++
 mm/kfence/core.c        |    1 +
 mm/memory-failure.c     |   14 +++++---------
 mm/mempolicy.c          |    3 +--
 8 files changed, 23 insertions(+), 15 deletions(-)

^ permalink raw reply	[flat|nested] 349+ messages in thread
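The pagealloc patch above corrects the `__alloc_size` attribute placement on `alloc_pages_exact_nid()`. That kernel annotation wraps the GCC/Clang `alloc_size` function attribute, which tells the compiler which parameter carries the byte size of the returned buffer so object-size-based checks can work. A userspace sketch under that assumption — `xmalloc` is a hypothetical wrapper, not a kernel or libc function:

```c
#include <stdlib.h>

/* alloc_size(1): parameter 1 is the byte size of the returned buffer.
 * (GCC/Clang extension; the kernel spells this __alloc_size(1).) */
__attribute__((malloc, alloc_size(1)))
static void *xmalloc(size_t n)
{
    void *p = malloc(n);

    if (!p)
        abort();
    return p;
}
```

With the attribute pointing at the right parameter, the compiler can derive the object size at call sites and flag out-of-bounds accesses through fortified string/memory functions; pointing it at the wrong parameter (the bug class the fix addresses) silently disables that checking without changing runtime behavior.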
* incoming @ 2021-12-10 22:45 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2021-12-10 22:45 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm 21 patches, based on c741e49150dbb0c0aebe234389f4aa8b47958fa8. Subsystems affected by this patch series: mm/mlock MAINTAINERS mailmap mm/pagecache mm/damon mm/slub mm/memcg mm/hugetlb mm/pagecache Subsystem: mm/mlock Drew DeVault <sir@cmpwn.com>: Increase default MLOCK_LIMIT to 8 MiB Subsystem: MAINTAINERS Dave Young <dyoung@redhat.com>: MAINTAINERS: update kdump maintainers Subsystem: mailmap Guo Ren <guoren@linux.alibaba.com>: mailmap: update email address for Guo Ren Subsystem: mm/pagecache "Matthew Wilcox (Oracle)" <willy@infradead.org>: filemap: remove PageHWPoison check from next_uptodate_page() Subsystem: mm/damon SeongJae Park <sj@kernel.org>: Patch series "mm/damon: Fix fake /proc/loadavg reports", v3: timers: implement usleep_idle_range() mm/damon/core: fix fake load reports due to uninterruptible sleeps Patch series "mm/damon: Trivial fixups and improvements": mm/damon/core: use better timer mechanisms selection threshold mm/damon/dbgfs: remove an unnecessary error message mm/damon/core: remove unnecessary error messages mm/damon/vaddr: remove an unnecessary warning message mm/damon/vaddr-test: split a test function having >1024 bytes frame size mm/damon/vaddr-test: remove unnecessary variables selftests/damon: skip test if DAMON is running selftests/damon: test DAMON enabling with empty target_ids case selftests/damon: test wrong DAMOS condition ranges input selftests/damon: test debugfs file reads/writes with huge count selftests/damon: split test cases Subsystem: mm/slub Gerald Schaefer <gerald.schaefer@linux.ibm.com>: mm/slub: fix endianness bug for alloc/free_traces attributes Subsystem: mm/memcg Waiman Long <longman@redhat.com>: mm/memcg: relocate mod_objcg_mlstate(), get_obj_stock() and put_obj_stock() Subsystem: mm/hugetlb Zhenguo Yao 
<yaozhenguo1@gmail.com>: hugetlbfs: fix issue of preallocation of gigantic pages can't work Subsystem: mm/pagecache Manjong Lee <mj0123.lee@samsung.com>: mm: bdi: initialize bdi_min_ratio when bdi is unregistered .mailmap | 2 MAINTAINERS | 2 include/linux/delay.h | 14 include/uapi/linux/resource.h | 13 kernel/time/timer.c | 16 - mm/backing-dev.c | 7 mm/damon/core.c | 20 - mm/damon/dbgfs.c | 4 mm/damon/vaddr-test.h | 85 ++--- mm/damon/vaddr.c | 1 mm/filemap.c | 2 mm/hugetlb.c | 2 mm/memcontrol.c | 106 +++---- mm/slub.c | 15 - tools/testing/selftests/damon/.gitignore | 2 tools/testing/selftests/damon/Makefile | 7 tools/testing/selftests/damon/_debugfs_common.sh | 52 +++ tools/testing/selftests/damon/debugfs_attrs.sh | 149 ++-------- tools/testing/selftests/damon/debugfs_empty_targets.sh | 13 tools/testing/selftests/damon/debugfs_huge_count_read_write.sh | 22 + tools/testing/selftests/damon/debugfs_schemes.sh | 19 + tools/testing/selftests/damon/debugfs_target_ids.sh | 19 + tools/testing/selftests/damon/huge_count_read_write.c | 39 ++ 23 files changed, 363 insertions(+), 248 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
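The slub patch above ("fix endianness bug for alloc/free_traces attributes") fixes an instance of a classic bug class: interpreting a multi-byte field with the wrong byte order. A small, kernel-independent illustration of how the same four bytes decode to different 32-bit values under little- and big-endian interpretation:

```c
#include <stdint.h>

/* Decode 4 bytes as a 32-bit value with an explicit byte order,
 * independent of the host CPU's endianness. */
static uint32_t load_le32(const uint8_t *p)
{
    return (uint32_t)p[0] | (uint32_t)p[1] << 8 |
           (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
}

static uint32_t load_be32(const uint8_t *p)
{
    return (uint32_t)p[0] << 24 | (uint32_t)p[1] << 16 |
           (uint32_t)p[2] << 8 | (uint32_t)p[3];
}
```

Code that assumes one layout while the data uses the other reads 0x01020304 as 0x04030201; explicit loads like these (or, in the kernel, the `cpu_to_le32()`/`be32_to_cpu()` family) keep decoding correct on both little- and big-endian machines such as the s390 systems where the slub bug surfaced.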
* incoming @ 2021-11-20 0:42 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2021-11-20 0:42 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits 15 patches, based on a90af8f15bdc9449ee2d24e1d73fa3f7e8633f81. Subsystems affected by this patch series: mm/swap ipc mm/slab-generic hexagon mm/kmemleak mm/hugetlb mm/kasan mm/damon mm/highmem proc Subsystem: mm/swap Matthew Wilcox <willy@infradead.org>: mm/swap.c:put_pages_list(): reinitialise the page list Subsystem: ipc Alexander Mikhalitsyn <alexander.mikhalitsyn@virtuozzo.com>: Patch series "shm: shm_rmid_forced feature fixes": ipc: WARN if trying to remove ipc object which is absent shm: extend forced shm destroy to support objects from several IPC nses Subsystem: mm/slab-generic Yunfeng Ye <yeyunfeng@huawei.com>: mm: emit the "free" trace report before freeing memory in kmem_cache_free() Subsystem: hexagon Nathan Chancellor <nathan@kernel.org>: Patch series "Fixes for ARCH=hexagon allmodconfig", v2: hexagon: export raw I/O routines for modules hexagon: clean up timer-regs.h hexagon: ignore vmlinux.lds Subsystem: mm/kmemleak Rustam Kovhaev <rkovhaev@gmail.com>: mm: kmemleak: slob: respect SLAB_NOLEAKTRACE flag Subsystem: mm/hugetlb Bui Quang Minh <minhquangbui99@gmail.com>: hugetlb: fix hugetlb cgroup refcounting during mremap Mina Almasry <almasrymina@google.com>: hugetlb, userfaultfd: fix reservation restore on userfaultfd error Subsystem: mm/kasan Kees Cook <keescook@chromium.org>: kasan: test: silence intentional read overflow warnings Subsystem: mm/damon SeongJae Park <sj@kernel.org>: Patch series "DAMON fixes": mm/damon/dbgfs: use '__GFP_NOWARN' for user-specified size buffer allocation mm/damon/dbgfs: fix missed use of damon_dbgfs_lock Subsystem: mm/highmem Ard Biesheuvel <ardb@kernel.org>: kmap_local: don't assume kmap PTEs are linear arrays in memory Subsystem: proc David Hildenbrand <david@redhat.com>: proc/vmcore: fix clearing user buffer by properly using 
clear_user() arch/arm/Kconfig | 1 arch/hexagon/include/asm/timer-regs.h | 26 ---- arch/hexagon/include/asm/timex.h | 3 arch/hexagon/kernel/.gitignore | 1 arch/hexagon/kernel/time.c | 12 +- arch/hexagon/lib/io.c | 4 fs/proc/vmcore.c | 20 ++- include/linux/hugetlb_cgroup.h | 12 ++ include/linux/ipc_namespace.h | 15 ++ include/linux/sched/task.h | 2 ipc/shm.c | 189 +++++++++++++++++++++++++--------- ipc/util.c | 6 - lib/test_kasan.c | 2 mm/Kconfig | 3 mm/damon/dbgfs.c | 20 ++- mm/highmem.c | 32 +++-- mm/hugetlb.c | 11 + mm/slab.c | 3 mm/slab.h | 2 mm/slob.c | 3 mm/slub.c | 2 mm/swap.c | 1 22 files changed, 254 insertions(+), 116 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
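The slab-generic patch above ("emit the \"free\" trace report before freeing memory in kmem_cache_free()") enforces an ordering discipline: anything a trace event needs from an object must be captured before the object is released, because after the free the memory may already be reused. The same discipline in plain userspace C — `trace_free` here is a stand-in for a trace event, not a real API:

```c
#include <stdlib.h>

static void *last_logged;

/* Stand-in for a trace event: record which object is being freed. */
static void trace_free(void *obj)
{
    last_logged = obj;
}

/* Correct ordering: report first, then release. Logging or reading
 * *obj after free(obj) would be a use-after-free on the freed memory. */
static void traced_free(void *obj)
{
    trace_free(obj);
    free(obj);
}
```

The pointer value itself stays valid to compare after the free; it is dereferencing the freed object (as a trace callback inspecting its contents might) that the reordering prevents.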
* incoming @ 2021-11-11 4:32 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2021-11-11 4:32 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits The post-linux-next material. 7 patches, based on debe436e77c72fcee804fb867f275e6d31aa999c. Subsystems affected by this patch series: mm/debug mm/slab-generic mm/migration mm/memcg mm/kasan Subsystem: mm/debug Yixuan Cao <caoyixuan2019@email.szu.edu.cn>: mm/page_owner.c: modify the type of argument "order" in some functions Subsystem: mm/slab-generic Ingo Molnar <mingo@kernel.org>: mm: allow only SLUB on PREEMPT_RT Subsystem: mm/migration Baolin Wang <baolin.wang@linux.alibaba.com>: mm: migrate: simplify the file-backed pages validation when migrating its mapping Alistair Popple <apopple@nvidia.com>: mm/migrate.c: remove MIGRATE_PFN_LOCKED Subsystem: mm/memcg Christoph Hellwig <hch@lst.de>: Patch series "unexport memcg locking helpers": mm: unexport folio_memcg_{,un}lock mm: unexport {,un}lock_page_memcg Subsystem: mm/kasan Kuan-Ying Lee <Kuan-Ying.Lee@mediatek.com>: kasan: add kasan mode messages when kasan init Documentation/vm/hmm.rst | 2 arch/arm64/mm/kasan_init.c | 2 arch/powerpc/kvm/book3s_hv_uvmem.c | 4 drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 2 drivers/gpu/drm/nouveau/nouveau_dmem.c | 4 include/linux/migrate.h | 1 include/linux/page_owner.h | 12 +- init/Kconfig | 2 lib/test_hmm.c | 5 - mm/kasan/hw_tags.c | 14 ++ mm/kasan/sw_tags.c | 2 mm/memcontrol.c | 4 mm/migrate.c | 151 +++++-------------------------- mm/page_owner.c | 6 - 14 files changed, 61 insertions(+), 150 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2021-11-09 2:30 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2021-11-09 2:30 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits 87 patches, based on 8bb7eca972ad531c9b149c0a51ab43a417385813, plus previously sent material. Subsystems affected by this patch series: mm/pagecache mm/hugetlb procfs misc MAINTAINERS lib checkpatch binfmt kallsyms ramfs init codafs nilfs2 hfs crash_dump signals seq_file fork sysvfs kcov gdb resource selftests ipc Subsystem: mm/pagecache Johannes Weiner <hannes@cmpxchg.org>: vfs: keep inodes with page cache off the inode shrinker LRU Subsystem: mm/hugetlb zhangyiru <zhangyiru3@huawei.com>: mm,hugetlb: remove mlock ulimit for SHM_HUGETLB Subsystem: procfs Florian Weimer <fweimer@redhat.com>: procfs: do not list TID 0 in /proc/<pid>/task David Hildenbrand <david@redhat.com>: x86/xen: update xen_oldmem_pfn_is_ram() documentation x86/xen: simplify xen_oldmem_pfn_is_ram() x86/xen: print a warning when HVMOP_get_mem_type fails proc/vmcore: let pfn_is_ram() return a bool proc/vmcore: convert oldmem_pfn_is_ram callback to more generic vmcore callbacks virtio-mem: factor out hotplug specifics from virtio_mem_init() into virtio_mem_init_hotplug() virtio-mem: factor out hotplug specifics from virtio_mem_probe() into virtio_mem_init_hotplug() virtio-mem: factor out hotplug specifics from virtio_mem_remove() into virtio_mem_deinit_hotplug() virtio-mem: kdump mode to sanitize /proc/vmcore access Stephen Brennan <stephen.s.brennan@oracle.com>: proc: allow pid_revalidate() during LOOKUP_RCU Subsystem: misc Andy Shevchenko <andriy.shevchenko@linux.intel.com>: Patch series "kernel.h further split", v5: kernel.h: drop unneeded <linux/kernel.h> inclusion from other headers kernel.h: split out container_of() and typeof_member() macros include/kunit/test.h: replace kernel.h with the necessary inclusions include/linux/list.h: replace kernel.h with the necessary inclusions include/linux/llist.h: 
replace kernel.h with the necessary inclusions include/linux/plist.h: replace kernel.h with the necessary inclusions include/media/media-entity.h: replace kernel.h with the necessary inclusions include/linux/delay.h: replace kernel.h with the necessary inclusions include/linux/sbitmap.h: replace kernel.h with the necessary inclusions include/linux/radix-tree.h: replace kernel.h with the necessary inclusions include/linux/generic-radix-tree.h: replace kernel.h with the necessary inclusions Stephen Rothwell <sfr@canb.auug.org.au>: kernel.h: split out instruction pointer accessors Rasmus Villemoes <linux@rasmusvillemoes.dk>: linux/container_of.h: switch to static_assert Colin Ian King <colin.i.king@googlemail.com>: mailmap: update email address for Colin King Subsystem: MAINTAINERS Kees Cook <keescook@chromium.org>: MAINTAINERS: add "exec & binfmt" section with myself and Eric Lukas Bulwahn <lukas.bulwahn@gmail.com>: Patch series "Rectify file references for dt-bindings in MAINTAINERS", v5: MAINTAINERS: rectify entry for ARM/TOSHIBA VISCONTI ARCHITECTURE MAINTAINERS: rectify entry for HIKEY960 ONBOARD USB GPIO HUB DRIVER MAINTAINERS: rectify entry for INTEL KEEM BAY DRM DRIVER MAINTAINERS: rectify entry for ALLWINNER HARDWARE SPINLOCK SUPPORT Subsystem: lib Imran Khan <imran.f.khan@oracle.com>: Patch series "lib, stackdepot: check stackdepot handle before accessing slabs", v2: lib, stackdepot: check stackdepot handle before accessing slabs lib, stackdepot: add helper to print stack entries lib, stackdepot: add helper to print stack entries into buffer Lucas De Marchi <lucas.demarchi@intel.com>: include/linux/string_helpers.h: add linux/string.h for strlen() Alexey Dobriyan <adobriyan@gmail.com>: lib: uninline simple_strntoull() as well Thomas Gleixner <tglx@linutronix.de>: mm/scatterlist: replace the !preemptible warning in sg_miter_stop() Subsystem: checkpatch Rikard Falkeborn <rikard.falkeborn@gmail.com>: const_structs.checkpatch: add a few sound ops structs Joe 
Perches <joe@perches.com>: checkpatch: improve EXPORT_SYMBOL test for EXPORT_SYMBOL_NS uses Peter Ujfalusi <peter.ujfalusi@linux.intel.com>: checkpatch: get default codespell dictionary path from package location Subsystem: binfmt Kees Cook <keescook@chromium.org>: binfmt_elf: reintroduce using MAP_FIXED_NOREPLACE Alexey Dobriyan <adobriyan@gmail.com>: ELF: simplify STACK_ALLOC macro Subsystem: kallsyms Kefeng Wang <wangkefeng.wang@huawei.com>: Patch series "sections: Unify kernel sections range check and use", v4: kallsyms: remove arch specific text and data check kallsyms: fix address-checks for kernel related range sections: move and rename core_kernel_data() to is_kernel_core_data() sections: move is_kernel_inittext() into sections.h x86: mm: rename __is_kernel_text() to is_x86_32_kernel_text() sections: provide internal __is_kernel() and __is_kernel_text() helper mm: kasan: use is_kernel() helper extable: use is_kernel_text() helper powerpc/mm: use core_kernel_text() helper microblaze: use is_kernel_text() helper alpha: use is_kernel_text() helper Subsystem: ramfs yangerkun <yangerkun@huawei.com>: ramfs: fix mount source show for ramfs Subsystem: init Andrew Halaney <ahalaney@redhat.com>: init: make unknown command line param message clearer Subsystem: codafs Jan Harkes <jaharkes@cs.cmu.edu>: Patch series "Coda updates for -next": coda: avoid NULL pointer dereference from a bad inode coda: check for async upcall request using local state Alex Shi <alex.shi@linux.alibaba.com>: coda: remove err which no one care Jan Harkes <jaharkes@cs.cmu.edu>: coda: avoid flagging NULL inodes coda: avoid hidden code duplication in rename coda: avoid doing bad things on inode type changes during revalidation Xiyu Yang <xiyuyang19@fudan.edu.cn>: coda: convert from atomic_t to refcount_t on coda_vm_ops->refcnt Jing Yangyang <jing.yangyang@zte.com.cn>: coda: use vmemdup_user to replace the open code Jan Harkes <jaharkes@cs.cmu.edu>: coda: bump module version to 7.2 Subsystem: 
nilfs2 Qing Wang <wangqing@vivo.com>: Patch series "nilfs2 updates": nilfs2: replace snprintf in show functions with sysfs_emit Ryusuke Konishi <konishi.ryusuke@gmail.com>: nilfs2: remove filenames from file comments Subsystem: hfs Arnd Bergmann <arnd@arndb.de>: hfs/hfsplus: use WARN_ON for sanity check Subsystem: crash_dump Changcheng Deng <deng.changcheng@zte.com.cn>: crash_dump: fix boolreturn.cocci warning Ye Guojin <ye.guojin@zte.com.cn>: crash_dump: remove duplicate include in crash_dump.h Subsystem: signals Ye Guojin <ye.guojin@zte.com.cn>: signal: remove duplicate include in signal.h Subsystem: seq_file Andy Shevchenko <andriy.shevchenko@linux.intel.com>: seq_file: move seq_escape() to a header Muchun Song <songmuchun@bytedance.com>: seq_file: fix passing wrong private data Subsystem: fork Ran Xiaokai <ran.xiaokai@zte.com.cn>: kernel/fork.c: unshare(): use swap() to make code cleaner Subsystem: sysvfs Pavel Skripkin <paskripkin@gmail.com>: sysv: use BUILD_BUG_ON instead of runtime check Subsystem: kcov Sebastian Andrzej Siewior <bigeasy@linutronix.de>: Patch series "kcov: PREEMPT_RT fixup + misc", v2: Documentation/kcov: include types.h in the example Documentation/kcov: define `ip' in the example kcov: allocate per-CPU memory on the relevant node kcov: avoid enable+disable interrupts if !in_task() kcov: replace local_irq_save() with a local_lock_t Subsystem: gdb Douglas Anderson <dianders@chromium.org>: scripts/gdb: handle split debug for vmlinux Subsystem: resource David Hildenbrand <david@redhat.com>: Patch series "virtio-mem: disallow mapping virtio-mem memory via /dev/mem", v5: kernel/resource: clean up and optimize iomem_is_exclusive() kernel/resource: disallow access to exclusive system RAM regions virtio-mem: disallow mapping virtio-mem memory via /dev/mem Subsystem: selftests SeongJae Park <sjpark@amazon.de>: selftests/kselftest/runner/run_one(): allow running non-executable files Subsystem: ipc Michal Clapinski <mclapinski@google.com>: ipc: check 
checkpoint_restore_ns_capable() to modify C/R proc files Manfred Spraul <manfred@colorfullife.com>: ipc/ipc_sysctl.c: remove fallback for !CONFIG_PROC_SYSCTL .mailmap | 2 Documentation/dev-tools/kcov.rst | 5 MAINTAINERS | 21 + arch/alpha/kernel/traps.c | 4 arch/microblaze/mm/pgtable.c | 3 arch/powerpc/mm/pgtable_32.c | 7 arch/riscv/lib/delay.c | 4 arch/s390/include/asm/facility.h | 4 arch/x86/kernel/aperture_64.c | 13 arch/x86/kernel/unwind_orc.c | 2 arch/x86/mm/init_32.c | 14 arch/x86/xen/mmu_hvm.c | 39 -- drivers/gpu/drm/drm_dp_mst_topology.c | 5 drivers/gpu/drm/drm_mm.c | 5 drivers/gpu/drm/i915/i915_vma.c | 5 drivers/gpu/drm/i915/intel_runtime_pm.c | 20 - drivers/media/dvb-frontends/cxd2880/cxd2880_common.h | 1 drivers/virtio/Kconfig | 1 drivers/virtio/virtio_mem.c | 321 +++++++++++++------ fs/binfmt_elf.c | 33 + fs/coda/cnode.c | 13 fs/coda/coda_linux.c | 39 +- fs/coda/coda_linux.h | 6 fs/coda/dir.c | 20 - fs/coda/file.c | 12 fs/coda/psdev.c | 14 fs/coda/upcall.c | 3 fs/hfs/inode.c | 6 fs/hfsplus/inode.c | 12 fs/hugetlbfs/inode.c | 23 - fs/inode.c | 46 +- fs/internal.h | 1 fs/nilfs2/alloc.c | 2 fs/nilfs2/alloc.h | 2 fs/nilfs2/bmap.c | 2 fs/nilfs2/bmap.h | 2 fs/nilfs2/btnode.c | 2 fs/nilfs2/btnode.h | 2 fs/nilfs2/btree.c | 2 fs/nilfs2/btree.h | 2 fs/nilfs2/cpfile.c | 2 fs/nilfs2/cpfile.h | 2 fs/nilfs2/dat.c | 2 fs/nilfs2/dat.h | 2 fs/nilfs2/dir.c | 2 fs/nilfs2/direct.c | 2 fs/nilfs2/direct.h | 2 fs/nilfs2/file.c | 2 fs/nilfs2/gcinode.c | 2 fs/nilfs2/ifile.c | 2 fs/nilfs2/ifile.h | 2 fs/nilfs2/inode.c | 2 fs/nilfs2/ioctl.c | 2 fs/nilfs2/mdt.c | 2 fs/nilfs2/mdt.h | 2 fs/nilfs2/namei.c | 2 fs/nilfs2/nilfs.h | 2 fs/nilfs2/page.c | 2 fs/nilfs2/page.h | 2 fs/nilfs2/recovery.c | 2 fs/nilfs2/segbuf.c | 2 fs/nilfs2/segbuf.h | 2 fs/nilfs2/segment.c | 2 fs/nilfs2/segment.h | 2 fs/nilfs2/sufile.c | 2 fs/nilfs2/sufile.h | 2 fs/nilfs2/super.c | 2 fs/nilfs2/sysfs.c | 78 ++-- fs/nilfs2/sysfs.h | 2 fs/nilfs2/the_nilfs.c | 2 fs/nilfs2/the_nilfs.h | 2 fs/proc/base.c | 21 - 
fs/proc/vmcore.c | 109 ++++-- fs/ramfs/inode.c | 11 fs/seq_file.c | 16 fs/sysv/super.c | 6 include/asm-generic/sections.h | 75 +++- include/kunit/test.h | 13 include/linux/bottom_half.h | 3 include/linux/container_of.h | 52 ++- include/linux/crash_dump.h | 30 + include/linux/delay.h | 2 include/linux/fs.h | 1 include/linux/fwnode.h | 1 include/linux/generic-radix-tree.h | 3 include/linux/hugetlb.h | 6 include/linux/instruction_pointer.h | 8 include/linux/kallsyms.h | 21 - include/linux/kernel.h | 39 -- include/linux/list.h | 4 include/linux/llist.h | 4 include/linux/pagemap.h | 50 ++ include/linux/plist.h | 5 include/linux/radix-tree.h | 4 include/linux/rwsem.h | 1 include/linux/sbitmap.h | 11 include/linux/seq_file.h | 19 + include/linux/signal.h | 1 include/linux/smp.h | 1 include/linux/spinlock.h | 1 include/linux/stackdepot.h | 5 include/linux/string_helpers.h | 1 include/media/media-entity.h | 3 init/main.c | 4 ipc/ipc_sysctl.c | 42 +- ipc/shm.c | 8 kernel/extable.c | 33 - kernel/fork.c | 9 kernel/kcov.c | 40 +- kernel/locking/lockdep.c | 3 kernel/resource.c | 54 ++- kernel/trace/ftrace.c | 2 lib/scatterlist.c | 11 lib/stackdepot.c | 46 ++ lib/vsprintf.c | 3 mm/Kconfig | 7 mm/filemap.c | 8 mm/kasan/report.c | 17 - mm/memfd.c | 4 mm/mmap.c | 3 mm/page_owner.c | 18 - mm/truncate.c | 19 + mm/vmscan.c | 7 mm/workingset.c | 10 net/sysctl_net.c | 2 scripts/checkpatch.pl | 33 + scripts/const_structs.checkpatch | 4 scripts/gdb/linux/symbols.py | 3 tools/testing/selftests/kselftest/runner.sh | 28 + tools/testing/selftests/proc/.gitignore | 1 tools/testing/selftests/proc/Makefile | 2 tools/testing/selftests/proc/proc-tid0.c | 81 ++++ 132 files changed, 1206 insertions(+), 681 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2021-11-05 20:34 Andrew Morton From: Andrew Morton @ 2021-11-05 20:34 UTC To: Linus Torvalds; +Cc: mm-commits, linux-mm 262 patches, based on 8bb7eca972ad531c9b149c0a51ab43a417385813 Subsystems affected by this patch series: scripts ocfs2 vfs mm/slab-generic mm/slab mm/slub mm/kconfig mm/dax mm/kasan mm/debug mm/pagecache mm/gup mm/swap mm/memcg mm/pagemap mm/mprotect mm/mremap mm/iomap mm/tracing mm/vmalloc mm/pagealloc mm/memory-failure mm/hugetlb mm/userfaultfd mm/vmscan mm/tools mm/memblock mm/oom-kill mm/hugetlbfs mm/migration mm/thp mm/readahead mm/nommu mm/ksm mm/vmstat mm/madvise mm/memory-hotplug mm/rmap mm/zsmalloc mm/highmem mm/zram mm/cleanups mm/kfence mm/damon Subsystem: scripts Colin Ian King <colin.king@canonical.com>: scripts/spelling.txt: add more spellings to spelling.txt Sven Eckelmann <sven@narfation.org>: scripts/spelling.txt: fix "mistake" version of "synchronization" weidonghui <weidonghui@allwinnertech.com>: scripts/decodecode: fix faulting instruction no print when opps.file is DOS format Subsystem: ocfs2 Chenyuan Mi <cymi20@fudan.edu.cn>: ocfs2: fix handle refcount leak in two exception handling paths Valentin Vidic <vvidic@valentin-vidic.from.hr>: ocfs2: cleanup journal init and shutdown Colin Ian King <colin.king@canonical.com>: ocfs2/dlm: remove redundant assignment of variable ret Jan Kara <jack@suse.cz>: Patch series "ocfs2: Truncate data corruption fix": ocfs2: fix data corruption on truncate ocfs2: do not zero pages beyond i_size Subsystem: vfs Arnd Bergmann <arnd@arndb.de>: fs/posix_acl.c: avoid -Wempty-body warning Jia He <justin.he@arm.com>: d_path: fix Kernel doc validator complaining Subsystem: mm/slab-generic "Matthew Wilcox (Oracle)" <willy@infradead.org>: mm: move kvmalloc-related functions to slab.h Subsystem: mm/slab Shi Lei <shi_lei@massclouds.com>: mm/slab.c: remove useless lines in enable_cpucache() Subsystem: mm/slub Kefeng Wang 
<wangkefeng.wang@huawei.com>: slub: add back check for free nonslab objects Vlastimil Babka <vbabka@suse.cz>: mm, slub: change percpu partial accounting from objects to pages mm/slub: increase default cpu partial list sizes Hyeonggon Yoo <42.hyeyoo@gmail.com>: mm, slub: use prefetchw instead of prefetch Subsystem: mm/kconfig Sebastian Andrzej Siewior <bigeasy@linutronix.de>: mm: disable NUMA_BALANCING_DEFAULT_ENABLED and TRANSPARENT_HUGEPAGE on PREEMPT_RT Subsystem: mm/dax Christoph Hellwig <hch@lst.de>: mm: don't include <linux/dax.h> in <linux/mempolicy.h> Subsystem: mm/kasan Marco Elver <elver@google.com>: Patch series "stackdepot, kasan, workqueue: Avoid expanding stackdepot slabs when holding raw_spin_lock", v2: lib/stackdepot: include gfp.h lib/stackdepot: remove unused function argument lib/stackdepot: introduce __stack_depot_save() kasan: common: provide can_alloc in kasan_save_stack() kasan: generic: introduce kasan_record_aux_stack_noalloc() workqueue, kasan: avoid alloc_pages() when recording stack "Matthew Wilcox (Oracle)" <willy@infradead.org>: kasan: fix tag for large allocations when using CONFIG_SLAB Peter Collingbourne <pcc@google.com>: kasan: test: add memcpy test that avoids out-of-bounds write Subsystem: mm/debug Peter Xu <peterx@redhat.com>: Patch series "mm/smaps: Fixes and optimizations on shmem swap handling": mm/smaps: fix shmem pte hole swap calculation mm/smaps: use vma->vm_pgoff directly when counting partial swap mm/smaps: simplify shmem handling of pte holes Guo Ren <guoren@linux.alibaba.com>: mm: debug_vm_pgtable: don't use __P000 directly Kees Cook <keescook@chromium.org>: kasan: test: bypass __alloc_size checks Patch series "Add __alloc_size()", v3: rapidio: avoid bogus __alloc_size warning Compiler Attributes: add __alloc_size() for better bounds checking slab: clean up function prototypes slab: add __alloc_size attributes for better bounds checking mm/kvmalloc: add __alloc_size attributes for better bounds checking mm/vmalloc: add 
__alloc_size attributes for better bounds checking mm/page_alloc: add __alloc_size attributes for better bounds checking percpu: add __alloc_size attributes for better bounds checking Yinan Zhang <zhangyinan2019@email.szu.edu.cn>: mm/page_ext.c: fix a comment Subsystem: mm/pagecache David Howells <dhowells@redhat.com>: mm: stop filemap_read() from grabbing a superfluous page Christoph Hellwig <hch@lst.de>: Patch series "simplify bdi unregistation": mm: export bdi_unregister mtd: call bdi_unregister explicitly fs: explicitly unregister per-superblock BDIs mm: don't automatically unregister bdis mm: simplify bdi refcounting Jens Axboe <axboe@kernel.dk>: mm: don't read i_size of inode unless we need it "Matthew Wilcox (Oracle)" <willy@infradead.org>: mm/filemap.c: remove bogus VM_BUG_ON Jens Axboe <axboe@kernel.dk>: mm: move more expensive part of XA setup out of mapping check Subsystem: mm/gup John Hubbard <jhubbard@nvidia.com>: mm/gup: further simplify __gup_device_huge() Subsystem: mm/swap Xu Wang <vulab@iscas.ac.cn>: mm/swapfile: remove needless request_queue NULL pointer check Rafael Aquini <aquini@redhat.com>: mm/swapfile: fix an integer overflow in swap_show() "Matthew Wilcox (Oracle)" <willy@infradead.org>: mm: optimise put_pages_list() Subsystem: mm/memcg Peter Xu <peterx@redhat.com>: mm/memcg: drop swp_entry_t* in mc_handle_file_pte() Shakeel Butt <shakeelb@google.com>: memcg: flush stats only if updated memcg: unify memcg stat flushing Waiman Long <longman@redhat.com>: mm/memcg: remove obsolete memcg_free_kmem() Len Baker <len.baker@gmx.com>: mm/list_lru.c: prefer struct_size over open coded arithmetic Shakeel Butt <shakeelb@google.com>: memcg, kmem: further deprecate kmem.limit_in_bytes Muchun Song <songmuchun@bytedance.com>: mm: list_lru: remove holding lru lock mm: list_lru: fix the return value of list_lru_count_one() mm: memcontrol: remove kmemcg_id reparenting mm: memcontrol: remove the kmem states mm: list_lru: only add memcg-aware lrus to the global 
lru list Vasily Averin <vvs@virtuozzo.com>: Patch series "memcg: prohibit unconditional exceeding the limit of dying tasks", v3: mm, oom: pagefault_out_of_memory: don't force global OOM for dying tasks Michal Hocko <mhocko@suse.com>: mm, oom: do not trigger out_of_memory from the #PF Vasily Averin <vvs@virtuozzo.com>: memcg: prohibit unconditional exceeding the limit of dying tasks Subsystem: mm/pagemap Peng Liu <liupeng256@huawei.com>: mm/mmap.c: fix a data race of mm->total_vm Rolf Eike Beer <eb@emlix.com>: mm: use __pfn_to_section() instead of open coding it Amit Daniel Kachhap <amit.kachhap@arm.com>: mm/memory.c: avoid unnecessary kernel/user pointer conversion Nadav Amit <namit@vmware.com>: mm/memory.c: use correct VMA flags when freeing page-tables Peter Xu <peterx@redhat.com>: Patch series "mm: A few cleanup patches around zap, shmem and uffd", v4: mm/shmem: unconditionally set pte dirty in mfill_atomic_install_pte mm: clear vmf->pte after pte_unmap_same() returns mm: drop first_index/last_index in zap_details mm: add zap_skip_check_mapping() helper Qi Zheng <zhengqi.arch@bytedance.com>: Patch series "Do some code cleanups related to mm", v3: mm: introduce pmd_install() helper mm: remove redundant smp_wmb() Tiberiu A Georgescu <tiberiu.georgescu@nutanix.com>: Documentation: update pagemap with shmem exceptions Nicholas Piggin <npiggin@gmail.com>: Patch series "shoot lazy tlbs", v4: lazy tlb: introduce lazy mm refcount helper functions lazy tlb: allow lazy tlb mm refcounting to be configurable lazy tlb: shoot lazies, a non-refcounting lazy tlb option powerpc/64s: enable MMU_LAZY_TLB_SHOOTDOWN Lukas Bulwahn <lukas.bulwahn@gmail.com>: memory: remove unused CONFIG_MEM_BLOCK_SIZE Subsystem: mm/mprotect Liu Song <liu.song11@zte.com.cn>: mm/mprotect.c: avoid repeated assignment in do_mprotect_pkey() Subsystem: mm/mremap Dmitry Safonov <dima@arista.com>: mm/mremap: don't account pages in vma_to_resize() Subsystem: mm/iomap Lucas De Marchi <lucas.demarchi@intel.com>: 
include/linux/io-mapping.h: remove fallback for writecombine Subsystem: mm/tracing Gang Li <ligang.bdlg@bytedance.com>: mm: mmap_lock: remove redundant newline in TP_printk mm: mmap_lock: use DECLARE_EVENT_CLASS and DEFINE_EVENT_FN Subsystem: mm/vmalloc Vasily Averin <vvs@virtuozzo.com>: mm/vmalloc: repair warn_alloc()s in __vmalloc_area_node() Peter Zijlstra <peterz@infradead.org>: mm/vmalloc: don't allow VM_NO_GUARD on vmap() Eric Dumazet <edumazet@google.com>: mm/vmalloc: make show_numa_info() aware of hugepage mappings mm/vmalloc: make sure to dump unpurged areas in /proc/vmallocinfo "Uladzislau Rezki (Sony)" <urezki@gmail.com>: mm/vmalloc: do not adjust the search size for alignment overhead mm/vmalloc: check various alignments when debugging Vasily Averin <vvs@virtuozzo.com>: vmalloc: back off when the current task is OOM-killed Kefeng Wang <wangkefeng.wang@huawei.com>: vmalloc: choose a better start address in vm_area_register_early() arm64: support page mapping percpu first chunk allocator kasan: arm64: fix pcpu_page_first_chunk crash with KASAN_VMALLOC Michal Hocko <mhocko@suse.com>: mm/vmalloc: be more explicit about supported gfp flags Chen Wandun <chenwandun@huawei.com>: mm/vmalloc: introduce alloc_pages_bulk_array_mempolicy to accelerate memory allocation Changcheng Deng <deng.changcheng@zte.com.cn>: lib/test_vmalloc.c: use swap() to make code cleaner Subsystem: mm/pagealloc Eric Dumazet <edumazet@google.com>: mm/large system hash: avoid possible NULL deref in alloc_large_system_hash Miaohe Lin <linmiaohe@huawei.com>: Patch series "Cleanups and fixup for page_alloc", v2: mm/page_alloc.c: remove meaningless VM_BUG_ON() in pindex_to_order() mm/page_alloc.c: simplify the code by using macro K() mm/page_alloc.c: fix obsolete comment in free_pcppages_bulk() mm/page_alloc.c: use helper function zone_spans_pfn() mm/page_alloc.c: avoid allocating highmem pages via alloc_pages_exact[_nid] Bharata B Rao <bharata@amd.com>: Patch series "Fix NUMA nodes fallback 
list ordering": mm/page_alloc: print node fallback order Krupa Ramakrishnan <krupa.ramakrishnan@amd.com>: mm/page_alloc: use accumulated load when building node fallback list Geert Uytterhoeven <geert+renesas@glider.be>: Patch series "Fix NUMA without SMP": mm: move node_reclaim_distance to fix NUMA without SMP mm: move fold_vm_numa_events() to fix NUMA without SMP Eric Dumazet <edumazet@google.com>: mm/page_alloc.c: do not acquire zone lock in is_free_buddy_page() Feng Tang <feng.tang@intel.com>: mm/page_alloc: detect allocation forbidden by cpuset and bail out early Liangcai Fan <liangcaifan19@gmail.com>: mm/page_alloc.c: show watermark_boost of zone in zoneinfo Christophe Leroy <christophe.leroy@csgroup.eu>: mm: create a new system state and fix core_kernel_text() mm: make generic arch_is_kernel_initmem_freed() do what it says powerpc: use generic version of arch_is_kernel_initmem_freed() s390: use generic version of arch_is_kernel_initmem_freed() Sebastian Andrzej Siewior <bigeasy@linutronix.de>: mm: page_alloc: use migrate_disable() in drain_local_pages_wq() Wang ShaoBo <bobo.shaobowang@huawei.com>: mm/page_alloc: use clamp() to simplify code Subsystem: mm/memory-failure Marco Elver <elver@google.com>: mm: fix data race in PagePoisoned() Rikard Falkeborn <rikard.falkeborn@gmail.com>: mm/memory_failure: constify static mm_walk_ops Yang Shi <shy828301@gmail.com>: Patch series "Solve silent data loss caused by poisoned page cache (shmem/tmpfs)", v5: mm: filemap: coding style cleanup for filemap_map_pmd() mm: hwpoison: refactor refcount check handling mm: shmem: don't truncate page if memory failure happens mm: hwpoison: handle non-anonymous THP correctly Subsystem: mm/hugetlb Peter Xu <peterx@redhat.com>: mm/hugetlb: drop __unmap_hugepage_range definition from hugetlb.h Mike Kravetz <mike.kravetz@oracle.com>: Patch series "hugetlb: add demote/split page functionality", v4: hugetlb: add demote hugetlb page sysfs interfaces mm/cma: add cma_pages_valid to determine 
if pages are in CMA hugetlb: be sure to free demoted CMA pages to CMA hugetlb: add demote bool to gigantic page routines hugetlb: add hugetlb demote page support Liangcai Fan <liangcaifan19@gmail.com>: mm: khugepaged: recalculate min_free_kbytes after stopping khugepaged Mina Almasry <almasrymina@google.com>: mm, hugepages: add mremap() support for hugepage backed vma mm, hugepages: add hugetlb vma mremap() test Baolin Wang <baolin.wang@linux.alibaba.com>: hugetlb: support node specified when using cma for gigantic hugepages Ran Jianping <ran.jianping@zte.com.cn>: mm: remove duplicate include in hugepage-mremap.c Baolin Wang <baolin.wang@linux.alibaba.com>: Patch series "Some cleanups and improvements for hugetlb": hugetlb_cgroup: remove unused hugetlb_cgroup_from_counter macro hugetlb: replace the obsolete hugetlb_instantiation_mutex in the comments hugetlb: remove redundant validation in has_same_uncharge_info() hugetlb: remove redundant VM_BUG_ON() in add_reservation_in_range() Mike Kravetz <mike.kravetz@oracle.com>: hugetlb: remove unnecessary set_page_count in prep_compound_gigantic_page Subsystem: mm/userfaultfd Axel Rasmussen <axelrasmussen@google.com>: Patch series "Small userfaultfd selftest fixups", v2: userfaultfd/selftests: don't rely on GNU extensions for random numbers userfaultfd/selftests: fix feature support detection userfaultfd/selftests: fix calculation of expected ioctls Subsystem: mm/vmscan Miaohe Lin <linmiaohe@huawei.com>: mm/page_isolation: fix potential missing call to unset_migratetype_isolate() mm/page_isolation: guard against possible putback unisolated page Kai Song <songkai01@inspur.com>: mm/vmscan.c: fix -Wunused-but-set-variable warning Mel Gorman <mgorman@techsingularity.net>: Patch series "Remove dependency on congestion_wait in mm/", v5:
mm/vmscan: throttle reclaim until some writeback completes if congested mm/vmscan: throttle reclaim and compaction when too may pages are isolated mm/vmscan: throttle reclaim when no progress is being made mm/writeback: throttle based on page writeback instead of congestion mm/page_alloc: remove the throttling logic from the page allocator mm/vmscan: centralise timeout values for reclaim_throttle mm/vmscan: increase the timeout if page reclaim is not making progress mm/vmscan: delay waking of tasks throttled on NOPROGRESS Yuanzheng Song <songyuanzheng@huawei.com>: mm/vmpressure: fix data-race with memcg->socket_pressure Subsystem: mm/tools Zhenliang Wei <weizhenliang@huawei.com>: tools/vm/page_owner_sort.c: count and sort by mem Naoya Horiguchi <naoya.horiguchi@nec.com>: Patch series "tools/vm/page-types.c: a few improvements": tools/vm/page-types.c: make walk_file() aware of address range option tools/vm/page-types.c: move show_file() to summary output tools/vm/page-types.c: print file offset in hexadecimal Subsystem: mm/memblock Mike Rapoport <rppt@linux.ibm.com>: Patch series "memblock: cleanup memblock_free interface", v2: arch_numa: simplify numa_distance allocation xen/x86: free_p2m_page: use memblock_free_ptr() to free a virtual pointer memblock: drop memblock_free_early_nid() and memblock_free_early() memblock: stop aliasing __memblock_free_late with memblock_free_late memblock: rename memblock_free to memblock_phys_free memblock: use memblock_free for freeing virtual pointers Subsystem: mm/oom-kill Sultan Alsawaf <sultan@kerneltoast.com>: mm: mark the OOM reaper thread as freezable Subsystem: mm/hugetlbfs Zhenguo Yao <yaozhenguo1@gmail.com>: hugetlbfs: extend the definition of hugepages parameter to support node allocation Subsystem: mm/migration John Hubbard <jhubbard@nvidia.com>: mm/migrate: de-duplicate migrate_reason strings Yang Shi <shy828301@gmail.com>: mm: migrate: make demotion knob depend on migration Subsystem: mm/thp "George G. 
Davis" <davis.george@siemens.com>: selftests/vm/transhuge-stress: fix ram size thinko Rongwei Wang <rongwei.wang@linux.alibaba.com>: Patch series "fix two bugs for file THP": mm, thp: lock filemap when truncating page cache mm, thp: fix incorrect unmap behavior for private pages Subsystem: mm/readahead Lin Feng <linf@wangsu.com>: mm/readahead.c: fix incorrect comments for get_init_ra_size Subsystem: mm/nommu Kefeng Wang <wangkefeng.wang@huawei.com>: mm: nommu: kill arch_get_unmapped_area() Subsystem: mm/ksm "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>: selftest/vm: fix ksm selftest to run with different NUMA topologies Pedro Demarchi Gomes <pedrodemargomes@gmail.com>: selftests: vm: add KSM huge pages merging time test Subsystem: mm/vmstat Liu Shixin <liushixin2@huawei.com>: mm/vmstat: annotate data race for zone->free_area[order].nr_free Lin Feng <linf@wangsu.com>: mm: vmstat.c: make extfrag_index show more pretty Subsystem: mm/madvise David Hildenbrand <david@redhat.com>: selftests/vm: make MADV_POPULATE_(READ|WRITE) use in-tree headers Subsystem: mm/memory-hotplug Tang Yizhou <tangyizhou@huawei.com>: mm/memory_hotplug: add static qualifier for online_policy_to_str() David Hildenbrand <david@redhat.com>: Patch series "memory-hotplug.rst: document the "auto-movable" online policy": memory-hotplug.rst: fix two instances of "movablecore" that should be "movable_node" memory-hotplug.rst: fix wrong /sys/module/memory_hotplug/parameters/ path memory-hotplug.rst: document the "auto-movable" online policy Patch series "mm/memory_hotplug: Kconfig and 32 bit cleanups": mm/memory_hotplug: remove CONFIG_X86_64_ACPI_NUMA dependency from CONFIG_MEMORY_HOTPLUG mm/memory_hotplug: remove CONFIG_MEMORY_HOTPLUG_SPARSE mm/memory_hotplug: restrict CONFIG_MEMORY_HOTPLUG to 64 bit mm/memory_hotplug: remove HIGHMEM leftovers mm/memory_hotplug: remove stale function declarations x86: remove memory hotplug support on X86_32 Patch series "mm/memory_hotplug: full support for 
add_memory_driver_managed() with CONFIG_ARCH_KEEP_MEMBLOCK", v2: mm/memory_hotplug: handle memblock_add_node() failures in add_memory_resource() memblock: improve MEMBLOCK_HOTPLUG documentation memblock: allow to specify flags with memblock_add_node() memblock: add MEMBLOCK_DRIVER_MANAGED to mimic IORESOURCE_SYSRAM_DRIVER_MANAGED mm/memory_hotplug: indicate MEMBLOCK_DRIVER_MANAGED with IORESOURCE_SYSRAM_DRIVER_MANAGED Subsystem: mm/rmap Alistair Popple <apopple@nvidia.com>: mm/rmap.c: avoid double faults migrating device private pages Subsystem: mm/zsmalloc Miaohe Lin <linmiaohe@huawei.com>: mm/zsmalloc.c: close race window between zs_pool_dec_isolated() and zs_unregister_migration() Subsystem: mm/highmem Ira Weiny <ira.weiny@intel.com>: mm/highmem: remove deprecated kmap_atomic Subsystem: mm/zram Jaewon Kim <jaewon31.kim@samsung.com>: zram_drv: allow reclaim on bio_alloc Dan Carpenter <dan.carpenter@oracle.com>: zram: off by one in read_block_state() Brian Geffon <bgeffon@google.com>: zram: introduce an aged idle interface Subsystem: mm/cleanups Stephen Kitt <steve@sk2.org>: mm: remove HARDENED_USERCOPY_FALLBACK Mianhan Liu <liumh1@shanghaitech.edu.cn>: include/linux/mm.h: move nr_free_buffer_pages from swap.h to mm.h Subsystem: mm/kfence Marco Elver <elver@google.com>: stacktrace: move filter_irq_stacks() to kernel/stacktrace.c kfence: count unexpectedly skipped allocations kfence: move saving stack trace of allocations into __kfence_alloc() kfence: limit currently covered allocations when pool nearly full kfence: add note to documentation about skipping covered allocations kfence: test: use kunit_skip() to skip tests kfence: shorten critical sections of alloc/free kfence: always use static branches to guard kfence_alloc() kfence: default to dynamic branch instead of static keys mode Subsystem: mm/damon Geert Uytterhoeven <geert@linux-m68k.org>: mm/damon: grammar s/works/work/ SeongJae Park <sjpark@amazon.de>: Documentation/vm: move user guides to admin-guide/mm/ 
SeongJae Park <sj@kernel.org>: MAINTAINERS: update SeongJae's email address SeongJae Park <sjpark@amazon.de>: docs/vm/damon: remove broken reference include/linux/damon.h: fix kernel-doc comments for 'damon_callback' SeongJae Park <sj@kernel.org>: mm/damon/core: print kdamond start log in debug mode only Changbin Du <changbin.du@gmail.com>: mm/damon: remove unnecessary do_exit() from kdamond mm/damon: needn't hold kdamond_lock to print pid of kdamond Colin Ian King <colin.king@canonical.com>: mm/damon/core: nullify pointer ctx->kdamond with a NULL SeongJae Park <sj@kernel.org>: Patch series "Implement Data Access Monitoring-based Memory Operation Schemes": mm/damon/core: account age of target regions mm/damon/core: implement DAMON-based Operation Schemes (DAMOS) mm/damon/vaddr: support DAMON-based Operation Schemes mm/damon/dbgfs: support DAMON-based Operation Schemes mm/damon/schemes: implement statistics feature selftests/damon: add 'schemes' debugfs tests Docs/admin-guide/mm/damon: document DAMON-based Operation Schemes Patch series "DAMON: Support Physical Memory Address Space Monitoring": mm/damon/dbgfs: allow users to set initial monitoring target regions mm/damon/dbgfs-test: add a unit test case for 'init_regions' Docs/admin-guide/mm/damon: document 'init_regions' feature mm/damon/vaddr: separate commonly usable functions mm/damon: implement primitives for physical address space monitoring mm/damon/dbgfs: support physical memory monitoring Docs/DAMON: document physical memory monitoring support Rikard Falkeborn <rikard.falkeborn@gmail.com>: mm/damon/vaddr: constify static mm_walk_ops Rongwei Wang <rongwei.wang@linux.alibaba.com>: mm/damon/dbgfs: remove unnecessary variables SeongJae Park <sj@kernel.org>: mm/damon/paddr: support the pageout scheme mm/damon/schemes: implement size quota for schemes application speed control mm/damon/schemes: skip already charged targets and regions mm/damon/schemes: implement time quota mm/damon/dbgfs: support quotas of 
schemes mm/damon/selftests: support schemes quotas mm/damon/schemes: prioritize regions within the quotas mm/damon/vaddr,paddr: support pageout prioritization mm/damon/dbgfs: support prioritization weights tools/selftests/damon: update for regions prioritization of schemes mm/damon/schemes: activate schemes based on a watermarks mechanism mm/damon/dbgfs: support watermarks selftests/damon: support watermarks mm/damon: introduce DAMON-based Reclamation (DAMON_RECLAIM) Documentation/admin-guide/mm/damon: add a document for DAMON_RECLAIM Xin Hao <xhao@linux.alibaba.com>: Patch series "mm/damon: Fix some small bugs", v4: mm/damon: remove unnecessary variable initialization mm/damon/dbgfs: add adaptive_targets list check before enable monitor_on SeongJae Park <sj@kernel.org>: Patch series "Fix trivial nits in Documentation/admin-guide/mm": Docs/admin-guide/mm/damon/start: fix wrong example commands Docs/admin-guide/mm/damon/start: fix a wrong link Docs/admin-guide/mm/damon/start: simplify the content Docs/admin-guide/mm/pagemap: wordsmith page flags descriptions Changbin Du <changbin.du@gmail.com>: mm/damon: simplify stop mechanism Colin Ian King <colin.i.king@googlemail.com>: mm/damon: fix a few spelling mistakes in comments and a pr_debug message Changbin Du <changbin.du@gmail.com>: mm/damon: remove return value from before_terminate callback a/Documentation/admin-guide/blockdev/zram.rst | 8 a/Documentation/admin-guide/cgroup-v1/memory.rst | 11 a/Documentation/admin-guide/kernel-parameters.txt | 14 a/Documentation/admin-guide/mm/damon/index.rst | 1 a/Documentation/admin-guide/mm/damon/reclaim.rst | 235 +++ a/Documentation/admin-guide/mm/damon/start.rst | 140 + a/Documentation/admin-guide/mm/damon/usage.rst | 117 + a/Documentation/admin-guide/mm/hugetlbpage.rst | 42 a/Documentation/admin-guide/mm/memory-hotplug.rst | 147 +- a/Documentation/admin-guide/mm/pagemap.rst | 75 - a/Documentation/core-api/memory-hotplug.rst | 3 a/Documentation/dev-tools/kfence.rst | 23 
 a/Documentation/translations/zh_CN/core-api/memory-hotplug.rst | 4
 a/Documentation/vm/damon/design.rst | 29
 a/Documentation/vm/damon/faq.rst | 5
 a/Documentation/vm/damon/index.rst | 1
 a/Documentation/vm/page_owner.rst | 23
 a/MAINTAINERS | 2
 a/Makefile | 15
 a/arch/Kconfig | 28
 a/arch/alpha/kernel/core_irongate.c | 6
 a/arch/arc/mm/init.c | 6
 a/arch/arm/mach-hisi/platmcpm.c | 2
 a/arch/arm/mach-rpc/ecard.c | 2
 a/arch/arm/mm/init.c | 2
 a/arch/arm64/Kconfig | 4
 a/arch/arm64/mm/kasan_init.c | 16
 a/arch/arm64/mm/mmu.c | 4
 a/arch/ia64/mm/contig.c | 2
 a/arch/ia64/mm/init.c | 2
 a/arch/m68k/mm/mcfmmu.c | 3
 a/arch/m68k/mm/motorola.c | 6
 a/arch/mips/loongson64/init.c | 4
 a/arch/mips/mm/init.c | 6
 a/arch/mips/sgi-ip27/ip27-memory.c | 3
 a/arch/mips/sgi-ip30/ip30-setup.c | 6
 a/arch/powerpc/Kconfig | 1
 a/arch/powerpc/configs/skiroot_defconfig | 1
 a/arch/powerpc/include/asm/machdep.h | 2
 a/arch/powerpc/include/asm/sections.h | 13
 a/arch/powerpc/kernel/dt_cpu_ftrs.c | 8
 a/arch/powerpc/kernel/paca.c | 8
 a/arch/powerpc/kernel/setup-common.c | 4
 a/arch/powerpc/kernel/setup_64.c | 6
 a/arch/powerpc/kernel/smp.c | 2
 a/arch/powerpc/mm/book3s64/radix_tlb.c | 4
 a/arch/powerpc/mm/hugetlbpage.c | 9
 a/arch/powerpc/platforms/powernv/pci-ioda.c | 4
 a/arch/powerpc/platforms/powernv/setup.c | 4
 a/arch/powerpc/platforms/pseries/setup.c | 2
 a/arch/powerpc/platforms/pseries/svm.c | 9
 a/arch/riscv/kernel/setup.c | 10
 a/arch/s390/include/asm/sections.h | 12
 a/arch/s390/kernel/setup.c | 11
 a/arch/s390/kernel/smp.c | 6
 a/arch/s390/kernel/uv.c | 2
 a/arch/s390/mm/init.c | 3
 a/arch/s390/mm/kasan_init.c | 2
 a/arch/sh/boards/mach-ap325rxa/setup.c | 2
 a/arch/sh/boards/mach-ecovec24/setup.c | 4
 a/arch/sh/boards/mach-kfr2r09/setup.c | 2
 a/arch/sh/boards/mach-migor/setup.c | 2
 a/arch/sh/boards/mach-se/7724/setup.c | 4
 a/arch/sparc/kernel/smp_64.c | 4
 a/arch/um/kernel/mem.c | 4
 a/arch/x86/Kconfig | 6
 a/arch/x86/kernel/setup.c | 4
 a/arch/x86/kernel/setup_percpu.c | 2
 a/arch/x86/mm/init.c | 2
 a/arch/x86/mm/init_32.c | 31
 a/arch/x86/mm/kasan_init_64.c | 4
 a/arch/x86/mm/numa.c | 2
 a/arch/x86/mm/numa_emulation.c | 2
 a/arch/x86/xen/mmu_pv.c | 8
 a/arch/x86/xen/p2m.c | 4
 a/arch/x86/xen/setup.c | 6
 a/drivers/base/Makefile | 2
 a/drivers/base/arch_numa.c | 96 +
 a/drivers/base/node.c | 9
 a/drivers/block/zram/zram_drv.c | 66
 a/drivers/firmware/efi/memmap.c | 2
 a/drivers/hwmon/occ/p9_sbe.c | 1
 a/drivers/macintosh/smu.c | 2
 a/drivers/mmc/core/mmc_test.c | 1
 a/drivers/mtd/mtdcore.c | 1
 a/drivers/of/kexec.c | 4
 a/drivers/of/of_reserved_mem.c | 5
 a/drivers/rapidio/devices/rio_mport_cdev.c | 9
 a/drivers/s390/char/sclp_early.c | 4
 a/drivers/usb/early/xhci-dbc.c | 10
 a/drivers/virtio/Kconfig | 2
 a/drivers/xen/swiotlb-xen.c | 4
 a/fs/d_path.c | 8
 a/fs/exec.c | 4
 a/fs/ocfs2/alloc.c | 21
 a/fs/ocfs2/dlm/dlmrecovery.c | 1
 a/fs/ocfs2/file.c | 8
 a/fs/ocfs2/inode.c | 4
 a/fs/ocfs2/journal.c | 28
 a/fs/ocfs2/journal.h | 3
 a/fs/ocfs2/super.c | 40
 a/fs/open.c | 16
 a/fs/posix_acl.c | 3
 a/fs/proc/task_mmu.c | 28
 a/fs/super.c | 3
 a/include/asm-generic/sections.h | 14
 a/include/linux/backing-dev-defs.h | 3
 a/include/linux/backing-dev.h | 1
 a/include/linux/cma.h | 1
 a/include/linux/compiler-gcc.h | 8
 a/include/linux/compiler_attributes.h | 10
 a/include/linux/compiler_types.h | 12
 a/include/linux/cpuset.h | 17
 a/include/linux/damon.h | 258 +++
 a/include/linux/fs.h | 1
 a/include/linux/gfp.h | 8
 a/include/linux/highmem.h | 28
 a/include/linux/hugetlb.h | 36
 a/include/linux/io-mapping.h | 6
 a/include/linux/kasan.h | 8
 a/include/linux/kernel.h | 1
 a/include/linux/kfence.h | 21
 a/include/linux/memblock.h | 48
 a/include/linux/memcontrol.h | 9
 a/include/linux/memory.h | 26
 a/include/linux/memory_hotplug.h | 3
 a/include/linux/mempolicy.h | 5
 a/include/linux/migrate.h | 23
 a/include/linux/migrate_mode.h | 13
 a/include/linux/mm.h | 57
 a/include/linux/mm_types.h | 2
 a/include/linux/mmzone.h | 41
 a/include/linux/node.h | 4
 a/include/linux/page-flags.h | 2
 a/include/linux/percpu.h | 6
 a/include/linux/sched/mm.h | 25
 a/include/linux/slab.h | 181 +-
 a/include/linux/slub_def.h | 13
 a/include/linux/stackdepot.h | 8
 a/include/linux/stacktrace.h | 1
 a/include/linux/swap.h | 1
 a/include/linux/vmalloc.h | 24
 a/include/trace/events/mmap_lock.h | 50
 a/include/trace/events/vmscan.h | 42
 a/include/trace/events/writeback.h | 7
 a/init/Kconfig | 2
 a/init/initramfs.c | 4
 a/init/main.c | 6
 a/kernel/cgroup/cpuset.c | 23
 a/kernel/cpu.c | 2
 a/kernel/dma/swiotlb.c | 6
 a/kernel/exit.c | 2
 a/kernel/extable.c | 2
 a/kernel/fork.c | 51
 a/kernel/kexec_file.c | 5
 a/kernel/kthread.c | 21
 a/kernel/locking/lockdep.c | 15
 a/kernel/printk/printk.c | 4
 a/kernel/sched/core.c | 37
 a/kernel/sched/sched.h | 4
 a/kernel/sched/topology.c | 1
 a/kernel/stacktrace.c | 30
 a/kernel/tsacct.c | 2
 a/kernel/workqueue.c | 2
 a/lib/Kconfig.debug | 2
 a/lib/Kconfig.kfence | 26
 a/lib/bootconfig.c | 2
 a/lib/cpumask.c | 6
 a/lib/stackdepot.c | 76 -
 a/lib/test_kasan.c | 26
 a/lib/test_kasan_module.c | 2
 a/lib/test_vmalloc.c | 6
 a/mm/Kconfig | 10
 a/mm/backing-dev.c | 65
 a/mm/cma.c | 26
 a/mm/compaction.c | 12
 a/mm/damon/Kconfig | 24
 a/mm/damon/Makefile | 4
 a/mm/damon/core.c | 500 ++++++-
 a/mm/damon/dbgfs-test.h | 56
 a/mm/damon/dbgfs.c | 486 +++++-
 a/mm/damon/paddr.c | 275 +++
 a/mm/damon/prmtv-common.c | 133 +
 a/mm/damon/prmtv-common.h | 20
 a/mm/damon/reclaim.c | 356 ++++
 a/mm/damon/vaddr-test.h | 2
 a/mm/damon/vaddr.c | 167 +-
 a/mm/debug.c | 20
 a/mm/debug_vm_pgtable.c | 7
 a/mm/filemap.c | 78 -
 a/mm/gup.c | 5
 a/mm/highmem.c | 6
 a/mm/hugetlb.c | 713 +++++++++
 a/mm/hugetlb_cgroup.c | 3
 a/mm/internal.h | 26
 a/mm/kasan/common.c | 8
 a/mm/kasan/generic.c | 16
 a/mm/kasan/kasan.h | 2
 a/mm/kasan/shadow.c | 5
 a/mm/kfence/core.c | 214 ++-
 a/mm/kfence/kfence.h | 2
 a/mm/kfence/kfence_test.c | 14
 a/mm/khugepaged.c | 10
 a/mm/list_lru.c | 58
 a/mm/memblock.c | 35
 a/mm/memcontrol.c | 217 +--
 a/mm/memory-failure.c | 117 +
 a/mm/memory.c | 166 +-
 a/mm/memory_hotplug.c | 57
 a/mm/mempolicy.c | 143 +-
 a/mm/migrate.c | 61
 a/mm/mmap.c | 2
 a/mm/mprotect.c | 5
 a/mm/mremap.c | 86 -
 a/mm/nommu.c | 6
 a/mm/oom_kill.c | 27
 a/mm/page-writeback.c | 13
 a/mm/page_alloc.c | 119 -
 a/mm/page_ext.c | 2
 a/mm/page_isolation.c | 29
 a/mm/percpu.c | 24
 a/mm/readahead.c | 2
 a/mm/rmap.c | 8
 a/mm/shmem.c | 44
 a/mm/slab.c | 16
 a/mm/slab_common.c | 8
 a/mm/slub.c | 117 -
 a/mm/sparse-vmemmap.c | 2
 a/mm/sparse.c | 6
 a/mm/swap.c | 23
 a/mm/swapfile.c | 6
 a/mm/userfaultfd.c | 8
 a/mm/vmalloc.c | 107 +
 a/mm/vmpressure.c | 2
 a/mm/vmscan.c | 194 ++
 a/mm/vmstat.c | 76 -
 a/mm/zsmalloc.c | 7
 a/net/ipv4/tcp.c | 1
 a/net/ipv4/udp.c | 1
 a/net/netfilter/ipvs/ip_vs_ctl.c | 1
 a/net/openvswitch/meter.c | 1
 a/net/sctp/protocol.c | 1
 a/scripts/checkpatch.pl | 3
 a/scripts/decodecode | 2
 a/scripts/spelling.txt | 18
 a/security/Kconfig | 14
 a/tools/testing/selftests/damon/debugfs_attrs.sh | 25
 a/tools/testing/selftests/memory-hotplug/config | 1
 a/tools/testing/selftests/vm/.gitignore | 1
 a/tools/testing/selftests/vm/Makefile | 1
 a/tools/testing/selftests/vm/hugepage-mremap.c | 161 ++
 a/tools/testing/selftests/vm/ksm_tests.c | 154 ++
 a/tools/testing/selftests/vm/madv_populate.c | 15
 a/tools/testing/selftests/vm/run_vmtests.sh | 11
 a/tools/testing/selftests/vm/transhuge-stress.c | 2
 a/tools/testing/selftests/vm/userfaultfd.c | 157 +-
 a/tools/vm/page-types.c | 38
 a/tools/vm/page_owner_sort.c | 94 +
 b/Documentation/admin-guide/mm/index.rst | 2
 b/Documentation/vm/index.rst | 26
 260 files changed, 6448 insertions(+), 2327 deletions(-)
* incoming
@ 2021-10-28 21:35 Andrew Morton
0 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2021-10-28 21:35 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits

11 patches, based on 411a44c24a561e449b592ff631b7ae321f1eb559.

Subsystems affected by this patch series: mm/memcg mm/memory-failure mm/oom-kill ocfs2 mm/secretmem mm/vmalloc mm/hugetlb mm/damon mm/tools

Subsystem: mm/memcg

Shakeel Butt <shakeelb@google.com>:
  memcg: page_alloc: skip bulk allocator for __GFP_ACCOUNT

Subsystem: mm/memory-failure

Yang Shi <shy828301@gmail.com>:
  mm: hwpoison: remove the unnecessary THP check
  mm: filemap: check if THP has hwpoisoned subpage for PMD page fault

Subsystem: mm/oom-kill

Suren Baghdasaryan <surenb@google.com>:
  mm/oom_kill.c: prevent a race between process_mrelease and exit_mmap

Subsystem: ocfs2

Gautham Ananthakrishna <gautham.ananthakrishna@oracle.com>:
  ocfs2: fix race between searching chunks and release journal_head from buffer_head

Subsystem: mm/secretmem

Kees Cook <keescook@chromium.org>:
  mm/secretmem: avoid letting secretmem_users drop to zero

Subsystem: mm/vmalloc

Chen Wandun <chenwandun@huawei.com>:
  mm/vmalloc: fix numa spreading for large hash tables

Subsystem: mm/hugetlb

Rongwei Wang <rongwei.wang@linux.alibaba.com>:
  mm, thp: bail out early in collapse_file for writeback page

Yang Shi <shy828301@gmail.com>:
  mm: khugepaged: skip huge page collapse for special files

Subsystem: mm/damon

SeongJae Park <sj@kernel.org>:
  mm/damon/core-test: fix wrong expectations for 'damon_split_regions_of()'

Subsystem: mm/tools

David Yang <davidcomponentone@gmail.com>:
  tools/testing/selftests/vm/split_huge_page_test.c: fix application of sizeof to pointer

 fs/ocfs2/suballoc.c | 22 ++++++++-------
 include/linux/page-flags.h | 23 ++++++++++++++++++
 mm/damon/core-test.h | 4 +--
 mm/huge_memory.c | 2 +
 mm/khugepaged.c | 26 +++++++++++++-------
 mm/memory-failure.c | 28 +++++++++++-----------
 mm/memory.c | 9 +++++++
 mm/oom_kill.c | 23 +++++++++---------
 mm/page_alloc.c | 8 +++++-
 mm/secretmem.c | 2 -
 mm/vmalloc.c | 15 +++++++----
 tools/testing/selftests/vm/split_huge_page_test.c | 2 -
 12 files changed, 110 insertions(+), 54 deletions(-)
* incoming
@ 2021-10-18 22:14 Andrew Morton
0 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2021-10-18 22:14 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits

19 patches, based on 519d81956ee277b4419c723adfb154603c2565ba.

Subsystems affected by this patch series: mm/userfaultfd mm/migration ocfs2 mm/memblock mm/mempolicy mm/slub binfmt vfs mm/secretmem mm/thp misc

Subsystem: mm/userfaultfd

Peter Xu <peterx@redhat.com>:
  mm/userfaultfd: selftests: fix memory corruption with thp enabled

Nadav Amit <namit@vmware.com>:
  userfaultfd: fix a race between writeprotect and exit_mmap()

Subsystem: mm/migration

Dave Hansen <dave.hansen@linux.intel.com>:
Patch series "mm/migrate: 5.15 fixes for automatic demotion", v2:
  mm/migrate: optimize hotplug-time demotion order updates
  mm/migrate: add CPU hotplug to demotion #ifdef

Huang Ying <ying.huang@intel.com>:
  mm/migrate: fix CPUHP state to update node demotion order

Subsystem: ocfs2

Jan Kara <jack@suse.cz>:
  ocfs2: fix data corruption after conversion from inline format

Valentin Vidic <vvidic@valentin-vidic.from.hr>:
  ocfs2: mount fails with buffer overflow in strlen

Subsystem: mm/memblock

Peng Fan <peng.fan@nxp.com>:
  memblock: check memory total_size

Subsystem: mm/mempolicy

Eric Dumazet <edumazet@google.com>:
  mm/mempolicy: do not allow illegal MPOL_F_NUMA_BALANCING | MPOL_LOCAL in mbind()

Subsystem: mm/slub

Miaohe Lin <linmiaohe@huawei.com>:
Patch series "Fixups for slub":
  mm, slub: fix two bugs in slab_debug_trace_open()
  mm, slub: fix mismatch between reconstructed freelist depth and cnt
  mm, slub: fix potential memoryleak in kmem_cache_open()
  mm, slub: fix potential use-after-free in slab_debugfs_fops
  mm, slub: fix incorrect memcg slab count for bulk free

Subsystem: binfmt

Lukas Bulwahn <lukas.bulwahn@gmail.com>:
  elfcore: correct reference to CONFIG_UML

Subsystem: vfs

"Matthew Wilcox (Oracle)" <willy@infradead.org>:
  vfs: check fd has read access in kernel_read_file_from_fd()

Subsystem: mm/secretmem

Sean Christopherson <seanjc@google.com>:
  mm/secretmem: fix NULL page->mapping dereference in page_is_secretmem()

Subsystem: mm/thp

Marek Szyprowski <m.szyprowski@samsung.com>:
  mm/thp: decrease nr_thps in file's mapping on THP split

Subsystem: misc

Andrej Shadura <andrew.shadura@collabora.co.uk>:
  mailmap: add Andrej Shadura

 .mailmap | 2 +
 fs/kernel_read_file.c | 2 -
 fs/ocfs2/alloc.c | 46 ++++++-----------------
 fs/ocfs2/super.c | 14 +++++--
 fs/userfaultfd.c | 12 ++++--
 include/linux/cpuhotplug.h | 4 ++
 include/linux/elfcore.h | 2 -
 include/linux/memory.h | 5 ++
 include/linux/secretmem.h | 2 -
 mm/huge_memory.c | 6 ++-
 mm/memblock.c | 2 -
 mm/mempolicy.c | 16 ++------
 mm/migrate.c | 62 ++++++++++++++++++-------------
 mm/page_ext.c | 4 --
 mm/slab.c | 4 +-
 mm/slub.c | 31 ++++++++++++---
 tools/testing/selftests/vm/userfaultfd.c | 23 ++++++++++-
 17 files changed, 138 insertions(+), 99 deletions(-)
* incoming
@ 2021-09-24 22:42 Andrew Morton
0 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2021-09-24 22:42 UTC (permalink / raw)
To: Linus Torvalds; +Cc: mm-commits, linux-mm

16 patches, based on 7d42e98182586f57f376406d033f05fe135edb75.

Subsystems affected by this patch series: mm/memory-failure mm/kasan mm/damon xtensa mm/shmem ocfs2 scripts mm/tools lib mm/pagecache mm/debug sh mm/kasan mm/memory-failure mm/pagemap

Subsystem: mm/memory-failure

Naoya Horiguchi <naoya.horiguchi@nec.com>:
  mm, hwpoison: add is_free_buddy_page() in HWPoisonHandlable()

Subsystem: mm/kasan

Marco Elver <elver@google.com>:
  kasan: fix Kconfig check of CC_HAS_WORKING_NOSANITIZE_ADDRESS

Subsystem: mm/damon

Adam Borowski <kilobyte@angband.pl>:
  mm/damon: don't use strnlen() with known-bogus source length

Subsystem: xtensa

Guenter Roeck <linux@roeck-us.net>:
  xtensa: increase size of gcc stack frame check

Subsystem: mm/shmem

Liu Yuntao <liuyuntao10@huawei.com>:
  mm/shmem.c: fix judgment error in shmem_is_huge()

Subsystem: ocfs2

Wengang Wang <wen.gang.wang@oracle.com>:
  ocfs2: drop acl cache for directories too

Subsystem: scripts

Miles Chen <miles.chen@mediatek.com>:
  scripts/sorttable: riscv: fix undeclared identifier 'EM_RISCV' error

Subsystem: mm/tools

Changbin Du <changbin.du@gmail.com>:
  tools/vm/page-types: remove dependency on opt_file for idle page tracking

Subsystem: lib

Paul Menzel <pmenzel@molgen.mpg.de>:
  lib/zlib_inflate/inffast: check config in C to avoid unused function warning

Subsystem: mm/pagecache

Minchan Kim <minchan@kernel.org>:
  mm: fs: invalidate bh_lrus for only cold path

Subsystem: mm/debug

Weizhao Ouyang <o451686892@gmail.com>:
  mm/debug: sync up MR_CONTIG_RANGE and MR_LONGTERM_PIN
  mm/debug: sync up latest migrate_reason to migrate_reason_names

Subsystem: sh

Geert Uytterhoeven <geert+renesas@glider.be>:
  sh: pgtable-3level: fix cast to pointer from integer of different size

Subsystem: mm/kasan

Nathan Chancellor <nathan@kernel.org>:
  kasan: always respect CONFIG_KASAN_STACK

Subsystem: mm/memory-failure

Qi Zheng <zhengqi.arch@bytedance.com>:
  mm/memory_failure: fix the missing pte_unmap() call

Subsystem: mm/pagemap

Chen Jun <chenjun102@huawei.com>:
  mm: fix uninitialized use in overcommit_policy_handler

 arch/sh/include/asm/pgtable-3level.h | 2 +-
 fs/buffer.c | 8 ++++++--
 fs/ocfs2/dlmglue.c | 3 ++-
 include/linux/buffer_head.h | 4 ++--
 include/linux/migrate.h | 6 +++++-
 lib/Kconfig.debug | 2 +-
 lib/Kconfig.kasan | 2 ++
 lib/zlib_inflate/inffast.c | 13 ++++++-------
 mm/damon/dbgfs-test.h | 16 ++++++++--------
 mm/debug.c | 4 +++-
 mm/memory-failure.c | 12 ++++++------
 mm/shmem.c | 4 ++--
 mm/swap.c | 19 ++++++++++++++---
 mm/util.c | 4 ++--
 scripts/Makefile.kasan | 3 ++-
 scripts/sorttable.c | 4 ++++
 tools/vm/page-types.c | 2 +-
 17 files changed, 69 insertions(+), 39 deletions(-)
* incoming
@ 2021-09-10 3:09 Andrew Morton
2021-09-10 17:11 ` incoming Kees Cook
0 siblings, 1 reply; 349+ messages in thread
From: Andrew Morton @ 2021-09-10 3:09 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits

More post linux-next material.

9 patches, based on f154c806676ad7153c6e161f30c53a44855329d6.

Subsystems affected by this patch series: mm/slab-generic rapidio mm/debug

Subsystem: mm/slab-generic

"Matthew Wilcox (Oracle)" <willy@infradead.org>:
  mm: move kvmalloc-related functions to slab.h

Subsystem: rapidio

Kees Cook <keescook@chromium.org>:
  rapidio: avoid bogus __alloc_size warning

Subsystem: mm/debug

Kees Cook <keescook@chromium.org>:
Patch series "Add __alloc_size() for better bounds checking", v2:
  Compiler Attributes: add __alloc_size() for better bounds checking
  checkpatch: add __alloc_size() to known $Attribute
  slab: clean up function declarations
  slab: add __alloc_size attributes for better bounds checking
  mm/page_alloc: add __alloc_size attributes for better bounds checking
  percpu: add __alloc_size attributes for better bounds checking
  mm/vmalloc: add __alloc_size attributes for better bounds checking

 Makefile | 15 +++
 drivers/of/kexec.c | 1
 drivers/rapidio/devices/rio_mport_cdev.c | 9 +-
 include/linux/compiler_attributes.h | 6 +
 include/linux/gfp.h | 2
 include/linux/mm.h | 34 --------
 include/linux/percpu.h | 3
 include/linux/slab.h | 122 ++++++++++++++++++++++---------
 include/linux/vmalloc.h | 11 ++
 scripts/checkpatch.pl | 3
 10 files changed, 132 insertions(+), 74 deletions(-)
* Re: incoming
2021-09-10 3:09 incoming Andrew Morton
@ 2021-09-10 17:11 ` Kees Cook
2021-09-10 20:13 ` incoming Kees Cook
0 siblings, 1 reply; 349+ messages in thread
From: Kees Cook @ 2021-09-10 17:11 UTC (permalink / raw)
To: Linus Torvalds, Andrew Morton; +Cc: linux-mm, mm-commits

On Thu, Sep 09, 2021 at 08:09:48PM -0700, Andrew Morton wrote:
>
> More post linux-next material.
>
> 9 patches, based on f154c806676ad7153c6e161f30c53a44855329d6.
>
> Subsystems affected by this patch series:
>
> mm/slab-generic
> rapidio
> mm/debug
>
> Subsystem: mm/slab-generic
>
> "Matthew Wilcox (Oracle)" <willy@infradead.org>:
> mm: move kvmalloc-related functions to slab.h
>
> Subsystem: rapidio
>
> Kees Cook <keescook@chromium.org>:
> rapidio: avoid bogus __alloc_size warning
>
> Subsystem: mm/debug
>
> Kees Cook <keescook@chromium.org>:
> Patch series "Add __alloc_size() for better bounds checking", v2:
> Compiler Attributes: add __alloc_size() for better bounds checking
> checkpatch: add __alloc_size() to known $Attribute
> slab: clean up function declarations
> slab: add __alloc_size attributes for better bounds checking
> mm/page_alloc: add __alloc_size attributes for better bounds checking
> percpu: add __alloc_size attributes for better bounds checking
> mm/vmalloc: add __alloc_size attributes for better bounds checking

Hi,

FYI, in overnight build testing I found yet another corner case in GCC's handling of the __alloc_size attribute. It's the gift that keeps on giving. The fix is here:

https://lore.kernel.org/lkml/20210910165851.3296624-1-keescook@chromium.org/

>
> Makefile | 15 +++
> drivers/of/kexec.c | 1
> drivers/rapidio/devices/rio_mport_cdev.c | 9 +-
> include/linux/compiler_attributes.h | 6 +
> include/linux/gfp.h | 2
> include/linux/mm.h | 34 --------
> include/linux/percpu.h | 3
> include/linux/slab.h | 122 ++++++++++++++++++++++---------
> include/linux/vmalloc.h | 11 ++
> scripts/checkpatch.pl | 3
> 10 files changed, 132 insertions(+), 74 deletions(-)
>

-- 
Kees Cook
* Re: incoming
2021-09-10 17:11 ` incoming Kees Cook
@ 2021-09-10 20:13 ` Kees Cook
0 siblings, 0 replies; 349+ messages in thread
From: Kees Cook @ 2021-09-10 20:13 UTC (permalink / raw)
To: linux-kernel; +Cc: Linus Torvalds, Andrew Morton, linux-mm, mm-commits

On Fri, Sep 10, 2021 at 10:11:53AM -0700, Kees Cook wrote:
> On Thu, Sep 09, 2021 at 08:09:48PM -0700, Andrew Morton wrote:
> >
> > More post linux-next material.
> >
> > 9 patches, based on f154c806676ad7153c6e161f30c53a44855329d6.
> >
> > Subsystems affected by this patch series:
> >
> > mm/slab-generic
> > rapidio
> > mm/debug
> >
> > Subsystem: mm/slab-generic
> >
> > "Matthew Wilcox (Oracle)" <willy@infradead.org>:
> > mm: move kvmalloc-related functions to slab.h
> >
> > Subsystem: rapidio
> >
> > Kees Cook <keescook@chromium.org>:
> > rapidio: avoid bogus __alloc_size warning
> >
> > Subsystem: mm/debug
> >
> > Kees Cook <keescook@chromium.org>:
> > Patch series "Add __alloc_size() for better bounds checking", v2:
> > Compiler Attributes: add __alloc_size() for better bounds checking
> > checkpatch: add __alloc_size() to known $Attribute
> > slab: clean up function declarations
> > slab: add __alloc_size attributes for better bounds checking
> > mm/page_alloc: add __alloc_size attributes for better bounds checking
> > percpu: add __alloc_size attributes for better bounds checking
> > mm/vmalloc: add __alloc_size attributes for better bounds checking
>
> Hi,
>
> FYI, in overnight build testing I found yet another corner case in
> GCC's handling of the __alloc_size attribute. It's the gift that keeps
> on giving. The fix is here:
>
> https://lore.kernel.org/lkml/20210910165851.3296624-1-keescook@chromium.org/

I'm so glad it's Friday. Here's the v2 fix... *sigh*

https://lore.kernel.org/lkml/20210910201132.3809437-1-keescook@chromium.org/

-Kees

> >
> > Makefile | 15 +++
> > drivers/of/kexec.c | 1
> > drivers/rapidio/devices/rio_mport_cdev.c | 9 +-
> > include/linux/compiler_attributes.h | 6 +
> > include/linux/gfp.h | 2
> > include/linux/mm.h | 34 --------
> > include/linux/percpu.h | 3
> > include/linux/slab.h | 122 ++++++++++++++++++++++---------
> > include/linux/vmalloc.h | 11 ++
> > scripts/checkpatch.pl | 3
> > 10 files changed, 132 insertions(+), 74 deletions(-)
> >
>
> --
> Kees Cook

-- 
Kees Cook
* incoming
@ 2021-09-09 1:08 Andrew Morton
0 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2021-09-09 1:08 UTC (permalink / raw)
To: Linus Torvalds; +Cc: mm-commits, linux-mm

A bunch of hotfixes, mostly cc:stable.

8 patches, based on 2d338201d5311bcd79d42f66df4cecbcbc5f4f2c.

Subsystems affected by this patch series: mm/hmm mm/hugetlb mm/vmscan mm/pagealloc mm/pagemap mm/kmemleak mm/mempolicy mm/memblock

Subsystem: mm/hmm

Li Zhijian <lizhijian@cn.fujitsu.com>:
  mm/hmm: bypass devmap pte when all pfn requested flags are fulfilled

Subsystem: mm/hugetlb

Liu Zixian <liuzixian4@huawei.com>:
  mm/hugetlb: initialize hugetlb_usage in mm_init

Subsystem: mm/vmscan

Rik van Riel <riel@surriel.com>:
  mm,vmscan: fix divide by zero in get_scan_count

Subsystem: mm/pagealloc

Miaohe Lin <linmiaohe@huawei.com>:
  mm/page_alloc.c: avoid accessing uninitialized pcp page migratetype

Subsystem: mm/pagemap

Liam Howlett <liam.howlett@oracle.com>:
  mmap_lock: change trace and locking order

Subsystem: mm/kmemleak

Naohiro Aota <naohiro.aota@wdc.com>:
  mm/kmemleak: allow __GFP_NOLOCKDEP passed to kmemleak's gfp

Subsystem: mm/mempolicy

yanghui <yanghui.def@bytedance.com>:
  mm/mempolicy: fix a race between offset_il_node and mpol_rebind_task

Subsystem: mm/memblock

Mike Rapoport <rppt@linux.ibm.com>:
  nds32/setup: remove unused memblock_region variable in setup_memory()

 arch/nds32/kernel/setup.c | 1 -
 include/linux/hugetlb.h | 9 +++++++++
 include/linux/mmap_lock.h | 8 ++++----
 kernel/fork.c | 1 +
 mm/hmm.c | 5 ++++-
 mm/kmemleak.c | 3 ++-
 mm/mempolicy.c | 17 +++++++++++++----
 mm/page_alloc.c | 4 +++-
 mm/vmscan.c | 2 +-
 9 files changed, 37 insertions(+), 13 deletions(-)
* incoming
@ 2021-09-08 22:17 Andrew Morton
0 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2021-09-08 22:17 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits

This is the post-linux-next material, so it is based upon latest upstream to catch the now-merged dependencies.

10 patches, based on 2d338201d5311bcd79d42f66df4cecbcbc5f4f2c.

Subsystems affected by this patch series: mm/vmstat mm/migration compat

Subsystem: mm/vmstat

Ingo Molnar <mingo@elte.hu>:
  mm/vmstat: protect per cpu variables with preempt disable on RT

Subsystem: mm/migration

Baolin Wang <baolin.wang@linux.alibaba.com>:
  mm: migrate: introduce a local variable to get the number of pages
  mm: migrate: fix the incorrect function name in comments
  mm: migrate: change to use bool type for 'page_was_mapped'

Subsystem: compat

Arnd Bergmann <arnd@arndb.de>:
Patch series "compat: remove compat_alloc_user_space", v5:
  kexec: move locking into do_kexec_load
  kexec: avoid compat_alloc_user_space
  mm: simplify compat_sys_move_pages
  mm: simplify compat numa syscalls
  compat: remove some compat entry points
  arch: remove compat_alloc_user_space

 arch/arm64/include/asm/compat.h | 5
 arch/arm64/include/asm/uaccess.h | 11 -
 arch/arm64/include/asm/unistd32.h | 10 -
 arch/arm64/lib/Makefile | 2
 arch/arm64/lib/copy_in_user.S | 77 ----------
 arch/mips/cavium-octeon/octeon-memcpy.S | 2
 arch/mips/include/asm/compat.h | 8 -
 arch/mips/include/asm/uaccess.h | 26 ---
 arch/mips/kernel/syscalls/syscall_n32.tbl | 10 -
 arch/mips/kernel/syscalls/syscall_o32.tbl | 10 -
 arch/mips/lib/memcpy.S | 11 -
 arch/parisc/include/asm/compat.h | 6
 arch/parisc/include/asm/uaccess.h | 2
 arch/parisc/kernel/syscalls/syscall.tbl | 8 -
 arch/parisc/lib/memcpy.c | 9 -
 arch/powerpc/include/asm/compat.h | 16 --
 arch/powerpc/kernel/syscalls/syscall.tbl | 10 -
 arch/s390/include/asm/compat.h | 10 -
 arch/s390/include/asm/uaccess.h | 3
 arch/s390/kernel/syscalls/syscall.tbl | 10 -
 arch/s390/lib/uaccess.c | 63 --------
 arch/sparc/include/asm/compat.h | 19 --
 arch/sparc/kernel/process_64.c | 2
 arch/sparc/kernel/signal32.c | 12 -
 arch/sparc/kernel/signal_64.c | 8 -
 arch/sparc/kernel/syscalls/syscall.tbl | 10 -
 arch/x86/entry/syscalls/syscall_32.tbl | 4
 arch/x86/entry/syscalls/syscall_64.tbl | 2
 arch/x86/include/asm/compat.h | 13 -
 arch/x86/include/asm/uaccess_64.h | 7
 include/linux/compat.h | 39 +---
 include/linux/uaccess.h | 10 -
 include/uapi/asm-generic/unistd.h | 10 -
 kernel/compat.c | 21 --
 kernel/kexec.c | 105 +++++--------
 kernel/sys_ni.c | 5
 mm/mempolicy.c | 213 +++++++-----------------
 mm/migrate.c | 69 +++++----
 mm/vmstat.c | 48 ++++++
 39 files changed, 243 insertions(+), 663 deletions(-)
* incoming
@ 2021-09-08 2:52 Andrew Morton
2021-09-08 8:57 ` incoming Vlastimil Babka
0 siblings, 1 reply; 349+ messages in thread
From: Andrew Morton @ 2021-09-08 2:52 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits

147 patches, based on 7d2a07b769330c34b4deabeed939325c77a7ec2f.

Subsystems affected by this patch series: mm/slub mm/memory-hotplug mm/rmap mm/ioremap mm/highmem mm/cleanups mm/secretmem mm/kfence mm/damon alpha percpu procfs misc core-kernel MAINTAINERS lib bitops checkpatch epoll init nilfs2 coredump fork pids criu kconfig selftests ipc mm/vmscan scripts

Subsystem: mm/slub

Vlastimil Babka <vbabka@suse.cz>:
Patch series "SLUB: reduce irq disabled scope and make it RT compatible", v6:
  mm, slub: don't call flush_all() from slab_debug_trace_open()
  mm, slub: allocate private object map for debugfs listings
  mm, slub: allocate private object map for validate_slab_cache()
  mm, slub: don't disable irq for debug_check_no_locks_freed()
  mm, slub: remove redundant unfreeze_partials() from put_cpu_partial()
  mm, slub: extract get_partial() from new_slab_objects()
  mm, slub: dissolve new_slab_objects() into ___slab_alloc()
  mm, slub: return slab page from get_partial() and set c->page afterwards
  mm, slub: restructure new page checks in ___slab_alloc()
  mm, slub: simplify kmem_cache_cpu and tid setup
  mm, slub: move disabling/enabling irqs to ___slab_alloc()
  mm, slub: do initial checks in ___slab_alloc() with irqs enabled
  mm, slub: move disabling irqs closer to get_partial() in ___slab_alloc()
  mm, slub: restore irqs around calling new_slab()
  mm, slub: validate slab from partial list or page allocator before making it cpu slab
  mm, slub: check new pages with restored irqs
  mm, slub: stop disabling irqs around get_partial()
  mm, slub: move reset of c->page and freelist out of deactivate_slab()
  mm, slub: make locking in deactivate_slab() irq-safe
  mm, slub: call deactivate_slab() without disabling irqs
  mm, slub: move irq control into unfreeze_partials()
  mm, slub: discard slabs in unfreeze_partials() without irqs disabled
  mm, slub: detach whole partial list at once in unfreeze_partials()
  mm, slub: separate detaching of partial list in unfreeze_partials() from unfreezing
  mm, slub: only disable irq with spin_lock in __unfreeze_partials()
  mm, slub: don't disable irqs in slub_cpu_dead()
  mm, slab: split out the cpu offline variant of flush_slab()

Sebastian Andrzej Siewior <bigeasy@linutronix.de>:
  mm: slub: move flush_cpu_slab() invocations __free_slab() invocations out of IRQ context
  mm: slub: make object_map_lock a raw_spinlock_t

Vlastimil Babka <vbabka@suse.cz>:
  mm, slub: make slab_lock() disable irqs with PREEMPT_RT
  mm, slub: protect put_cpu_partial() with disabled irqs instead of cmpxchg
  mm, slub: use migrate_disable() on PREEMPT_RT
  mm, slub: convert kmem_cpu_slab protection to local_lock

Subsystem: mm/memory-hotplug

David Hildenbrand <david@redhat.com>:
Patch series "memory-hotplug.rst: complete admin-guide overhaul", v3:
  memory-hotplug.rst: remove locking details from admin-guide
  memory-hotplug.rst: complete admin-guide overhaul

Mike Rapoport <rppt@linux.ibm.com>:
Patch series "mm: remove pfn_valid_within() and CONFIG_HOLES_IN_ZONE":
  mm: remove pfn_valid_within() and CONFIG_HOLES_IN_ZONE
  mm: memory_hotplug: cleanup after removal of pfn_valid_within()

David Hildenbrand <david@redhat.com>:
Patch series "mm/memory_hotplug: preparatory patches for new online policy and memory":
  mm/memory_hotplug: use "unsigned long" for PFN in zone_for_pfn_range()
  mm/memory_hotplug: remove nid parameter from arch_remove_memory()
  mm/memory_hotplug: remove nid parameter from remove_memory() and friends
  ACPI: memhotplug: memory resources cannot be enabled yet
Patch series "mm/memory_hotplug: "auto-movable" online policy and memory groups", v3:
  mm: track present early pages per zone
  mm/memory_hotplug: introduce "auto-movable" online policy
  drivers/base/memory: introduce "memory groups" to logically group memory blocks
  mm/memory_hotplug: track present pages in memory groups
  ACPI: memhotplug: use a single static memory group for a single memory device
  dax/kmem: use a single static memory group for a single probed unit
  virtio-mem: use a single dynamic memory group for a single virtio-mem device
  mm/memory_hotplug: memory group aware "auto-movable" online policy
  mm/memory_hotplug: improved dynamic memory group aware "auto-movable" online policy

Miaohe Lin <linmiaohe@huawei.com>:
Patch series "Cleanup and fixups for memory hotplug":
  mm/memory_hotplug: use helper zone_is_zone_device() to simplify the code

Subsystem: mm/rmap

Muchun Song <songmuchun@bytedance.com>:
  mm: remove redundant compound_head() calling

Subsystem: mm/ioremap

Christoph Hellwig <hch@lst.de>:
  riscv: only select GENERIC_IOREMAP if MMU support is enabled
Patch series "small ioremap cleanups":
  mm: move ioremap_page_range to vmalloc.c
  mm: don't allow executable ioremap mappings

Weizhao Ouyang <o451686892@gmail.com>:
  mm/early_ioremap.c: remove redundant early_ioremap_shutdown()

Subsystem: mm/highmem

Sebastian Andrzej Siewior <bigeasy@linutronix.de>:
  highmem: don't disable preemption on RT in kmap_atomic()

Subsystem: mm/cleanups

Changbin Du <changbin.du@gmail.com>:
  mm: in_irq() cleanup

Muchun Song <songmuchun@bytedance.com>:
  mm: introduce PAGEFLAGS_MASK to replace ((1UL << NR_PAGEFLAGS) - 1)

Subsystem: mm/secretmem

Jordy Zomer <jordy@jordyzomer.github.io>:
  mm/secretmem: use refcount_t instead of atomic_t

Subsystem: mm/kfence

Marco Elver <elver@google.com>:
  kfence: show cpu and timestamp in alloc/free info
  kfence: test: fail fast if disabled at boot

Subsystem: mm/damon

SeongJae Park <sjpark@amazon.de>:
Patch series "Introduce Data Access MONitor (DAMON)", v34:
  mm: introduce Data Access MONitor (DAMON)
  mm/damon/core: implement region-based sampling
  mm/damon: adaptively adjust regions
  mm/idle_page_tracking: make PG_idle reusable
  mm/damon: implement primitives for the virtual memory address spaces
  mm/damon: add a tracepoint
  mm/damon: implement a debugfs-based user space interface
  mm/damon/dbgfs: export kdamond pid to the user space
  mm/damon/dbgfs: support multiple contexts
  Documentation: add documents for DAMON
  mm/damon: add kunit tests
  mm/damon: add user space selftests
  MAINTAINERS: update for DAMON

Subsystem: alpha

Randy Dunlap <rdunlap@infradead.org>:
  alpha: agp: make empty macros use do-while-0 style
  alpha: pci-sysfs: fix all kernel-doc warnings

Subsystem: percpu

Greg Kroah-Hartman <gregkh@linuxfoundation.org>:
  percpu: remove export of pcpu_base_addr

Subsystem: procfs

Feng Zhou <zhoufeng.zf@bytedance.com>:
  fs/proc/kcore.c: add mmap interface

Christoph Hellwig <hch@lst.de>:
  proc: stop using seq_get_buf in proc_task_name

Ohhoon Kwon <ohoono.kwon@samsung.com>:
  connector: send event on write to /proc/[pid]/comm

Subsystem: misc

Colin Ian King <colin.king@canonical.com>:
  arch: Kconfig: fix spelling mistake "seperate" -> "separate"

Andy Shevchenko <andriy.shevchenko@linux.intel.com>:
  include/linux/once.h: fix trivia typo Not -> Note

Daniel Lezcano <daniel.lezcano@linaro.org>:
Patch series "Add Hz macros", v3:
  units: change from 'L' to 'UL'
  units: add the HZ macros
  thermal/drivers/devfreq_cooling: use HZ macros
  devfreq: use HZ macros
  iio/drivers/as73211: use HZ macros
  hwmon/drivers/mr75203: use HZ macros
  iio/drivers/hid-sensor: use HZ macros
  i2c/drivers/ov02q10: use HZ macros
  mtd/drivers/nand: use HZ macros
  phy/drivers/stm32: use HZ macros

Subsystem: core-kernel

Yang Yang <yang.yang29@zte.com.cn>:
  kernel/acct.c: use dedicated helper to access rlimit values

Pavel Skripkin <paskripkin@gmail.com>:
  profiling: fix shift-out-of-bounds bugs

Subsystem: MAINTAINERS

Nathan Chancellor <nathan@kernel.org>:
  MAINTAINERS: update ClangBuiltLinux mailing list
  Documentation/llvm: update mailing list
  Documentation/llvm: update IRC location

Subsystem: lib

Geert Uytterhoeven <geert@linux-m68k.org>:
Patch series "math: RATIONAL and RATIONAL_KUNIT_TEST improvements":
  math: make RATIONAL tristate
  math: RATIONAL_KUNIT_TEST should depend on RATIONAL instead of selecting it

Matteo Croce <mcroce@microsoft.com>:
Patch series "lib/string: optimized mem* functions", v2:
  lib/string: optimized memcpy
  lib/string: optimized memmove
  lib/string: optimized memset

Daniel Latypov <dlatypov@google.com>:
  lib/test: convert test_sort.c to use KUnit

Randy Dunlap <rdunlap@infradead.org>:
  lib/dump_stack: correct kernel-doc notation
  lib/iov_iter.c: fix kernel-doc warnings

Subsystem: bitops

Yury Norov <yury.norov@gmail.com>:
Patch series "Resend bitmap patches":
  bitops: protect find_first_{,zero}_bit properly
  bitops: move find_bit_*_le functions from le.h to find.h
  include: move find.h from asm_generic to linux
  arch: remove GENERIC_FIND_FIRST_BIT entirely
  lib: add find_first_and_bit()
  cpumask: use find_first_and_bit()
  all: replace find_next{,_zero}_bit with find_first{,_zero}_bit where appropriate
  tools: sync tools/bitmap with mother linux
  cpumask: replace cpumask_next_* with cpumask_first_* where appropriate
  include/linux: move for_each_bit() macros from bitops.h to find.h
  find: micro-optimize for_each_{set,clear}_bit()
  bitops: replace for_each_*_bit_from() with for_each_*_bit() where appropriate

Andy Shevchenko <andriy.shevchenko@linux.intel.com>:
  tools: rename bitmap_alloc() to bitmap_zalloc()

Yury Norov <yury.norov@gmail.com>:
  mm/percpu: micro-optimize pcpu_is_populated()
  bitmap: unify find_bit operations
  lib: bitmap: add performance test for bitmap_print_to_pagebuf
  vsprintf: rework bitmap_list_string

Subsystem: checkpatch

Joe Perches <joe@perches.com>:
  checkpatch: support wide strings

Mimi Zohar <zohar@linux.ibm.com>:
  checkpatch: make email address check case insensitive

Joe Perches <joe@perches.com>:
  checkpatch: improve GIT_COMMIT_ID test

Subsystem: epoll

Nicholas Piggin <npiggin@gmail.com>:
  fs/epoll: use a per-cpu counter for user's watches count

Subsystem: init

Rasmus Villemoes <linux@rasmusvillemoes.dk>:
  init: move usermodehelper_enable() to populate_rootfs()

Kefeng Wang <wangkefeng.wang@huawei.com>:
  trap: cleanup trap_init()

Subsystem: nilfs2

Nanyong Sun <sunnanyong@huawei.com>:
Patch series "nilfs2: fix incorrect usage of kobject":
  nilfs2: fix memory leak in nilfs_sysfs_create_device_group
  nilfs2: fix NULL pointer in nilfs_##name##_attr_release
  nilfs2: fix memory leak in nilfs_sysfs_create_##name##_group
  nilfs2: fix memory leak in nilfs_sysfs_delete_##name##_group
  nilfs2: fix memory leak in nilfs_sysfs_create_snapshot_group
  nilfs2: fix memory leak in nilfs_sysfs_delete_snapshot_group

Zhen Lei <thunder.leizhen@huawei.com>:
  nilfs2: use refcount_dec_and_lock() to fix potential UAF

Subsystem: coredump

David Oberhollenzer <david.oberhollenzer@sigma-star.at>:
  fs/coredump.c: log if a core dump is aborted due to changed file permissions

QiuXi <qiuxi1@huawei.com>:
  coredump: fix memleak in dump_vma_snapshot()

Subsystem: fork

Christoph Hellwig <hch@lst.de>:
  kernel/fork.c: unexport get_{mm,task}_exe_file

Subsystem: pids

Takahiro Itazuri <itazur@amazon.com>:
  pid: cleanup the stale comment mentioning pidmap_init().
Subsystem: criu Cyrill Gorcunov <gorcunov@gmail.com>: prctl: allow to setup brk for et_dyn executables Subsystem: kconfig Zenghui Yu <yuzenghui@huawei.com>: configs: remove the obsolete CONFIG_INPUT_POLLDEV Lukas Bulwahn <lukas.bulwahn@gmail.com>: Kconfig.debug: drop selecting non-existing HARDLOCKUP_DETECTOR_ARCH Subsystem: selftests Greg Thelen <gthelen@google.com>: selftests/memfd: remove unused variable Subsystem: ipc Rafael Aquini <aquini@redhat.com>: ipc: replace costly bailout check in sysvipc_find_ipc() Subsystem: mm/vmscan Randy Dunlap <rdunlap@infradead.org>: mm/workingset: correct kernel-doc notations Subsystem: scripts Randy Dunlap <rdunlap@infradead.org>: scripts: check_extable: fix typo in user error message a/Documentation/admin-guide/mm/damon/index.rst | 15 a/Documentation/admin-guide/mm/damon/start.rst | 114 + a/Documentation/admin-guide/mm/damon/usage.rst | 112 + a/Documentation/admin-guide/mm/index.rst | 1 a/Documentation/admin-guide/mm/memory-hotplug.rst | 842 ++++++----- a/Documentation/dev-tools/kfence.rst | 98 - a/Documentation/kbuild/llvm.rst | 5 a/Documentation/vm/damon/api.rst | 20 a/Documentation/vm/damon/design.rst | 166 ++ a/Documentation/vm/damon/faq.rst | 51 a/Documentation/vm/damon/index.rst | 30 a/Documentation/vm/index.rst | 1 a/MAINTAINERS | 17 a/arch/Kconfig | 2 a/arch/alpha/include/asm/agp.h | 4 a/arch/alpha/include/asm/bitops.h | 2 a/arch/alpha/kernel/pci-sysfs.c | 12 a/arch/arc/Kconfig | 1 a/arch/arc/include/asm/bitops.h | 1 a/arch/arc/kernel/traps.c | 5 a/arch/arm/configs/dove_defconfig | 1 a/arch/arm/configs/pxa_defconfig | 1 a/arch/arm/include/asm/bitops.h | 1 a/arch/arm/kernel/traps.c | 5 a/arch/arm64/Kconfig | 1 a/arch/arm64/include/asm/bitops.h | 1 a/arch/arm64/mm/mmu.c | 3 a/arch/csky/include/asm/bitops.h | 1 a/arch/h8300/include/asm/bitops.h | 1 a/arch/h8300/kernel/traps.c | 4 a/arch/hexagon/include/asm/bitops.h | 1 a/arch/hexagon/kernel/traps.c | 4 a/arch/ia64/include/asm/bitops.h | 2 a/arch/ia64/mm/init.c | 3 
a/arch/m68k/include/asm/bitops.h | 2 a/arch/mips/Kconfig | 1 a/arch/mips/configs/lemote2f_defconfig | 1 a/arch/mips/configs/pic32mzda_defconfig | 1 a/arch/mips/configs/rt305x_defconfig | 1 a/arch/mips/configs/xway_defconfig | 1 a/arch/mips/include/asm/bitops.h | 1 a/arch/nds32/kernel/traps.c | 5 a/arch/nios2/kernel/traps.c | 5 a/arch/openrisc/include/asm/bitops.h | 1 a/arch/openrisc/kernel/traps.c | 5 a/arch/parisc/configs/generic-32bit_defconfig | 1 a/arch/parisc/include/asm/bitops.h | 2 a/arch/parisc/kernel/traps.c | 4 a/arch/powerpc/include/asm/bitops.h | 2 a/arch/powerpc/include/asm/cputhreads.h | 2 a/arch/powerpc/kernel/traps.c | 5 a/arch/powerpc/mm/mem.c | 3 a/arch/powerpc/platforms/pasemi/dma_lib.c | 4 a/arch/powerpc/platforms/pseries/hotplug-memory.c | 9 a/arch/riscv/Kconfig | 2 a/arch/riscv/include/asm/bitops.h | 1 a/arch/riscv/kernel/traps.c | 5 a/arch/s390/Kconfig | 1 a/arch/s390/include/asm/bitops.h | 1 a/arch/s390/kvm/kvm-s390.c | 2 a/arch/s390/mm/init.c | 3 a/arch/sh/include/asm/bitops.h | 1 a/arch/sh/mm/init.c | 3 a/arch/sparc/include/asm/bitops_32.h | 1 a/arch/sparc/include/asm/bitops_64.h | 2 a/arch/um/kernel/trap.c | 4 a/arch/x86/Kconfig | 1 a/arch/x86/configs/i386_defconfig | 1 a/arch/x86/configs/x86_64_defconfig | 1 a/arch/x86/include/asm/bitops.h | 2 a/arch/x86/kernel/apic/vector.c | 4 a/arch/x86/mm/init_32.c | 3 a/arch/x86/mm/init_64.c | 3 a/arch/x86/um/Kconfig | 1 a/arch/xtensa/include/asm/bitops.h | 1 a/block/blk-mq.c | 2 a/drivers/acpi/acpi_memhotplug.c | 46 a/drivers/base/memory.c | 231 ++- a/drivers/base/node.c | 2 a/drivers/block/rnbd/rnbd-clt.c | 2 a/drivers/dax/kmem.c | 43 a/drivers/devfreq/devfreq.c | 2 a/drivers/dma/ti/edma.c | 2 a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 4 a/drivers/hwmon/ltc2992.c | 3 a/drivers/hwmon/mr75203.c | 2 a/drivers/iio/adc/ad7124.c | 2 a/drivers/iio/common/hid-sensors/hid-sensor-attributes.c | 3 a/drivers/iio/light/as73211.c | 3 a/drivers/infiniband/hw/irdma/hw.c | 16 a/drivers/media/cec/core/cec-core.c | 2 
a/drivers/media/i2c/ov02a10.c | 2 a/drivers/media/mc/mc-devnode.c | 2 a/drivers/mmc/host/renesas_sdhi_core.c | 2 a/drivers/mtd/nand/raw/intel-nand-controller.c | 2 a/drivers/net/virtio_net.c | 2 a/drivers/pci/controller/dwc/pci-dra7xx.c | 2 a/drivers/phy/st/phy-stm32-usbphyc.c | 2 a/drivers/scsi/lpfc/lpfc_sli.c | 10 a/drivers/soc/fsl/qbman/bman_portal.c | 2 a/drivers/soc/fsl/qbman/qman_portal.c | 2 a/drivers/soc/ti/k3-ringacc.c | 4 a/drivers/thermal/devfreq_cooling.c | 2 a/drivers/tty/n_tty.c | 2 a/drivers/virt/acrn/ioreq.c | 3 a/drivers/virtio/virtio_mem.c | 26 a/fs/coredump.c | 15 a/fs/eventpoll.c | 18 a/fs/f2fs/segment.c | 8 a/fs/nilfs2/sysfs.c | 26 a/fs/nilfs2/the_nilfs.c | 9 a/fs/ocfs2/cluster/heartbeat.c | 2 a/fs/ocfs2/dlm/dlmdomain.c | 4 a/fs/ocfs2/dlm/dlmmaster.c | 18 a/fs/ocfs2/dlm/dlmrecovery.c | 2 a/fs/ocfs2/dlm/dlmthread.c | 2 a/fs/proc/array.c | 18 a/fs/proc/base.c | 5 a/fs/proc/kcore.c | 73 a/include/asm-generic/bitops.h | 1 a/include/asm-generic/bitops/find.h | 198 -- a/include/asm-generic/bitops/le.h | 64 a/include/asm-generic/early_ioremap.h | 6 a/include/linux/bitmap.h | 34 a/include/linux/bitops.h | 34 a/include/linux/cpumask.h | 46 a/include/linux/damon.h | 290 +++ a/include/linux/find.h | 134 + a/include/linux/highmem-internal.h | 27 a/include/linux/memory.h | 55 a/include/linux/memory_hotplug.h | 40 a/include/linux/mmzone.h | 19 a/include/linux/once.h | 2 a/include/linux/page-flags.h | 17 a/include/linux/page_ext.h | 2 a/include/linux/page_idle.h | 6 a/include/linux/pagemap.h | 7 a/include/linux/sched/user.h | 3 a/include/linux/slub_def.h | 6 a/include/linux/threads.h | 2 a/include/linux/units.h | 10 a/include/linux/vmalloc.h | 3 a/include/trace/events/damon.h | 43 a/include/trace/events/mmflags.h | 2 a/include/trace/events/page_ref.h | 4 a/init/initramfs.c | 2 a/init/main.c | 3 a/init/noinitramfs.c | 2 a/ipc/util.c | 16 a/kernel/acct.c | 2 a/kernel/fork.c | 2 a/kernel/profile.c | 21 a/kernel/sys.c | 7 a/kernel/time/clocksource.c | 4 
a/kernel/user.c | 25 a/lib/Kconfig | 3 a/lib/Kconfig.debug | 9 a/lib/dump_stack.c | 3 a/lib/find_bit.c | 21 a/lib/find_bit_benchmark.c | 21 a/lib/genalloc.c | 2 a/lib/iov_iter.c | 8 a/lib/math/Kconfig | 2 a/lib/math/rational.c | 3 a/lib/string.c | 130 + a/lib/test_bitmap.c | 37 a/lib/test_printf.c | 2 a/lib/test_sort.c | 40 a/lib/vsprintf.c | 26 a/mm/Kconfig | 15 a/mm/Makefile | 4 a/mm/compaction.c | 20 a/mm/damon/Kconfig | 68 a/mm/damon/Makefile | 5 a/mm/damon/core-test.h | 253 +++ a/mm/damon/core.c | 748 ++++++++++ a/mm/damon/dbgfs-test.h | 126 + a/mm/damon/dbgfs.c | 631 ++++++++ a/mm/damon/vaddr-test.h | 329 ++++ a/mm/damon/vaddr.c | 672 +++++++++ a/mm/early_ioremap.c | 5 a/mm/highmem.c | 2 a/mm/ioremap.c | 25 a/mm/kfence/core.c | 3 a/mm/kfence/kfence.h | 2 a/mm/kfence/kfence_test.c | 3 a/mm/kfence/report.c | 19 a/mm/kmemleak.c | 2 a/mm/memory_hotplug.c | 396 ++++- a/mm/memremap.c | 5 a/mm/page_alloc.c | 27 a/mm/page_ext.c | 12 a/mm/page_idle.c | 10 a/mm/page_isolation.c | 7 a/mm/page_owner.c | 14 a/mm/percpu.c | 36 a/mm/rmap.c | 6 a/mm/secretmem.c | 9 a/mm/slab_common.c | 2 a/mm/slub.c | 1023 +++++++++----- a/mm/vmalloc.c | 24 a/mm/workingset.c | 2 a/net/ncsi/ncsi-manage.c | 4 a/scripts/check_extable.sh | 2 a/scripts/checkpatch.pl | 93 - a/tools/include/linux/bitmap.h | 4 a/tools/perf/bench/find-bit-bench.c | 2 a/tools/perf/builtin-c2c.c | 6 a/tools/perf/builtin-record.c | 2 a/tools/perf/tests/bitmap.c | 2 a/tools/perf/tests/mem2node.c | 2 a/tools/perf/util/affinity.c | 4 a/tools/perf/util/header.c | 4 a/tools/perf/util/metricgroup.c | 2 a/tools/perf/util/mmap.c | 4 a/tools/testing/selftests/damon/Makefile | 7 a/tools/testing/selftests/damon/_chk_dependency.sh | 28 a/tools/testing/selftests/damon/debugfs_attrs.sh | 75 + a/tools/testing/selftests/kvm/dirty_log_perf_test.c | 2 a/tools/testing/selftests/kvm/dirty_log_test.c | 4 a/tools/testing/selftests/kvm/x86_64/vmx_dirty_log_test.c | 2 a/tools/testing/selftests/memfd/memfd_test.c | 2 b/MAINTAINERS | 2 
b/tools/include/asm-generic/bitops.h | 1 b/tools/include/linux/bitmap.h | 7 b/tools/include/linux/find.h | 81 + b/tools/lib/find_bit.c | 20 227 files changed, 6695 insertions(+), 1875 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming
  2021-09-08  2:52 incoming Andrew Morton
@ 2021-09-08  8:57 ` Vlastimil Babka
  0 siblings, 0 replies; 349+ messages in thread
From: Vlastimil Babka @ 2021-09-08 8:57 UTC (permalink / raw)
To: Andrew Morton, Linus Torvalds
Cc: linux-mm, mm-commits, Mike Galbraith, Mel Gorman

On 9/8/21 04:52, Andrew Morton wrote:
> Subsystem: mm/slub
>
> Vlastimil Babka <vbabka@suse.cz>:
> Patch series "SLUB: reduce irq disabled scope and make it RT compatible", v6:
> mm, slub: don't call flush_all() from slab_debug_trace_open()
> mm, slub: allocate private object map for debugfs listings
> mm, slub: allocate private object map for validate_slab_cache()
> mm, slub: don't disable irq for debug_check_no_locks_freed()
> mm, slub: remove redundant unfreeze_partials() from put_cpu_partial()
> mm, slub: extract get_partial() from new_slab_objects()
> mm, slub: dissolve new_slab_objects() into ___slab_alloc()
> mm, slub: return slab page from get_partial() and set c->page afterwards
> mm, slub: restructure new page checks in ___slab_alloc()
> mm, slub: simplify kmem_cache_cpu and tid setup
> mm, slub: move disabling/enabling irqs to ___slab_alloc()
> mm, slub: do initial checks in ___slab_alloc() with irqs enabled
> mm, slub: move disabling irqs closer to get_partial() in ___slab_alloc()
> mm, slub: restore irqs around calling new_slab()
> mm, slub: validate slab from partial list or page allocator before making it cpu slab
> mm, slub: check new pages with restored irqs
> mm, slub: stop disabling irqs around get_partial()
> mm, slub: move reset of c->page and freelist out of deactivate_slab()
> mm, slub: make locking in deactivate_slab() irq-safe
> mm, slub: call deactivate_slab() without disabling irqs
> mm, slub: move irq control into unfreeze_partials()
> mm, slub: discard slabs in unfreeze_partials() without irqs disabled
> mm, slub: detach whole partial list at once in unfreeze_partials()
> mm, slub: separate detaching of partial list in unfreeze_partials() from unfreezing
> mm, slub: only disable irq with spin_lock in __unfreeze_partials()
> mm, slub: don't disable irqs in slub_cpu_dead()
> mm, slab: split out the cpu offline variant of flush_slab()
>
> Sebastian Andrzej Siewior <bigeasy@linutronix.de>:
> mm: slub: move flush_cpu_slab() invocations __free_slab() invocations out of IRQ context
> mm: slub: make object_map_lock a raw_spinlock_t
>
> Vlastimil Babka <vbabka@suse.cz>:
> mm, slub: make slab_lock() disable irqs with PREEMPT_RT
> mm, slub: protect put_cpu_partial() with disabled irqs instead of cmpxchg
> mm, slub: use migrate_disable() on PREEMPT_RT
> mm, slub: convert kmem_cpu_slab protection to local_lock

For my own peace of mind, I've checked that this part (patches 1 to 33) is identical to the v6 posting [1] and git version [2] that Mel and Mike tested (replies to [1]).

[1] https://lore.kernel.org/all/20210904105003.11688-1-vbabka@suse.cz/
[2] git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git tags/mm-slub-5.15-rc1
* incoming @ 2021-09-02 21:48 Andrew Morton 2021-09-02 21:49 ` incoming Andrew Morton 0 siblings, 1 reply; 349+ messages in thread From: Andrew Morton @ 2021-09-02 21:48 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits 212 patches, based on 4a3bb4200a5958d76cc26ebe4db4257efa56812b. Subsystems affected by this patch series: ia64 ocfs2 block mm/slub mm/debug mm/pagecache mm/gup mm/swap mm/shmem mm/memcg mm/selftests mm/pagemap mm/mremap mm/bootmem mm/sparsemem mm/vmalloc mm/kasan mm/pagealloc mm/memory-failure mm/hugetlb mm/userfaultfd mm/vmscan mm/compaction mm/mempolicy mm/memblock mm/oom-kill mm/migration mm/ksm mm/percpu mm/vmstat mm/madvise Subsystem: ia64 Jason Wang <wangborong@cdjrlc.com>: ia64: fix typo in a comment Geert Uytterhoeven <geert+renesas@glider.be>: Patch series "ia64: Miscellaneous fixes and cleanups": ia64: fix #endif comment for reserve_elfcorehdr() ia64: make reserve_elfcorehdr() static ia64: make num_rsvd_regions static Subsystem: ocfs2 Dan Carpenter <dan.carpenter@oracle.com>: ocfs2: remove an unnecessary condition Tuo Li <islituo@gmail.com>: ocfs2: quota_local: fix possible uninitialized-variable access in ocfs2_local_read_info() Gang He <ghe@suse.com>: ocfs2: ocfs2_downconvert_lock failure results in deadlock Subsystem: block kernel test robot <lkp@intel.com>: arch/csky/kernel/probes/kprobes.c: fix bugon.cocci warnings Subsystem: mm/slub Vlastimil Babka <vbabka@suse.cz>: Patch series "SLUB: reduce irq disabled scope and make it RT compatible", v4: mm, slub: don't call flush_all() from slab_debug_trace_open() mm, slub: allocate private object map for debugfs listings mm, slub: allocate private object map for validate_slab_cache() mm, slub: don't disable irq for debug_check_no_locks_freed() mm, slub: remove redundant unfreeze_partials() from put_cpu_partial() mm, slub: unify cmpxchg_double_slab() and __cmpxchg_double_slab() mm, slub: extract get_partial() from new_slab_objects() mm, slub: dissolve new_slab_objects() into 
___slab_alloc() mm, slub: return slab page from get_partial() and set c->page afterwards mm, slub: restructure new page checks in ___slab_alloc() mm, slub: simplify kmem_cache_cpu and tid setup mm, slub: move disabling/enabling irqs to ___slab_alloc() mm, slub: do initial checks in ___slab_alloc() with irqs enabled mm, slub: move disabling irqs closer to get_partial() in ___slab_alloc() mm, slub: restore irqs around calling new_slab() mm, slub: validate slab from partial list or page allocator before making it cpu slab mm, slub: check new pages with restored irqs mm, slub: stop disabling irqs around get_partial() mm, slub: move reset of c->page and freelist out of deactivate_slab() mm, slub: make locking in deactivate_slab() irq-safe mm, slub: call deactivate_slab() without disabling irqs mm, slub: move irq control into unfreeze_partials() mm, slub: discard slabs in unfreeze_partials() without irqs disabled mm, slub: detach whole partial list at once in unfreeze_partials() mm, slub: separate detaching of partial list in unfreeze_partials() from unfreezing mm, slub: only disable irq with spin_lock in __unfreeze_partials() mm, slub: don't disable irqs in slub_cpu_dead() mm, slab: make flush_slab() possible to call with irqs enabled Sebastian Andrzej Siewior <bigeasy@linutronix.de>: mm: slub: move flush_cpu_slab() invocations __free_slab() invocations out of IRQ context mm: slub: make object_map_lock a raw_spinlock_t Vlastimil Babka <vbabka@suse.cz>: mm, slub: optionally save/restore irqs in slab_[un]lock()/ mm, slub: make slab_lock() disable irqs with PREEMPT_RT mm, slub: protect put_cpu_partial() with disabled irqs instead of cmpxchg mm, slub: use migrate_disable() on PREEMPT_RT mm, slub: convert kmem_cpu_slab protection to local_lock Subsystem: mm/debug Gavin Shan <gshan@redhat.com>: Patch series "mm/debug_vm_pgtable: Enhancements", v6: mm/debug_vm_pgtable: introduce struct pgtable_debug_args mm/debug_vm_pgtable: use struct pgtable_debug_args in basic tests 
mm/debug_vm_pgtable: use struct pgtable_debug_args in leaf and savewrite tests mm/debug_vm_pgtable: use struct pgtable_debug_args in protnone and devmap tests mm/debug_vm_pgtable: use struct pgtable_debug_args in soft_dirty and swap tests mm/debug_vm_pgtable: use struct pgtable_debug_args in migration and thp tests mm/debug_vm_pgtable: use struct pgtable_debug_args in PTE modifying tests mm/debug_vm_pgtable: use struct pgtable_debug_args in PMD modifying tests mm/debug_vm_pgtable: use struct pgtable_debug_args in PUD modifying tests mm/debug_vm_pgtable: use struct pgtable_debug_args in PGD and P4D modifying tests mm/debug_vm_pgtable: remove unused code mm/debug_vm_pgtable: fix corrupted page flag "Matthew Wilcox (Oracle)" <willy@infradead.org>: mm: report a more useful address for reclaim acquisition liuhailong <liuhailong@oppo.com>: mm: add kernel_misc_reclaimable in show_free_areas Subsystem: mm/pagecache Jan Kara <jack@suse.cz>: Patch series "writeback: Fix bandwidth estimates", v4: writeback: track number of inodes under writeback writeback: reliably update bandwidth estimation writeback: fix bandwidth estimate for spiky workload writeback: rename domain_update_bandwidth() writeback: use READ_ONCE for unlocked reads of writeback stats Johannes Weiner <hannes@cmpxchg.org>: mm: remove irqsave/restore locking from contexts with irqs enabled fs: drop_caches: fix skipping over shadow cache inodes fs: inode: count invalidated shadow pages in pginodesteal Shakeel Butt <shakeelb@google.com>: writeback: memcg: simplify cgroup_writeback_by_id Jing Yangyang <jing.yangyang@zte.com.cn>: include/linux/buffer_head.h: fix boolreturn.cocci warnings Subsystem: mm/gup Miaohe Lin <linmiaohe@huawei.com>: Patch series "Cleanups and fixup for gup": mm: gup: remove set but unused local variable major mm: gup: remove unneed local variable orig_refs mm: gup: remove useless BUG_ON in __get_user_pages() mm: gup: fix potential pgmap refcnt leak in __gup_device_huge() mm: gup: use helper 
PAGE_ALIGNED in populate_vma_page_range() John Hubbard <jhubbard@nvidia.com>: Patch series "A few gup refactorings and documentation updates", v3: mm/gup: documentation corrections for gup/pup mm/gup: small refactoring: simplify try_grab_page() mm/gup: remove try_get_page(), call try_get_compound_head() directly Subsystem: mm/swap Hugh Dickins <hughd@google.com>: fs, mm: fix race in unlinking swapfile John Hubbard <jhubbard@nvidia.com>: mm: delete unused get_kernel_page() Subsystem: mm/shmem Sebastian Andrzej Siewior <bigeasy@linutronix.de>: shmem: use raw_spinlock_t for ->stat_lock Miaohe Lin <linmiaohe@huawei.com>: Patch series "Cleanups for shmem": shmem: remove unneeded variable ret shmem: remove unneeded header file shmem: remove unneeded function forward declaration shmem: include header file to declare swap_info Hugh Dickins <hughd@google.com>: Patch series "huge tmpfs: shmem_is_huge() fixes and cleanups": huge tmpfs: fix fallocate(vanilla) advance over huge pages huge tmpfs: fix split_huge_page() after FALLOC_FL_KEEP_SIZE huge tmpfs: remove shrinklist addition from shmem_setattr() huge tmpfs: revert shmem's use of transhuge_vma_enabled() huge tmpfs: move shmem_huge_enabled() upwards huge tmpfs: SGP_NOALLOC to stop collapse_file() on race huge tmpfs: shmem_is_huge(vma, inode, index) huge tmpfs: decide stat.st_blksize by shmem_is_huge() shmem: shmem_writepage() split unlikely i915 THP Subsystem: mm/memcg Suren Baghdasaryan <surenb@google.com>: mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config mm, memcg: inline swap-related functions to improve disabled memcg config Vasily Averin <vvs@virtuozzo.com>: memcg: enable accounting for pids in nested pid namespaces Shakeel Butt <shakeelb@google.com>: memcg: switch lruvec stats to rstat memcg: infrastructure to flush memcg stats Yutian Yang <nglaive@gmail.com>: memcg: charge fs_context and legacy_fs_context 
Vasily Averin <vvs@virtuozzo.com>: Patch series "memcg accounting from OpenVZ", v7: memcg: enable accounting for mnt_cache entries memcg: enable accounting for pollfd and select bits arrays memcg: enable accounting for file lock caches memcg: enable accounting for fasync_cache memcg: enable accounting for new namesapces and struct nsproxy memcg: enable accounting of ipc resources memcg: enable accounting for signals memcg: enable accounting for posix_timers_cache slab memcg: enable accounting for ldt_struct objects Shakeel Butt <shakeelb@google.com>: memcg: cleanup racy sum avoidance code Vasily Averin <vvs@virtuozzo.com>: memcg: replace in_interrupt() by !in_task() in active_memcg() Baolin Wang <baolin.wang@linux.alibaba.com>: mm: memcontrol: set the correct memcg swappiness restriction Miaohe Lin <linmiaohe@huawei.com>: mm, memcg: remove unused functions mm, memcg: save some atomic ops when flush is already true Michal Hocko <mhocko@suse.com>: memcg: fix up drain_local_stock comment Shakeel Butt <shakeelb@google.com>: memcg: make memcg->event_list_lock irqsafe Subsystem: mm/selftests Po-Hsu Lin <po-hsu.lin@canonical.com>: selftests/vm: use kselftest skip code for skipped tests Colin Ian King <colin.king@canonical.com>: selftests: Fix spelling mistake "cann't" -> "cannot" Subsystem: mm/pagemap Nicholas Piggin <npiggin@gmail.com>: Patch series "shoot lazy tlbs", v4: lazy tlb: introduce lazy mm refcount helper functions lazy tlb: allow lazy tlb mm refcounting to be configurable lazy tlb: shoot lazies, a non-refcounting lazy tlb option powerpc/64s: enable MMU_LAZY_TLB_SHOOTDOWN Christoph Hellwig <hch@lst.de>: Patch series "_kernel_dcache_page fixes and removal": mmc: JZ4740: remove the flush_kernel_dcache_page call in jz4740_mmc_read_data mmc: mmc_spi: replace flush_kernel_dcache_page with flush_dcache_page scatterlist: replace flush_kernel_dcache_page with flush_dcache_page mm: remove flush_kernel_dcache_page Huang Ying <ying.huang@intel.com>: 
mm,do_huge_pmd_numa_page: remove unnecessary TLB flushing code Greg Kroah-Hartman <gregkh@linuxfoundation.org>: mm: change fault_in_pages_* to have an unsigned size parameter Luigi Rizzo <lrizzo@google.com>: mm/pagemap: add mmap_assert_locked() annotations to find_vma*() "Liam R. Howlett" <Liam.Howlett@Oracle.com>: remap_file_pages: Use vma_lookup() instead of find_vma() Subsystem: mm/mremap Chen Wandun <chenwandun@huawei.com>: mm/mremap: fix memory account on do_munmap() failure Subsystem: mm/bootmem Muchun Song <songmuchun@bytedance.com>: mm/bootmem_info.c: mark __init on register_page_bootmem_info_section Subsystem: mm/sparsemem Ohhoon Kwon <ohoono.kwon@samsung.com>: Patch series "mm: sparse: remove __section_nr() function", v4: mm: sparse: pass section_nr to section_mark_present mm: sparse: pass section_nr to find_memory_block mm: sparse: remove __section_nr() function Naoya Horiguchi <naoya.horiguchi@nec.com>: mm/sparse: set SECTION_NID_SHIFT to 6 Matthew Wilcox <willy@infradead.org>: include/linux/mmzone.h: avoid a warning in sparse memory support Miles Chen <miles.chen@mediatek.com>: mm/sparse: clarify pgdat_to_phys Subsystem: mm/vmalloc "Uladzislau Rezki (Sony)" <urezki@gmail.com>: mm/vmalloc: use batched page requests in bulk-allocator mm/vmalloc: remove gfpflags_allow_blocking() check lib/test_vmalloc.c: add a new 'nr_pages' parameter Chen Wandun <chenwandun@huawei.com>: mm/vmalloc: fix wrong behavior in vread Subsystem: mm/kasan Woody Lin <woodylin@google.com>: mm/kasan: move kasan.fault to mm/kasan/report.c Andrey Konovalov <andreyknvl@gmail.com>: Patch series "kasan: test: avoid crashing the kernel with HW_TAGS", v2: kasan: test: rework kmalloc_oob_right kasan: test: avoid writing invalid memory kasan: test: avoid corrupting memory via memset kasan: test: disable kmalloc_memmove_invalid_size for HW_TAGS kasan: test: only do kmalloc_uaf_memset for generic mode kasan: test: clean up ksize_uaf kasan: test: avoid corrupting memory in copy_user_test kasan: 
test: avoid corrupting memory in kasan_rcu_uaf Subsystem: mm/pagealloc Mike Rapoport <rppt@linux.ibm.com>: Patch series "mm: ensure consistency of memory map poisoning": mm/page_alloc: always initialize memory map for the holes microblaze: simplify pte_alloc_one_kernel() mm: introduce memmap_alloc() to unify memory map allocation memblock: stop poisoning raw allocations Nico Pache <npache@redhat.com>: mm/page_alloc.c: fix 'zone_id' may be used uninitialized in this function warning Mike Rapoport <rppt@linux.ibm.com>: mm/page_alloc: make alloc_node_mem_map() __init rather than __ref Vasily Averin <vvs@virtuozzo.com>: mm/page_alloc.c: use in_task() "George G. Davis" <davis.george@siemens.com>: mm/page_isolation: tracing: trace all test_pages_isolated failures Subsystem: mm/memory-failure Miaohe Lin <linmiaohe@huawei.com>: Patch series "Cleanups and fixup for hwpoison": mm/hwpoison: remove unneeded variable unmap_success mm/hwpoison: fix potential pte_unmap_unlock pte error mm/hwpoison: change argument struct page **hpagep to *hpage mm/hwpoison: fix some obsolete comments Yang Shi <shy828301@gmail.com>: mm: hwpoison: don't drop slab caches for offlining non-LRU page doc: hwpoison: correct the support for hugepage mm: hwpoison: dump page for unhandlable page Michael Wang <yun.wang@linux.alibaba.com>: mm: fix panic caused by __page_handle_poison() Subsystem: mm/hugetlb Mike Kravetz <mike.kravetz@oracle.com>: hugetlb: simplify prep_compound_gigantic_page ref count racing code hugetlb: drop ref count earlier after page allocation hugetlb: before freeing hugetlb page set dtor to appropriate value hugetlb: fix hugetlb cgroup refcounting during vma split Subsystem: mm/userfaultfd Nadav Amit <namit@vmware.com>: Patch series "userfaultfd: minor bug fixes": userfaultfd: change mmap_changing to atomic userfaultfd: prevent concurrent API initialization selftests/vm/userfaultfd: wake after copy failure Subsystem: mm/vmscan Dave Hansen <dave.hansen@linux.intel.com>: Patch series 
"Migrate Pages in lieu of discard", v11: mm/numa: automatically generate node migration order mm/migrate: update node demotion order on hotplug events Yang Shi <yang.shi@linux.alibaba.com>: mm/migrate: enable returning precise migrate_pages() success count Dave Hansen <dave.hansen@linux.intel.com>: mm/migrate: demote pages during reclaim Yang Shi <yang.shi@linux.alibaba.com>: mm/vmscan: add page demotion counter Dave Hansen <dave.hansen@linux.intel.com>: mm/vmscan: add helper for querying ability to age anonymous pages Keith Busch <kbusch@kernel.org>: mm/vmscan: Consider anonymous pages without swap Dave Hansen <dave.hansen@linux.intel.com>: mm/vmscan: never demote for memcg reclaim Huang Ying <ying.huang@intel.com>: mm/migrate: add sysfs interface to enable reclaim migration Hui Su <suhui@zeku.com>: mm/vmpressure: replace vmpressure_to_css() with vmpressure_to_memcg() Miaohe Lin <linmiaohe@huawei.com>: Patch series "Cleanups for vmscan", v2: mm/vmscan: remove the PageDirty check after MADV_FREE pages are page_ref_freezed mm/vmscan: remove misleading setting to sc->priority mm/vmscan: remove unneeded return value of kswapd_run() mm/vmscan: add 'else' to remove check_pending label Vlastimil Babka <vbabka@suse.cz>: mm, vmscan: guarantee drop_slab_node() termination Subsystem: mm/compaction Charan Teja Reddy <charante@codeaurora.org>: mm: compaction: optimize proactive compaction deferrals mm: compaction: support triggering of proactive compaction by user Subsystem: mm/mempolicy Baolin Wang <baolin.wang@linux.alibaba.com>: mm/mempolicy: use readable NUMA_NO_NODE macro instead of magic number Dave Hansen <dave.hansen@linux.intel.com>: Patch series "Introduce multi-preference mempolicy", v7: mm/mempolicy: add MPOL_PREFERRED_MANY for multiple preferred nodes Feng Tang <feng.tang@intel.com>: mm/memplicy: add page allocation function for MPOL_PREFERRED_MANY policy Ben Widawsky <ben.widawsky@intel.com>: mm/hugetlb: add support for mempolicy MPOL_PREFERRED_MANY mm/mempolicy: 
advertise new MPOL_PREFERRED_MANY Feng Tang <feng.tang@intel.com>: mm/mempolicy: unify the create() func for bind/interleave/prefer-many policies Vasily Averin <vvs@virtuozzo.com>: mm/mempolicy.c: use in_task() in mempolicy_slab_node() Subsystem: mm/memblock Mike Rapoport <rppt@linux.ibm.com>: memblock: make memblock_find_in_range method private Subsystem: mm/oom-kill Suren Baghdasaryan <surenb@google.com>: mm: introduce process_mrelease system call mm: wire up syscall process_mrelease Subsystem: mm/migration Randy Dunlap <rdunlap@infradead.org>: mm/migrate: correct kernel-doc notation Subsystem: mm/ksm Zhansaya Bagdauletkyzy <zhansayabagdaulet@gmail.com>: Patch series "add KSM selftests": selftests: vm: add KSM merge test selftests: vm: add KSM unmerge test selftests: vm: add KSM zero page merging test selftests: vm: add KSM merging across nodes test mm: KSM: fix data type Patch series "add KSM performance tests", v3: selftests: vm: add KSM merging time test selftests: vm: add COW time test for KSM pages Subsystem: mm/percpu Jing Xiangfeng <jingxiangfeng@huawei.com>: mm/percpu,c: remove obsolete comments of pcpu_chunk_populated() Subsystem: mm/vmstat Miaohe Lin <linmiaohe@huawei.com>: Patch series "Cleanup for vmstat": mm/vmstat: correct some wrong comments mm/vmstat: simplify the array size calculation mm/vmstat: remove unneeded return value Subsystem: mm/madvise zhangkui <zhangkui@oppo.com>: mm/madvise: add MADV_WILLNEED to process_madvise() Documentation/ABI/testing/sysfs-kernel-mm-numa | 24 Documentation/admin-guide/mm/numa_memory_policy.rst | 15 Documentation/admin-guide/sysctl/vm.rst | 3 Documentation/core-api/cachetlb.rst | 86 - Documentation/dev-tools/kasan.rst | 13 Documentation/translations/zh_CN/core-api/cachetlb.rst | 9 Documentation/vm/hwpoison.rst | 1 arch/Kconfig | 28 arch/alpha/kernel/syscalls/syscall.tbl | 2 arch/arm/include/asm/cacheflush.h | 4 arch/arm/kernel/setup.c | 20 arch/arm/mach-rpc/ecard.c | 2 arch/arm/mm/flush.c | 33 arch/arm/mm/nommu.c 
| 6 arch/arm/tools/syscall.tbl | 2 arch/arm64/include/asm/unistd.h | 2 arch/arm64/include/asm/unistd32.h | 2 arch/arm64/kvm/hyp/reserved_mem.c | 9 arch/arm64/mm/init.c | 38 arch/csky/abiv1/cacheflush.c | 11 arch/csky/abiv1/inc/abi/cacheflush.h | 4 arch/csky/kernel/probes/kprobes.c | 3 arch/ia64/include/asm/meminit.h | 2 arch/ia64/kernel/acpi.c | 2 arch/ia64/kernel/setup.c | 55 arch/ia64/kernel/syscalls/syscall.tbl | 2 arch/m68k/kernel/syscalls/syscall.tbl | 2 arch/microblaze/include/asm/page.h | 3 arch/microblaze/include/asm/pgtable.h | 2 arch/microblaze/kernel/syscalls/syscall.tbl | 2 arch/microblaze/mm/init.c | 12 arch/microblaze/mm/pgtable.c | 17 arch/mips/include/asm/cacheflush.h | 8 arch/mips/kernel/setup.c | 14 arch/mips/kernel/syscalls/syscall_n32.tbl | 2 arch/mips/kernel/syscalls/syscall_n64.tbl | 2 arch/mips/kernel/syscalls/syscall_o32.tbl | 2 arch/nds32/include/asm/cacheflush.h | 3 arch/nds32/mm/cacheflush.c | 9 arch/parisc/include/asm/cacheflush.h | 8 arch/parisc/kernel/cache.c | 3 arch/parisc/kernel/syscalls/syscall.tbl | 2 arch/powerpc/Kconfig | 1 arch/powerpc/kernel/smp.c | 2 arch/powerpc/kernel/syscalls/syscall.tbl | 2 arch/powerpc/mm/book3s64/radix_tlb.c | 4 arch/powerpc/platforms/pseries/hotplug-memory.c | 4 arch/riscv/mm/init.c | 44 arch/s390/kernel/setup.c | 9 arch/s390/kernel/syscalls/syscall.tbl | 2 arch/s390/mm/fault.c | 2 arch/sh/include/asm/cacheflush.h | 8 arch/sh/kernel/syscalls/syscall.tbl | 2 arch/sparc/kernel/syscalls/syscall.tbl | 2 arch/x86/entry/syscalls/syscall_32.tbl | 1 arch/x86/entry/syscalls/syscall_64.tbl | 1 arch/x86/kernel/aperture_64.c | 5 arch/x86/kernel/ldt.c | 6 arch/x86/mm/init.c | 23 arch/x86/mm/numa.c | 5 arch/x86/mm/numa_emulation.c | 5 arch/x86/realmode/init.c | 2 arch/xtensa/kernel/syscalls/syscall.tbl | 2 block/blk-map.c | 2 drivers/acpi/tables.c | 5 drivers/base/arch_numa.c | 5 drivers/base/memory.c | 4 drivers/mmc/host/jz4740_mmc.c | 4 drivers/mmc/host/mmc_spi.c | 2 drivers/of/of_reserved_mem.c | 12 
fs/drop_caches.c | 3 fs/exec.c | 12 fs/fcntl.c | 3 fs/fs-writeback.c | 28 fs/fs_context.c | 4 fs/inode.c | 2 fs/locks.c | 6 fs/namei.c | 8 fs/namespace.c | 7 fs/ocfs2/dlmglue.c | 14 fs/ocfs2/quota_global.c | 1 fs/ocfs2/quota_local.c | 2 fs/pipe.c | 2 fs/select.c | 4 fs/userfaultfd.c | 116 - include/linux/backing-dev-defs.h | 2 include/linux/backing-dev.h | 19 include/linux/buffer_head.h | 2 include/linux/compaction.h | 2 include/linux/highmem.h | 5 include/linux/hugetlb_cgroup.h | 12 include/linux/memblock.h | 2 include/linux/memcontrol.h | 118 + include/linux/memory.h | 2 include/linux/mempolicy.h | 16 include/linux/migrate.h | 14 include/linux/mm.h | 17 include/linux/mmzone.h | 4 include/linux/page-flags.h | 9 include/linux/pagemap.h | 4 include/linux/sched/mm.h | 35 include/linux/shmem_fs.h | 25 include/linux/slub_def.h | 6 include/linux/swap.h | 28 include/linux/syscalls.h | 1 include/linux/userfaultfd_k.h | 8 include/linux/vm_event_item.h | 2 include/linux/vmpressure.h | 2 include/linux/writeback.h | 4 include/trace/events/migrate.h | 3 include/uapi/asm-generic/unistd.h | 4 include/uapi/linux/mempolicy.h | 1 ipc/msg.c | 2 ipc/namespace.c | 2 ipc/sem.c | 9 ipc/shm.c | 2 kernel/cgroup/namespace.c | 2 kernel/cpu.c | 2 kernel/exit.c | 2 kernel/fork.c | 51 kernel/kthread.c | 21 kernel/nsproxy.c | 2 kernel/pid_namespace.c | 5 kernel/sched/core.c | 37 kernel/sched/sched.h | 4 kernel/signal.c | 2 kernel/sys_ni.c | 1 kernel/sysctl.c | 2 kernel/time/namespace.c | 4 kernel/time/posix-timers.c | 4 kernel/user_namespace.c | 2 lib/scatterlist.c | 5 lib/test_kasan.c | 80 - lib/test_kasan_module.c | 20 lib/test_vmalloc.c | 5 mm/backing-dev.c | 11 mm/bootmem_info.c | 4 mm/compaction.c | 69 - mm/debug_vm_pgtable.c | 982 +++++++++------ mm/filemap.c | 15 mm/gup.c | 109 - mm/huge_memory.c | 32 mm/hugetlb.c | 173 ++ mm/hwpoison-inject.c | 2 mm/internal.h | 9 mm/kasan/hw_tags.c | 43 mm/kasan/kasan.h | 1 mm/kasan/report.c | 29 mm/khugepaged.c | 2 mm/ksm.c | 8 mm/madvise.c | 1 
mm/memblock.c | 22 mm/memcontrol.c | 234 +-- mm/memory-failure.c | 53 mm/memory_hotplug.c | 2 mm/mempolicy.c | 207 ++- mm/migrate.c | 319 ++++ mm/mmap.c | 7 mm/mremap.c | 2 mm/oom_kill.c | 70 + mm/page-writeback.c | 133 +- mm/page_alloc.c | 62 mm/page_isolation.c | 13 mm/percpu.c | 3 mm/shmem.c | 309 ++-- mm/slab_common.c | 2 mm/slub.c | 1085 ++++++++++------- mm/sparse.c | 46 mm/swap.c | 22 mm/swapfile.c | 14 mm/truncate.c | 28 mm/userfaultfd.c | 15 mm/vmalloc.c | 79 - mm/vmpressure.c | 10 mm/vmscan.c | 220 ++- mm/vmstat.c | 25 security/tomoyo/domain.c | 13 tools/testing/scatterlist/linux/mm.h | 1 tools/testing/selftests/vm/.gitignore | 1 tools/testing/selftests/vm/Makefile | 3 tools/testing/selftests/vm/charge_reserved_hugetlb.sh | 5 tools/testing/selftests/vm/hugetlb_reparenting_test.sh | 5 tools/testing/selftests/vm/ksm_tests.c | 696 ++++++++++ tools/testing/selftests/vm/mlock-random-test.c | 2 tools/testing/selftests/vm/run_vmtests.sh | 98 + tools/testing/selftests/vm/userfaultfd.c | 13 186 files changed, 4488 insertions(+), 2281 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming 2021-09-02 21:48 incoming Andrew Morton @ 2021-09-02 21:49 ` Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2021-09-02 21:49 UTC (permalink / raw) To: Linus Torvalds, linux-mm, mm-commits On Thu, 2 Sep 2021 14:48:20 -0700 Andrew Morton <akpm@linux-foundation.org> wrote: > 212 patches, based on 4a3bb4200a5958d76cc26ebe4db4257efa56812b. Make that "based on 7d2a07b769330c34b4deabeed939325c77a7ec2f".
* incoming @ 2021-08-25 19:17 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2021-08-25 19:17 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm 2 patches, based on 6e764bcd1cf72a2846c0e53d3975a09b242c04c9. Subsystems affected by this patch series: mm/memory-hotplug MAINTAINERS Subsystem: mm/memory-hotplug Miaohe Lin <linmiaohe@huawei.com>: mm/memory_hotplug: fix potential permanent lru cache disable Subsystem: MAINTAINERS Namjae Jeon <namjae.jeon@samsung.com>: MAINTAINERS: exfat: update my email address MAINTAINERS | 2 +- mm/memory_hotplug.c | 1 + 2 files changed, 2 insertions(+), 1 deletion(-)
* incoming @ 2021-08-20 2:03 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2021-08-20 2:03 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits 10 patches, based on 614cb2751d3150850d459bee596c397f344a7936. Subsystems affected by this patch series: mm/shmem mm/pagealloc mm/tracing MAINTAINERS mm/memcg mm/memory-failure mm/vmscan mm/kfence mm/hugetlb Subsystem: mm/shmem Yang Shi <shy828301@gmail.com>: Revert "mm/shmem: fix shmem_swapin() race with swapoff" Revert "mm: swap: check if swap backing device is congested or not" Subsystem: mm/pagealloc Doug Berger <opendmb@gmail.com>: mm/page_alloc: don't corrupt pcppage_migratetype Subsystem: mm/tracing Mike Rapoport <rppt@linux.ibm.com>: mmflags.h: add missing __GFP_ZEROTAGS and __GFP_SKIP_KASAN_POISON names Subsystem: MAINTAINERS Nathan Chancellor <nathan@kernel.org>: MAINTAINERS: update ClangBuiltLinux IRC chat Subsystem: mm/memcg Johannes Weiner <hannes@cmpxchg.org>: mm: memcontrol: fix occasional OOMs due to proportional memory.low reclaim Subsystem: mm/memory-failure Naoya Horiguchi <naoya.horiguchi@nec.com>: mm/hwpoison: retry with shake_page() for unhandlable pages Subsystem: mm/vmscan Johannes Weiner <hannes@cmpxchg.org>: mm: vmscan: fix missing psi annotation for node_reclaim() Subsystem: mm/kfence Marco Elver <elver@google.com>: kfence: fix is_kfence_address() for addresses below KFENCE_POOL_SIZE Subsystem: mm/hugetlb Mike Kravetz <mike.kravetz@oracle.com>: hugetlb: don't pass page cache pages to restore_reserve_on_error MAINTAINERS | 2 +- include/linux/kfence.h | 7 ++++--- include/linux/memcontrol.h | 29 +++++++++++++++-------------- include/trace/events/mmflags.h | 4 +++- mm/hugetlb.c | 19 ++++++++++++++----- mm/memory-failure.c | 12 +++++++++--- mm/page_alloc.c | 25 ++++++++++++------------- mm/shmem.c | 14 +------------- mm/swap_state.c | 7 ------- mm/vmscan.c | 30 ++++++++++++++++++++++-------- 10 files changed, 81 insertions(+), 68 deletions(-)
* incoming @ 2021-08-13 23:53 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2021-08-13 23:53 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm 7 patches, based on f8e6dfc64f6135d1b6c5215c14cd30b9b60a0008. Subsystems affected by this patch series: mm/kasan mm/slub mm/madvise mm/memcg lib Subsystem: mm/kasan Kuan-Ying Lee <Kuan-Ying.Lee@mediatek.com>: Patch series "kasan, slub: reset tag when printing address", v3: kasan, kmemleak: reset tags when scanning block kasan, slub: reset tag when printing address Subsystem: mm/slub Shakeel Butt <shakeelb@google.com>: slub: fix kmalloc_pagealloc_invalid_free unit test Vlastimil Babka <vbabka@suse.cz>: mm: slub: fix slub_debug disabling for list of slabs Subsystem: mm/madvise David Hildenbrand <david@redhat.com>: mm/madvise: report SIGBUS as -EFAULT for MADV_POPULATE_(READ|WRITE) Subsystem: mm/memcg Waiman Long <longman@redhat.com>: mm/memcg: fix incorrect flushing of lruvec data in obj_stock Subsystem: lib Liang Wang <wangliang101@huawei.com>: lib: use PFN_PHYS() in devmem_is_allowed() lib/devmem_is_allowed.c | 2 +- mm/gup.c | 7 +++++-- mm/kmemleak.c | 6 +++--- mm/madvise.c | 4 +++- mm/memcontrol.c | 6 ++++-- mm/slub.c | 25 ++++++++++++++----------- 6 files changed, 30 insertions(+), 20 deletions(-)
* incoming @ 2021-07-29 21:52 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2021-07-29 21:52 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm 7 patches, based on 7e96bf476270aecea66740a083e51b38c1371cd2. Subsystems affected by this patch series: lib ocfs2 mm/memcg mm/migration mm/slub mm/memcg Subsystem: lib Matteo Croce <mcroce@microsoft.com>: lib/test_string.c: move string selftest in the Runtime Testing menu Subsystem: ocfs2 Junxiao Bi <junxiao.bi@oracle.com>: ocfs2: fix zero out valid data ocfs2: issue zeroout to EOF blocks Subsystem: mm/memcg Johannes Weiner <hannes@cmpxchg.org>: mm: memcontrol: fix blocking rstat function called from atomic cgroup1 thresholding code Subsystem: mm/migration "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>: mm/migrate: fix NR_ISOLATED corruption on 64-bit Subsystem: mm/slub Shakeel Butt <shakeelb@google.com>: slub: fix unreclaimable slab stat for bulk free Subsystem: mm/memcg Wang Hai <wanghai38@huawei.com>: mm/memcg: fix NULL pointer dereference in memcg_slab_free_hook() fs/ocfs2/file.c | 103 ++++++++++++++++++++++++++++++++---------------------- lib/Kconfig | 3 - lib/Kconfig.debug | 3 + mm/memcontrol.c | 3 + mm/migrate.c | 2 - mm/slab.h | 2 - mm/slub.c | 22 ++++++----- 7 files changed, 81 insertions(+), 57 deletions(-)
* incoming @ 2021-07-23 22:49 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2021-07-23 22:49 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits 15 patches, based on 704f4cba43d4ed31ef4beb422313f1263d87bc55. Subsystems affected by this patch series: mm/userfaultfd mm/kfence mm/highmem mm/pagealloc mm/memblock mm/pagecache mm/secretmem mm/pagemap mm/hugetlbfs Subsystem: mm/userfaultfd Peter Collingbourne <pcc@google.com>: Patch series "userfaultfd: do not untag user pointers", v5: userfaultfd: do not untag user pointers selftest: use mmap instead of posix_memalign to allocate memory Subsystem: mm/kfence Weizhao Ouyang <o451686892@gmail.com>: kfence: defer kfence_test_init to ensure that kunit debugfs is created Alexander Potapenko <glider@google.com>: kfence: move the size check to the beginning of __kfence_alloc() kfence: skip all GFP_ZONEMASK allocations Subsystem: mm/highmem Christoph Hellwig <hch@lst.de>: mm: call flush_dcache_page() in memcpy_to_page() and memzero_page() mm: use kmap_local_page in memzero_page Subsystem: mm/pagealloc Sergei Trofimovich <slyfox@gentoo.org>: mm: page_alloc: fix page_poison=1 / INIT_ON_ALLOC_DEFAULT_ON interaction Subsystem: mm/memblock Mike Rapoport <rppt@linux.ibm.com>: memblock: make for_each_mem_range() traverse MEMBLOCK_HOTPLUG regions Subsystem: mm/pagecache Roman Gushchin <guro@fb.com>: writeback, cgroup: remove wb from offline list before releasing refcnt writeback, cgroup: do not reparent dax inodes Subsystem: mm/secretmem Mike Rapoport <rppt@linux.ibm.com>: mm/secretmem: wire up ->set_page_dirty Subsystem: mm/pagemap Muchun Song <songmuchun@bytedance.com>: mm: mmap_lock: fix disabling preemption directly Qi Zheng <zhengqi.arch@bytedance.com>: mm: fix the deadlock in finish_fault() Subsystem: mm/hugetlbfs Mike Kravetz <mike.kravetz@oracle.com>: hugetlbfs: fix mount mode command line processing Documentation/arm64/tagged-address-abi.rst | 26 ++++++++++++++++++-------- 
fs/fs-writeback.c | 3 +++ fs/hugetlbfs/inode.c | 2 +- fs/userfaultfd.c | 26 ++++++++++++-------------- include/linux/highmem.h | 6 ++++-- include/linux/memblock.h | 4 ++-- mm/backing-dev.c | 2 +- mm/kfence/core.c | 19 ++++++++++++++++--- mm/kfence/kfence_test.c | 2 +- mm/memblock.c | 3 ++- mm/memory.c | 11 ++++++++++- mm/mmap_lock.c | 4 ++-- mm/page_alloc.c | 29 ++++++++++++++++------------- mm/secretmem.c | 1 + tools/testing/selftests/vm/userfaultfd.c | 6 ++++-- 15 files changed, 93 insertions(+), 51 deletions(-)
* incoming @ 2021-07-15 4:26 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2021-07-15 4:26 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits 13 patches, based on 40226a3d96ef8ab8980f032681c8bfd46d63874e. Subsystems affected by this patch series: mm/kasan mm/pagealloc mm/rmap mm/hmm hfs mm/hugetlb Subsystem: mm/kasan Marco Elver <elver@google.com>: mm: move helper to check slub_debug_enabled Yee Lee <yee.lee@mediatek.com>: kasan: add memzero init for unaligned size at DEBUG Marco Elver <elver@google.com>: kasan: fix build by including kernel.h Subsystem: mm/pagealloc Matteo Croce <mcroce@microsoft.com>: Revert "mm/page_alloc: make should_fail_alloc_page() static" Mel Gorman <mgorman@techsingularity.net>: mm/page_alloc: avoid page allocator recursion with pagesets.lock held Yanfei Xu <yanfei.xu@windriver.com>: mm/page_alloc: correct return value when failing at preparing Chuck Lever <chuck.lever@oracle.com>: mm/page_alloc: further fix __alloc_pages_bulk() return value Subsystem: mm/rmap Christoph Hellwig <hch@lst.de>: mm: fix the try_to_unmap prototype for !CONFIG_MMU Subsystem: mm/hmm Alistair Popple <apopple@nvidia.com>: lib/test_hmm: remove set but unused page variable Subsystem: hfs Desmond Cheong Zhi Xi <desmondcheongzx@gmail.com>: Patch series "hfs: fix various errors", v2: hfs: add missing clean-up in hfs_fill_super hfs: fix high memory mapping in hfs_bnode_read hfs: add lock nesting notation to hfs_find_init Subsystem: mm/hugetlb Joao Martins <joao.m.martins@oracle.com>: mm/hugetlb: fix refs calculation from unaligned @vaddr fs/hfs/bfind.c | 14 +++++++++++++- fs/hfs/bnode.c | 25 ++++++++++++++++++++----- fs/hfs/btree.h | 7 +++++++ fs/hfs/super.c | 10 +++++----- include/linux/kasan.h | 1 + include/linux/rmap.h | 4 +++- lib/test_hmm.c | 2 -- mm/hugetlb.c | 5 +++-- mm/kasan/kasan.h | 12 ++++++++++++ mm/page_alloc.c | 30 ++++++++++++++++++++++-------- mm/slab.h | 15 +++++++++++---- mm/slub.c | 14 
-------------- 12 files changed, 97 insertions(+), 42 deletions(-)
* incoming @ 2021-07-08 0:59 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2021-07-08 0:59 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits 54 patches, based on a931dd33d370896a683236bba67c0d6f3d01144d. Subsystems affected by this patch series: lib mm/slub mm/secretmem mm/cleanups mm/init debug mm/pagemap mm/mremap Subsystem: lib Zhen Lei <thunder.leizhen@huawei.com>: lib/test: fix spelling mistakes lib: fix spelling mistakes lib: fix spelling mistakes in header files Subsystem: mm/slub Nathan Chancellor <nathan@kernel.org>: Patch series "hexagon: Fix build error with CONFIG_STACKDEPOT and select CONFIG_ARCH_WANT_LD_ORPHAN_WARN": hexagon: handle {,SOFT}IRQENTRY_TEXT in linker script hexagon: use common DISCARDS macro hexagon: select ARCH_WANT_LD_ORPHAN_WARN Oliver Glitta <glittao@gmail.com>: mm/slub: use stackdepot to save stack trace in objects Subsystem: mm/secretmem Mike Rapoport <rppt@linux.ibm.com>: Patch series "mm: introduce memfd_secret system call to create "secret" memory areas", v20: mmap: make mlock_future_check() global riscv/Kconfig: make direct map manipulation options depend on MMU set_memory: allow querying whether set_direct_map_*() is actually enabled mm: introduce memfd_secret system call to create "secret" memory areas PM: hibernate: disable when there are active secretmem users arch, mm: wire up memfd_secret system call where relevant secretmem: test: add basic selftest for memfd_secret(2) Subsystem: mm/cleanups Zhen Lei <thunder.leizhen@huawei.com>: mm: fix spelling mistakes in header files Subsystem: mm/init Kefeng Wang <wangkefeng.wang@huawei.com>: Patch series "init_mm: cleanup ARCH's text/data/brk setup code", v3: mm: add setup_initial_init_mm() helper arc: convert to setup_initial_init_mm() arm: convert to setup_initial_init_mm() arm64: convert to setup_initial_init_mm() csky: convert to setup_initial_init_mm() h8300: convert to setup_initial_init_mm() m68k: convert to 
setup_initial_init_mm() nds32: convert to setup_initial_init_mm() nios2: convert to setup_initial_init_mm() openrisc: convert to setup_initial_init_mm() powerpc: convert to setup_initial_init_mm() riscv: convert to setup_initial_init_mm() s390: convert to setup_initial_init_mm() sh: convert to setup_initial_init_mm() x86: convert to setup_initial_init_mm() Subsystem: debug Stephen Boyd <swboyd@chromium.org>: Patch series "Add build ID to stacktraces", v6: buildid: only consider GNU notes for build ID parsing buildid: add API to parse build ID out of buffer buildid: stash away kernels build ID on init dump_stack: add vmlinux build ID to stack traces module: add printk formats to add module build ID to stacktraces arm64: stacktrace: use %pSb for backtrace printing x86/dumpstack: use %pSb/%pBb for backtrace printing scripts/decode_stacktrace.sh: support debuginfod scripts/decode_stacktrace.sh: silence stderr messages from addr2line/nm scripts/decode_stacktrace.sh: indicate 'auto' can be used for base path buildid: mark some arguments const buildid: fix kernel-doc notation kdump: use vmlinux_build_id to simplify Subsystem: mm/pagemap "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>: mm: rename pud_page_vaddr to pud_pgtable and make it return pmd_t * mm: rename p4d_page_vaddr to p4d_pgtable and make it return pud_t * Subsystem: mm/mremap "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>: Patch series "mrermap fixes", v2: selftest/mremap_test: update the test to handle pagesize other than 4K selftest/mremap_test: avoid crash with static build mm/mremap: convert huge PUD move to separate helper mm/mremap: don't enable optimized PUD move if page table levels is 2 mm/mremap: use pmd/pud_poplulate to update page table entries mm/mremap: hold the rmap lock in write mode when moving page table entries. 
Patch series "Speedup mremap on ppc64", v8: mm/mremap: allow arch runtime override powerpc/book3s64/mm: update flush_tlb_range to flush page walk cache powerpc/mm: enable HAVE_MOVE_PMD support Documentation/core-api/printk-formats.rst | 11 arch/alpha/include/asm/pgtable.h | 8 arch/arc/mm/init.c | 5 arch/arm/include/asm/pgtable-3level.h | 2 arch/arm/kernel/setup.c | 5 arch/arm64/include/asm/Kbuild | 1 arch/arm64/include/asm/cacheflush.h | 6 arch/arm64/include/asm/kfence.h | 2 arch/arm64/include/asm/pgtable.h | 8 arch/arm64/include/asm/set_memory.h | 17 + arch/arm64/include/uapi/asm/unistd.h | 1 arch/arm64/kernel/machine_kexec.c | 1 arch/arm64/kernel/setup.c | 5 arch/arm64/kernel/stacktrace.c | 2 arch/arm64/mm/mmu.c | 7 arch/arm64/mm/pageattr.c | 13 arch/csky/kernel/setup.c | 5 arch/h8300/kernel/setup.c | 5 arch/hexagon/Kconfig | 1 arch/hexagon/kernel/vmlinux.lds.S | 9 arch/ia64/include/asm/pgtable.h | 4 arch/m68k/include/asm/motorola_pgtable.h | 2 arch/m68k/kernel/setup_mm.c | 5 arch/m68k/kernel/setup_no.c | 5 arch/mips/include/asm/pgtable-64.h | 8 arch/nds32/kernel/setup.c | 5 arch/nios2/kernel/setup.c | 5 arch/openrisc/kernel/setup.c | 5 arch/parisc/include/asm/pgtable.h | 4 arch/powerpc/include/asm/book3s/64/pgtable.h | 11 arch/powerpc/include/asm/book3s/64/tlbflush-radix.h | 2 arch/powerpc/include/asm/nohash/64/pgtable-4k.h | 6 arch/powerpc/include/asm/nohash/64/pgtable.h | 6 arch/powerpc/include/asm/tlb.h | 6 arch/powerpc/kernel/setup-common.c | 5 arch/powerpc/mm/book3s64/radix_hugetlbpage.c | 8 arch/powerpc/mm/book3s64/radix_pgtable.c | 6 arch/powerpc/mm/book3s64/radix_tlb.c | 44 +- arch/powerpc/mm/pgtable_64.c | 4 arch/powerpc/platforms/Kconfig.cputype | 2 arch/riscv/Kconfig | 4 arch/riscv/include/asm/pgtable-64.h | 4 arch/riscv/include/asm/unistd.h | 1 arch/riscv/kernel/setup.c | 5 arch/s390/kernel/setup.c | 5 arch/sh/include/asm/pgtable-3level.h | 4 arch/sh/kernel/setup.c | 5 arch/sparc/include/asm/pgtable_32.h | 6 arch/sparc/include/asm/pgtable_64.h | 10 
arch/um/include/asm/pgtable-3level.h | 2 arch/x86/entry/syscalls/syscall_32.tbl | 1 arch/x86/entry/syscalls/syscall_64.tbl | 1 arch/x86/include/asm/pgtable.h | 8 arch/x86/kernel/dumpstack.c | 2 arch/x86/kernel/setup.c | 5 arch/x86/mm/init_64.c | 4 arch/x86/mm/pat/set_memory.c | 4 arch/x86/mm/pgtable.c | 2 include/asm-generic/pgtable-nop4d.h | 2 include/asm-generic/pgtable-nopmd.h | 2 include/asm-generic/pgtable-nopud.h | 4 include/linux/bootconfig.h | 4 include/linux/buildid.h | 10 include/linux/compaction.h | 4 include/linux/cpumask.h | 2 include/linux/crash_core.h | 12 include/linux/debugobjects.h | 2 include/linux/hmm.h | 2 include/linux/hugetlb.h | 6 include/linux/kallsyms.h | 21 + include/linux/list_lru.h | 4 include/linux/lru_cache.h | 8 include/linux/mm.h | 3 include/linux/mmu_notifier.h | 8 include/linux/module.h | 9 include/linux/nodemask.h | 6 include/linux/percpu-defs.h | 2 include/linux/percpu-refcount.h | 2 include/linux/pgtable.h | 4 include/linux/scatterlist.h | 2 include/linux/secretmem.h | 54 +++ include/linux/set_memory.h | 12 include/linux/shrinker.h | 2 include/linux/syscalls.h | 1 include/linux/vmalloc.h | 4 include/uapi/asm-generic/unistd.h | 7 include/uapi/linux/magic.h | 1 init/Kconfig | 1 init/main.c | 2 kernel/crash_core.c | 50 --- kernel/kallsyms.c | 104 +++++-- kernel/module.c | 42 ++ kernel/power/hibernate.c | 5 kernel/sys_ni.c | 2 lib/Kconfig.debug | 17 - lib/asn1_encoder.c | 2 lib/buildid.c | 80 ++++- lib/devres.c | 2 lib/dump_stack.c | 13 lib/dynamic_debug.c | 2 lib/fonts/font_pearl_8x8.c | 2 lib/kfifo.c | 2 lib/list_sort.c | 2 lib/nlattr.c | 4 lib/oid_registry.c | 2 lib/pldmfw/pldmfw.c | 2 lib/reed_solomon/test_rslib.c | 2 lib/refcount.c | 2 lib/rhashtable.c | 2 lib/sbitmap.c | 2 lib/scatterlist.c | 4 lib/seq_buf.c | 2 lib/sort.c | 2 lib/stackdepot.c | 2 lib/test_bitops.c | 2 lib/test_bpf.c | 2 lib/test_kasan.c | 2 lib/test_kmod.c | 6 lib/test_scanf.c | 2 lib/vsprintf.c | 10 mm/Kconfig | 4 mm/Makefile | 1 mm/gup.c | 12 mm/init-mm.c 
| 9 mm/internal.h | 3 mm/mlock.c | 3 mm/mmap.c | 5 mm/mremap.c | 108 ++++++- mm/secretmem.c | 254 +++++++++++++++++ mm/slub.c | 79 +++-- scripts/checksyscalls.sh | 4 scripts/decode_stacktrace.sh | 89 +++++- tools/testing/selftests/vm/.gitignore | 1 tools/testing/selftests/vm/Makefile | 3 tools/testing/selftests/vm/memfd_secret.c | 296 ++++++++++++++++++++ tools/testing/selftests/vm/mremap_test.c | 116 ++++--- tools/testing/selftests/vm/run_vmtests.sh | 17 + 137 files changed, 1470 insertions(+), 442 deletions(-)
* incoming @ 2021-07-01 1:46 Andrew Morton 2021-07-03 0:28 ` incoming Linus Torvalds 0 siblings, 1 reply; 349+ messages in thread From: Andrew Morton @ 2021-07-01 1:46 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits This is the rest of the -mm tree, less 66 patches which are dependent on things which are (or were recently) in linux-next. I'll trickle that material over next week. 192 patches, based on 7cf3dead1ad70c72edb03e2d98e1f3dcd332cdb2 plus the June 28 sendings. Subsystems affected by this patch series: mm/hugetlb mm/userfaultfd mm/vmscan mm/kconfig mm/proc mm/z3fold mm/zbud mm/ras mm/mempolicy mm/memblock mm/migration mm/thp mm/nommu mm/kconfig mm/madvise mm/memory-hotplug mm/zswap mm/zsmalloc mm/zram mm/cleanups mm/kfence mm/hmm procfs sysctl misc core-kernel lib lz4 checkpatch init kprobes nilfs2 hfs signals exec kcov selftests compress/decompress ipc Subsystem: mm/hugetlb Muchun Song <songmuchun@bytedance.com>: Patch series "Free some vmemmap pages of HugeTLB page", v23: mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c mm: hugetlb: introduce a new config HUGETLB_PAGE_FREE_VMEMMAP mm: hugetlb: gather discrete indexes of tail page mm: hugetlb: free the vmemmap pages associated with each HugeTLB page mm: hugetlb: defer freeing of HugeTLB pages mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap mm: memory_hotplug: disable memmap_on_memory when hugetlb_free_vmemmap enabled mm: hugetlb: introduce nr_free_vmemmap_pages in the struct hstate Shixin Liu <liushixin2@huawei.com>: mm/debug_vm_pgtable: move {pmd/pud}_huge_tests out of CONFIG_TRANSPARENT_HUGEPAGE mm/debug_vm_pgtable: remove redundant pfn_{pmd/pte}() and fix one comment mistake Miaohe Lin <linmiaohe@huawei.com>: Patch series "Cleanup and fixup for huge_memory:, v3: mm/huge_memory.c: remove dedicated macro HPAGE_CACHE_INDEX_MASK mm/huge_memory.c: use page->deferred_list mm/huge_memory.c: 
add missing read-only THP checking in transparent_hugepage_enabled() mm/huge_memory.c: remove unnecessary tlb_remove_page_size() for huge zero pmd mm/huge_memory.c: don't discard hugepage if other processes are mapping it Christophe Leroy <christophe.leroy@csgroup.eu>: Patch series "Subject: [PATCH v2 0/5] Implement huge VMAP and VMALLOC on powerpc 8xx", v2: mm/hugetlb: change parameters of arch_make_huge_pte() mm/pgtable: add stubs for {pmd/pub}_{set/clear}_huge mm/vmalloc: enable mapping of huge pages at pte level in vmap mm/vmalloc: enable mapping of huge pages at pte level in vmalloc powerpc/8xx: add support for huge pages on VMAP and VMALLOC Nanyong Sun <sunnanyong@huawei.com>: khugepaged: selftests: remove debug_cow Mina Almasry <almasrymina@google.com>: mm, hugetlb: fix racy resv_huge_pages underflow on UFFDIO_COPY Muchun Song <songmuchun@bytedance.com>: Patch series "Split huge PMD mapping of vmemmap pages", v4: mm: sparsemem: split the huge PMD mapping of vmemmap pages mm: sparsemem: use huge PMD mapping for vmemmap pages mm: hugetlb: introduce CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON Mike Kravetz <mike.kravetz@oracle.com>: Patch series "Fix prep_compound_gigantic_page ref count adjustment": hugetlb: remove prep_compound_huge_page cleanup hugetlb: address ref count racing in prep_compound_gigantic_page Naoya Horiguchi <naoya.horiguchi@nec.com>: mm/hwpoison: disable pcp for page_handle_poison() Subsystem: mm/userfaultfd Peter Xu <peterx@redhat.com>: Patch series "userfaultfd/selftests: A few cleanups", v2: userfaultfd/selftests: use user mode only userfaultfd/selftests: remove the time() check on delayed uffd userfaultfd/selftests: dropping VERIFY check in locking_thread userfaultfd/selftests: only dump counts if mode enabled userfaultfd/selftests: unify error handling Patch series "mm/uffd: Misc fix for uffd-wp and one more test": mm/thp: simplify copying of huge zero page pmd when fork mm/userfaultfd: fix uffd-wp special cases for fork() 
mm/userfaultfd: fail uffd-wp registration if not supported mm/pagemap: export uffd-wp protection information userfaultfd/selftests: add pagemap uffd-wp test Axel Rasmussen <axelrasmussen@google.com>: Patch series "userfaultfd: add minor fault handling for shmem", v6: userfaultfd/shmem: combine shmem_{mcopy_atomic,mfill_zeropage}_pte userfaultfd/shmem: support minor fault registration for shmem userfaultfd/shmem: support UFFDIO_CONTINUE for shmem userfaultfd/shmem: advertise shmem minor fault support userfaultfd/shmem: modify shmem_mfill_atomic_pte to use install_pte() userfaultfd/selftests: use memfd_create for shmem test type userfaultfd/selftests: create alias mappings in the shmem test userfaultfd/selftests: reinitialize test context in each test userfaultfd/selftests: exercise minor fault handling shmem support Subsystem: mm/vmscan Yu Zhao <yuzhao@google.com>: mm/vmscan.c: fix potential deadlock in reclaim_pages() include/trace/events/vmscan.h: remove mm_vmscan_inactive_list_is_low Miaohe Lin <linmiaohe@huawei.com>: mm: workingset: define macro WORKINGSET_SHIFT Subsystem: mm/kconfig Kefeng Wang <wangkefeng.wang@huawei.com>: mm/kconfig: move HOLES_IN_ZONE into mm Subsystem: mm/proc Mike Rapoport <rppt@linux.ibm.com>: docs: proc.rst: meminfo: briefly describe gaps in memory accounting David Hildenbrand <david@redhat.com>: Patch series "fs/proc/kcore: don't read offline sections, logically offline pages and hwpoisoned pages", v3: fs/proc/kcore: drop KCORE_REMAP and KCORE_OTHER fs/proc/kcore: pfn_is_ram check only applies to KCORE_RAM fs/proc/kcore: don't read offline sections, logically offline pages and hwpoisoned pages mm: introduce page_offline_(begin|end|freeze|thaw) to synchronize setting PageOffline() virtio-mem: use page_offline_(start|end) when setting PageOffline() fs/proc/kcore: use page_offline_(freeze|thaw) Subsystem: mm/z3fold Miaohe Lin <linmiaohe@huawei.com>: Patch series "Cleanup and fixup for z3fold": mm/z3fold: define macro NCHUNKS as 
TOTAL_CHUNKS - ZHDR_CHUNKS
      mm/z3fold: avoid possible underflow in z3fold_alloc()
      mm/z3fold: remove magic number in z3fold_create_pool()
      mm/z3fold: remove unused function handle_to_z3fold_header()
      mm/z3fold: fix potential memory leak in z3fold_destroy_pool()
      mm/z3fold: use release_z3fold_page_locked() to release locked z3fold page

Subsystem: mm/zbud

Miaohe Lin <linmiaohe@huawei.com>:
Patch series "Cleanups for zbud", v2:
      mm/zbud: reuse unbuddied[0] as buddied in zbud_pool
      mm/zbud: don't export any zbud API

Subsystem: mm/ras

YueHaibing <yuehaibing@huawei.com>:
      mm/compaction: use DEVICE_ATTR_WO macro

Liu Xiang <liu.xiang@zlingsmart.com>:
      mm: compaction: remove duplicate !list_empty(&sublist) check

Wonhyuk Yang <vvghjk1234@gmail.com>:
      mm/compaction: fix 'limit' in fast_isolate_freepages

Subsystem: mm/mempolicy

Feng Tang <feng.tang@intel.com>:
Patch series "mm/mempolicy: some fix and semantics cleanup", v4:
      mm/mempolicy: cleanup nodemask intersection check for oom
      mm/mempolicy: don't handle MPOL_LOCAL like a fake MPOL_PREFERRED policy
      mm/mempolicy: unify the parameter sanity check for mbind and set_mempolicy

Yang Shi <shy828301@gmail.com>:
      mm: mempolicy: don't have to split pmd for huge zero page

Ben Widawsky <ben.widawsky@intel.com>:
      mm/mempolicy: use unified 'nodes' for bind/interleave/prefer policies

Subsystem: mm/memblock

Mike Rapoport <rppt@linux.ibm.com>:
Patch series "arm64: drop pfn_valid_within() and simplify pfn_valid()", v4:
      include/linux/mmzone.h: add documentation for pfn_valid()
      memblock: update initialization of reserved pages
      arm64: decouple check whether pfn is in linear map from pfn_valid()
      arm64: drop pfn_valid_within() and simplify pfn_valid()

Anshuman Khandual <anshuman.khandual@arm.com>:
      arm64/mm: drop HAVE_ARCH_PFN_VALID

Subsystem: mm/migration

Muchun Song <songmuchun@bytedance.com>:
      mm: migrate: fix missing update page_private to hugetlb_page_subpool

Subsystem: mm/thp

Collin Fijalkovich <cfijalkovich@google.com>:
      mm, thp: relax the VM_DENYWRITE constraint on file-backed THPs

Yang Shi <shy828301@gmail.com>:
      mm: memory: add orig_pmd to struct vm_fault
      mm: memory: make numa_migrate_prep() non-static
      mm: thp: refactor NUMA fault handling
      mm: migrate: account THP NUMA migration counters correctly
      mm: migrate: don't split THP for misplaced NUMA page
      mm: migrate: check mapcount for THP instead of refcount
      mm: thp: skip make PMD PROT_NONE if THP migration is not supported

Anshuman Khandual <anshuman.khandual@arm.com>:
      mm/thp: make ARCH_ENABLE_SPLIT_PMD_PTLOCK dependent on PGTABLE_LEVELS > 2

Yang Shi <shy828301@gmail.com>:
      mm: rmap: make try_to_unmap() void function

Hugh Dickins <hughd@google.com>:
      mm/thp: remap_page() is only needed on anonymous THP
      mm: hwpoison_user_mappings() try_to_unmap() with TTU_SYNC

"Matthew Wilcox (Oracle)" <willy@infradead.org>:
      mm/thp: fix strncpy warning

Subsystem: mm/nommu

Chen Li <chenli@uniontech.com>:
      nommu: remove __GFP_HIGHMEM in vmalloc/vzalloc

Liam Howlett <liam.howlett@oracle.com>:
      mm/nommu: unexport do_munmap()

Subsystem: mm/kconfig

Kefeng Wang <wangkefeng.wang@huawei.com>:
      mm: generalize ZONE_[DMA|DMA32]

Subsystem: mm/madvise

David Hildenbrand <david@redhat.com>:
Patch series "mm/madvise: introduce MADV_POPULATE_(READ|WRITE) to prefault page tables", v2:
      mm: make variable names for populate_vma_page_range() consistent
      mm/madvise: introduce MADV_POPULATE_(READ|WRITE) to prefault page tables
      MAINTAINERS: add tools/testing/selftests/vm/ to MEMORY MANAGEMENT
      selftests/vm: add protection_keys_32 / protection_keys_64 to gitignore
      selftests/vm: add test for MADV_POPULATE_(READ|WRITE)

Subsystem: mm/memory-hotplug

Liam Mark <lmark@codeaurora.org>:
      mm/memory_hotplug: rate limit page migration warnings

Oscar Salvador <osalvador@suse.de>:
      mm,memory_hotplug: drop unneeded locking

Subsystem: mm/zswap

Miaohe Lin <linmiaohe@huawei.com>:
Patch series "Cleanup and fixup for zswap":
      mm/zswap.c: remove unused function zswap_debugfs_exit()
      mm/zswap.c: avoid unnecessary copy-in at map time
      mm/zswap.c: fix two bugs in zswap_writeback_entry()

Subsystem: mm/zsmalloc

Zhaoyang Huang <zhaoyang.huang@unisoc.com>:
      mm: zram: amend SLAB_RECLAIM_ACCOUNT on zspage_cachep

Miaohe Lin <linmiaohe@huawei.com>:
Patch series "Cleanup for zsmalloc":
      mm/zsmalloc.c: remove confusing code in obj_free()
      mm/zsmalloc.c: improve readability for async_free_zspage()

Subsystem: mm/zram

Yue Hu <huyue2@yulong.com>:
      zram: move backing_dev under macro CONFIG_ZRAM_WRITEBACK

Subsystem: mm/cleanups

Hyeonggon Yoo <42.hyeyoo@gmail.com>:
      mm: fix typos and grammar error in comments

Anshuman Khandual <anshuman.khandual@arm.com>:
      mm: define default value for FIRST_USER_ADDRESS

Zhen Lei <thunder.leizhen@huawei.com>:
      mm: fix spelling mistakes

Mel Gorman <mgorman@techsingularity.net>:
Patch series "Clean W=1 build warnings for mm/":
      mm/vmscan: remove kerneldoc-like comment from isolate_lru_pages
      mm/vmalloc: include header for prototype of set_iounmap_nonlazy
      mm/page_alloc: make should_fail_alloc_page() static
      mm/mapping_dirty_helpers: remove double Note in kerneldoc
      mm/memcontrol.c: fix kerneldoc comment for mem_cgroup_calculate_protection
      mm/memory_hotplug: fix kerneldoc comment for __try_online_node
      mm/memory_hotplug: fix kerneldoc comment for __remove_memory
      mm/zbud: add kerneldoc fields for zbud_pool
      mm/z3fold: add kerneldoc fields for z3fold_pool
      mm/swap: make swap_address_space an inline function
      mm/mmap_lock: remove dead code for !CONFIG_TRACING configurations
      mm/page_alloc: move prototype for find_suitable_fallback
      mm/swap: make NODE_DATA an inline function on CONFIG_FLATMEM

Anshuman Khandual <anshuman.khandual@arm.com>:
      mm/thp: define default pmd_pgtable()

Subsystem: mm/kfence

Marco Elver <elver@google.com>:
      kfence: unconditionally use unbound work queue

Subsystem: mm/hmm

Alistair Popple <apopple@nvidia.com>:
Patch series "Add support for SVM atomics in Nouveau", v11:
      mm: remove special swap entry functions
      mm/swapops: rework swap entry manipulation code
      mm/rmap: split try_to_munlock from try_to_unmap
      mm/rmap: split migration into its own function
      mm: rename migrate_pgmap_owner
      mm/memory.c: allow different return codes for copy_nonpresent_pte()
      mm: device exclusive memory access
      mm: selftests for exclusive device memory
      nouveau/svm: refactor nouveau_range_fault
      nouveau/svm: implement atomic SVM access

Subsystem: procfs

Marcelo Henrique Cerri <marcelo.cerri@canonical.com>:
      proc: Avoid mixing integer types in mem_rw()

ZHOUFENG <zhoufeng.zf@bytedance.com>:
      fs/proc/kcore.c: add mmap interface

Kalesh Singh <kaleshsingh@google.com>:
      procfs: allow reading fdinfo with PTRACE_MODE_READ
      procfs/dmabuf: add inode number to /proc/*/fdinfo

Subsystem: sysctl

Jiapeng Chong <jiapeng.chong@linux.alibaba.com>:
      sysctl: remove redundant assignment to first

Subsystem: misc

Andy Shevchenko <andriy.shevchenko@linux.intel.com>:
      drm: include only needed headers in ascii85.h

Subsystem: core-kernel

Andy Shevchenko <andriy.shevchenko@linux.intel.com>:
      kernel.h: split out panic and oops helpers

Subsystem: lib

Zhen Lei <thunder.leizhen@huawei.com>:
      lib: decompress_bunzip2: remove an unneeded semicolon

Andy Shevchenko <andriy.shevchenko@linux.intel.com>:
Patch series "lib/string_helpers: get rid of ugly *_escape_mem_ascii()", v3:
      lib/string_helpers: switch to use BIT() macro
      lib/string_helpers: move ESCAPE_NP check inside 'else' branch in a loop
      lib/string_helpers: drop indentation level in string_escape_mem()
      lib/string_helpers: introduce ESCAPE_NA for escaping non-ASCII
      lib/string_helpers: introduce ESCAPE_NAP to escape non-ASCII and non-printable
      lib/string_helpers: allow to append additional characters to be escaped
      lib/test-string_helpers: print flags in hexadecimal format
      lib/test-string_helpers: get rid of trailing comma in terminators
      lib/test-string_helpers: add test cases for new features
      MAINTAINERS: add myself as designated reviewer for generic string library
      seq_file: introduce seq_escape_mem()
      seq_file: add seq_escape_str() as replica of string_escape_str()
      seq_file: convert seq_escape() to use seq_escape_str()
      nfsd: avoid non-flexible API in seq_quote_mem()
      seq_file: drop unused *_escape_mem_ascii()

Trent Piepho <tpiepho@gmail.com>:
      lib/math/rational.c: fix divide by zero
      lib/math/rational: add Kunit test cases

Zhen Lei <thunder.leizhen@huawei.com>:
      lib/decompressors: fix spelling mistakes
      lib/mpi: fix spelling mistakes

Alexey Dobriyan <adobriyan@gmail.com>:
      lib: memscan() fixlet
      lib: uninline simple_strtoull()

Matteo Croce <mcroce@microsoft.com>:
      lib/test_string.c: allow module removal

Andy Shevchenko <andriy.shevchenko@linux.intel.com>:
      kernel.h: split out kstrtox() and simple_strtox() to a separate header

Subsystem: lz4

Rajat Asthana <thisisrast7@gmail.com>:
      lz4_decompress: declare LZ4_decompress_safe_withPrefix64k static

Dimitri John Ledkov <dimitri.ledkov@canonical.com>:
      lib/decompress_unlz4.c: correctly handle zero-padding around initrds.

Subsystem: checkpatch

Guenter Roeck <linux@roeck-us.net>:
      checkpatch: scripts/spdxcheck.py now requires python3

Joe Perches <joe@perches.com>:
      checkpatch: improve the indented label test

Guenter Roeck <linux@roeck-us.net>:
      checkpatch: do not complain about positive return values starting with EPOLL

Subsystem: init

Andrew Halaney <ahalaney@redhat.com>:
      init: print out unknown kernel parameters

Subsystem: kprobes

Barry Song <song.bao.hua@hisilicon.com>:
      kprobes: remove duplicated strong free_insn_page in x86 and s390

Subsystem: nilfs2

Colin Ian King <colin.king@canonical.com>:
      nilfs2: remove redundant continue statement in a while-loop

Subsystem: hfs

Zhen Lei <thunder.leizhen@huawei.com>:
      hfsplus: remove unnecessary oom message

Chung-Chiang Cheng <shepjeng@gmail.com>:
      hfsplus: report create_date to kstat.btime

Subsystem: signals

Al Viro <viro@zeniv.linux.org.uk>:
      x86: signal: don't do sas_ss_reset() until we are certain that sigframe won't be abandoned

Subsystem: exec

Alexey Dobriyan <adobriyan@gmail.com>:
      exec: remove checks in __register_bimfmt()

Subsystem: kcov

Marco Elver
<elver@google.com>:
      kcov: add __no_sanitize_coverage to fix noinstr for all architectures

Subsystem: selftests

Dave Hansen <dave.hansen@linux.intel.com>:
Patch series "selftests/vm/pkeys: Bug fixes and a new test":
      selftests/vm/pkeys: fix alloc_random_pkey() to make it really, really random
      selftests/vm/pkeys: handle negative sys_pkey_alloc() return code
      selftests/vm/pkeys: refill shadow register after implicit kernel write
      selftests/vm/pkeys: exercise x86 XSAVE init state

Subsystem: compress/decompress

Yu Kuai <yukuai3@huawei.com>:
      lib/decompressors: remove set but not used variabled 'level'

Subsystem: ipc

Vasily Averin <vvs@virtuozzo.com>:
Patch series "ipc: allocations cleanup", v2:
      ipc sem: use kvmalloc for sem_undo allocation
      ipc: use kmalloc for msg_queue and shmid_kernel

Manfred Spraul <manfred@colorfullife.com>:
      ipc/sem.c: use READ_ONCE()/WRITE_ONCE() for use_global_lock
      ipc/util.c: use binary search for max_idx

 Documentation/admin-guide/kernel-parameters.txt | 35
 Documentation/admin-guide/mm/hugetlbpage.rst | 11
 Documentation/admin-guide/mm/memory-hotplug.rst | 13
 Documentation/admin-guide/mm/pagemap.rst | 2
 Documentation/admin-guide/mm/userfaultfd.rst | 3
 Documentation/core-api/kernel-api.rst | 7
 Documentation/filesystems/proc.rst | 48
 Documentation/vm/hmm.rst | 19
 Documentation/vm/unevictable-lru.rst | 33
 MAINTAINERS | 10
 arch/alpha/Kconfig | 5
 arch/alpha/include/asm/pgalloc.h | 1
 arch/alpha/include/asm/pgtable.h | 1
 arch/alpha/include/uapi/asm/mman.h | 3
 arch/alpha/kernel/setup.c | 2
 arch/arc/include/asm/pgalloc.h | 2
 arch/arc/include/asm/pgtable.h | 8
 arch/arm/Kconfig | 3
 arch/arm/include/asm/pgalloc.h | 1
 arch/arm64/Kconfig | 15
 arch/arm64/include/asm/hugetlb.h | 3
 arch/arm64/include/asm/memory.h | 2
 arch/arm64/include/asm/page.h | 4
 arch/arm64/include/asm/pgalloc.h | 1
 arch/arm64/include/asm/pgtable.h | 2
 arch/arm64/kernel/setup.c | 1
 arch/arm64/kvm/mmu.c | 2
 arch/arm64/mm/hugetlbpage.c | 5
 arch/arm64/mm/init.c | 51
 arch/arm64/mm/ioremap.c | 4
 arch/arm64/mm/mmu.c | 22
 arch/csky/include/asm/pgalloc.h | 2
 arch/csky/include/asm/pgtable.h | 1
 arch/hexagon/include/asm/pgtable.h | 4
 arch/ia64/Kconfig | 7
 arch/ia64/include/asm/pal.h | 1
 arch/ia64/include/asm/pgalloc.h | 1
 arch/ia64/include/asm/pgtable.h | 1
 arch/m68k/Kconfig | 5
 arch/m68k/include/asm/mcf_pgalloc.h | 2
 arch/m68k/include/asm/mcf_pgtable.h | 2
 arch/m68k/include/asm/motorola_pgalloc.h | 1
 arch/m68k/include/asm/motorola_pgtable.h | 2
 arch/m68k/include/asm/pgtable_mm.h | 1
 arch/m68k/include/asm/sun3_pgalloc.h | 1
 arch/microblaze/Kconfig | 4
 arch/microblaze/include/asm/pgalloc.h | 2
 arch/microblaze/include/asm/pgtable.h | 2
 arch/mips/Kconfig | 10
 arch/mips/include/asm/pgalloc.h | 1
 arch/mips/include/asm/pgtable-32.h | 1
 arch/mips/include/asm/pgtable-64.h | 1
 arch/mips/include/uapi/asm/mman.h | 3
 arch/mips/kernel/relocate.c | 1
 arch/mips/sgi-ip22/ip22-reset.c | 1
 arch/mips/sgi-ip32/ip32-reset.c | 1
 arch/nds32/include/asm/pgalloc.h | 5
 arch/nios2/include/asm/pgalloc.h | 1
 arch/nios2/include/asm/pgtable.h | 2
 arch/openrisc/include/asm/pgalloc.h | 2
 arch/openrisc/include/asm/pgtable.h | 1
 arch/parisc/include/asm/pgalloc.h | 1
 arch/parisc/include/asm/pgtable.h | 2
 arch/parisc/include/uapi/asm/mman.h | 3
 arch/parisc/kernel/pdc_chassis.c | 1
 arch/powerpc/Kconfig | 6
 arch/powerpc/include/asm/book3s/pgtable.h | 1
 arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h | 5
 arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 43
 arch/powerpc/include/asm/nohash/32/pgtable.h | 1
 arch/powerpc/include/asm/nohash/64/pgtable.h | 2
 arch/powerpc/include/asm/pgalloc.h | 5
 arch/powerpc/include/asm/pgtable.h | 6
 arch/powerpc/kernel/setup-common.c | 1
 arch/powerpc/platforms/Kconfig.cputype | 1
 arch/riscv/Kconfig | 5
 arch/riscv/include/asm/pgalloc.h | 2
 arch/riscv/include/asm/pgtable.h | 2
 arch/s390/Kconfig | 6
 arch/s390/include/asm/pgalloc.h | 3
 arch/s390/include/asm/pgtable.h | 5
 arch/s390/kernel/ipl.c | 1
 arch/s390/kernel/kprobes.c | 5
 arch/s390/mm/pgtable.c | 2
 arch/sh/include/asm/pgalloc.h | 1
 arch/sh/include/asm/pgtable.h | 2
 arch/sparc/Kconfig | 5
 arch/sparc/include/asm/pgalloc_32.h | 1
 arch/sparc/include/asm/pgalloc_64.h | 1
 arch/sparc/include/asm/pgtable_32.h | 3
 arch/sparc/include/asm/pgtable_64.h | 8
 arch/sparc/kernel/sstate.c | 1
 arch/sparc/mm/hugetlbpage.c | 6
 arch/sparc/mm/init_64.c | 1
 arch/um/drivers/mconsole_kern.c | 1
 arch/um/include/asm/pgalloc.h | 1
 arch/um/include/asm/pgtable-2level.h | 1
 arch/um/include/asm/pgtable-3level.h | 1
 arch/um/kernel/um_arch.c | 1
 arch/x86/Kconfig | 17
 arch/x86/include/asm/desc.h | 1
 arch/x86/include/asm/pgalloc.h | 2
 arch/x86/include/asm/pgtable_types.h | 2
 arch/x86/kernel/cpu/mshyperv.c | 1
 arch/x86/kernel/kprobes/core.c | 6
 arch/x86/kernel/setup.c | 1
 arch/x86/mm/init_64.c | 21
 arch/x86/mm/pgtable.c | 34
 arch/x86/purgatory/purgatory.c | 2
 arch/x86/xen/enlighten.c | 1
 arch/xtensa/include/asm/pgalloc.h | 2
 arch/xtensa/include/asm/pgtable.h | 1
 arch/xtensa/include/uapi/asm/mman.h | 3
 arch/xtensa/platforms/iss/setup.c | 1
 drivers/block/zram/zram_drv.h | 2
 drivers/bus/brcmstb_gisb.c | 1
 drivers/char/ipmi/ipmi_msghandler.c | 1
 drivers/clk/analogbits/wrpll-cln28hpc.c | 4
 drivers/edac/altera_edac.c | 1
 drivers/firmware/google/gsmi.c | 1
 drivers/gpu/drm/nouveau/include/nvif/if000c.h | 1
 drivers/gpu/drm/nouveau/nouveau_svm.c | 162 ++-
 drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h | 1
 drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c | 6
 drivers/hv/vmbus_drv.c | 1
 drivers/hwtracing/coresight/coresight-cpu-debug.c | 1
 drivers/leds/trigger/ledtrig-activity.c | 1
 drivers/leds/trigger/ledtrig-heartbeat.c | 1
 drivers/leds/trigger/ledtrig-panic.c | 1
 drivers/misc/bcm-vk/bcm_vk_dev.c | 1
 drivers/misc/ibmasm/heartbeat.c | 1
 drivers/misc/pvpanic/pvpanic.c | 1
 drivers/net/ipa/ipa_smp2p.c | 1
 drivers/parisc/power.c | 1
 drivers/power/reset/ltc2952-poweroff.c | 1
 drivers/remoteproc/remoteproc_core.c | 1
 drivers/s390/char/con3215.c | 1
 drivers/s390/char/con3270.c | 1
 drivers/s390/char/sclp.c | 1
 drivers/s390/char/sclp_con.c | 1
 drivers/s390/char/sclp_vt220.c | 1
 drivers/s390/char/zcore.c | 1
 drivers/soc/bcm/brcmstb/pm/pm-arm.c | 1
 drivers/staging/olpc_dcon/olpc_dcon.c | 1
 drivers/video/fbdev/hyperv_fb.c | 1
 drivers/virtio/virtio_mem.c | 2
 fs/Kconfig | 15
 fs/exec.c | 3
 fs/hfsplus/inode.c | 5
 fs/hfsplus/xattr.c | 1
 fs/nfsd/nfs4state.c | 2
 fs/nilfs2/btree.c | 1
 fs/open.c | 13
 fs/proc/base.c | 6
 fs/proc/fd.c | 20
 fs/proc/kcore.c | 136 ++
 fs/proc/task_mmu.c | 34
 fs/seq_file.c | 43
 fs/userfaultfd.c | 15
 include/asm-generic/bug.h | 3
 include/linux/ascii85.h | 3
 include/linux/bootmem_info.h | 68 +
 include/linux/compat.h | 2
 include/linux/compiler-clang.h | 17
 include/linux/compiler-gcc.h | 6
 include/linux/compiler_types.h | 2
 include/linux/huge_mm.h | 74 -
 include/linux/hugetlb.h | 80 +
 include/linux/hugetlb_cgroup.h | 19
 include/linux/kcore.h | 3
 include/linux/kernel.h | 227 ----
 include/linux/kprobes.h | 1
 include/linux/kstrtox.h | 155 ++
 include/linux/memblock.h | 4
 include/linux/memory_hotplug.h | 27
 include/linux/mempolicy.h | 9
 include/linux/memremap.h | 2
 include/linux/migrate.h | 27
 include/linux/mm.h | 18
 include/linux/mm_types.h | 2
 include/linux/mmu_notifier.h | 26
 include/linux/mmzone.h | 27
 include/linux/mpi.h | 4
 include/linux/page-flags.h | 22
 include/linux/panic.h | 98 +
 include/linux/panic_notifier.h | 12
 include/linux/pgtable.h | 44
 include/linux/rmap.h | 13
 include/linux/seq_file.h | 10
 include/linux/shmem_fs.h | 19
 include/linux/signal.h | 2
 include/linux/string.h | 7
 include/linux/string_helpers.h | 31
 include/linux/sunrpc/cache.h | 1
 include/linux/swap.h | 19
 include/linux/swapops.h | 171 +--
 include/linux/thread_info.h | 1
 include/linux/userfaultfd_k.h | 5
 include/linux/vmalloc.h | 15
 include/linux/zbud.h | 23
 include/trace/events/vmscan.h | 41
 include/uapi/asm-generic/mman-common.h | 3
 include/uapi/linux/mempolicy.h | 1
 include/uapi/linux/userfaultfd.h | 7
 init/main.c | 42
 ipc/msg.c | 6
 ipc/sem.c | 25
 ipc/shm.c | 6
 ipc/util.c | 44
 ipc/util.h | 3
 kernel/hung_task.c | 1
 kernel/kexec_core.c | 1
 kernel/kprobes.c | 2
 kernel/panic.c | 1
 kernel/rcu/tree.c | 2
 kernel/signal.c | 14
 kernel/sysctl.c | 4
 kernel/trace/trace.c | 1
 lib/Kconfig.debug | 12
 lib/decompress_bunzip2.c | 6
 lib/decompress_unlz4.c | 8
 lib/decompress_unlzo.c | 3
 lib/decompress_unxz.c | 2
 lib/decompress_unzstd.c | 4
 lib/kstrtox.c | 5
 lib/lz4/lz4_decompress.c | 2
 lib/math/Makefile | 1
 lib/math/rational-test.c | 56 +
 lib/math/rational.c | 16
 lib/mpi/longlong.h | 4
 lib/mpi/mpicoder.c | 6
 lib/mpi/mpiutil.c | 2
 lib/parser.c | 1
 lib/string.c | 2
 lib/string_helpers.c | 142 +-
 lib/test-string_helpers.c | 157 ++-
 lib/test_hmm.c | 127 ++
 lib/test_hmm_uapi.h | 2
 lib/test_string.c | 5
 lib/vsprintf.c | 1
 lib/xz/xz_dec_bcj.c | 2
 lib/xz/xz_dec_lzma2.c | 8
 lib/zlib_inflate/inffast.c | 2
 lib/zstd/huf.h | 2
 mm/Kconfig | 16
 mm/Makefile | 2
 mm/bootmem_info.c | 127 ++
 mm/compaction.c | 20
 mm/debug_vm_pgtable.c | 109 --
 mm/gup.c | 58 +
 mm/hmm.c | 12
 mm/huge_memory.c | 269 ++--
 mm/hugetlb.c | 369 +++++--
 mm/hugetlb_vmemmap.c | 332 ++++++
 mm/hugetlb_vmemmap.h | 53 -
 mm/internal.h | 29
 mm/kfence/core.c | 4
 mm/khugepaged.c | 20
 mm/madvise.c | 66 +
 mm/mapping_dirty_helpers.c | 2
 mm/memblock.c | 28
 mm/memcontrol.c | 4
 mm/memory-failure.c | 38
 mm/memory.c | 239 +++-
 mm/memory_hotplug.c | 161 ---
 mm/mempolicy.c | 323 ++--
 mm/migrate.c | 268 +---
 mm/mlock.c | 12
 mm/mmap_lock.c | 59 -
 mm/mprotect.c | 18
 mm/nommu.c | 5
 mm/oom_kill.c | 2
 mm/page_alloc.c | 5
 mm/page_vma_mapped.c | 15
 mm/rmap.c | 644 +++++++++--
 mm/shmem.c | 125 --
 mm/sparse-vmemmap.c | 432 +++++++-
 mm/sparse.c | 1
 mm/swap.c | 2
 mm/swapfile.c | 2
 mm/userfaultfd.c | 249 ++--
 mm/util.c | 40
 mm/vmalloc.c | 37
 mm/vmscan.c | 20
 mm/workingset.c | 10
 mm/z3fold.c | 39
 mm/zbud.c | 235 ++--
 mm/zsmalloc.c | 5
 mm/zswap.c | 26
 scripts/checkpatch.pl | 16
 tools/testing/selftests/vm/.gitignore | 3
 tools/testing/selftests/vm/Makefile | 5
 tools/testing/selftests/vm/hmm-tests.c | 158 +++
 tools/testing/selftests/vm/khugepaged.c | 4
 tools/testing/selftests/vm/madv_populate.c | 342 ++++++
 tools/testing/selftests/vm/pkey-x86.h | 1
 tools/testing/selftests/vm/protection_keys.c | 85 +
 tools/testing/selftests/vm/run_vmtests.sh | 16
 tools/testing/selftests/vm/userfaultfd.c | 1094 ++++++++++-----
 299 files changed, 6277 insertions(+), 3183 deletions(-)

^ permalink raw reply [flat|nested] 349+ messages in thread
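The per-file summary and the "299 files changed, …" trailer above are plain `git diff --stat` output. As a hedged sketch of how such a summary is produced (throwaway repository; the paths and counts here are invented — against a real tree the base would be the commit named at the top of the email):

```shell
# Build a tiny repo with one modified file and one new file, then show
# the diffstat between the base commit and HEAD.  Everything here is
# illustrative; no real kernel paths are involved.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
mkdir -p mm
printf 'a\nb\n' > mm/page_alloc.c
git add -A
git commit -qm base
base=$(git rev-parse HEAD)
printf 'a\nb\nc\nd\n' > mm/page_alloc.c   # 2 lines added to an existing file
printf 'x\n' > mm/bootmem_info.c          # new 1-line file
git add -A
git commit -qm series
# Summary ends with " 2 files changed, 3 insertions(+)"
git diff --stat "$base"..HEAD | tee stat.out
```

`git request-pull` embeds the same diffstat format, which is why these announcement emails all share this layout.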
* Re: incoming
  2021-07-01  1:46 incoming Andrew Morton
@ 2021-07-03  0:28 ` Linus Torvalds
  2021-07-03  1:06   ` incoming Linus Torvalds
  0 siblings, 1 reply; 349+ messages in thread

From: Linus Torvalds @ 2021-07-03 0:28 UTC (permalink / raw)
To: Andrew Morton; +Cc: Linux-MM, mm-commits

On Wed, Jun 30, 2021 at 6:46 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> This is the rest of the -mm tree, less 66 patches which are dependent on
> things which are (or were recently) in linux-next. I'll trickle that
> material over next week.

I haven't bisected this yet, but with the current -git I'm getting

  watchdog: BUG: soft lockup - CPU#41 stuck for 49s!

and the common call chain seems to be in flush_tlb_mm_range ->
on_each_cpu_cond_mask.

Commit e058a84bfddc42ba356a2316f2cf1141974625c9 is good, and looking at
the pulls and merges I've done since, this -mm series looks like the
obvious culprit.

I'll go start bisection, but I thought I'd give a heads-up in case
somebody else has seen TLB-flush-related lockups and already figured out
the guilty party..

             Linus
* Re: incoming
  2021-07-03  0:28 ` incoming Linus Torvalds
@ 2021-07-03  1:06 ` Linus Torvalds
  0 siblings, 0 replies; 349+ messages in thread

From: Linus Torvalds @ 2021-07-03 1:06 UTC (permalink / raw)
To: Andrew Morton; +Cc: Linux-MM, mm-commits

On Fri, Jul 2, 2021 at 5:28 PM Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
> Commit e058a84bfddc42ba356a2316f2cf1141974625c9 is good, and looking at
> the pulls and merges I've done since, this -mm series looks like the
> obvious culprit.

No, unless my bisection is wrong, the -mm branch is innocent, and was
discarded from the suspects on the very first bisection trial.

So never mind.

             Linus
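The workflow Linus describes above (mark e058a84 as good, the current tip as bad, and let git halve the range) can be sketched with `git bisect run`. This is a hedged illustration on a throwaway repository: the commits, the test command, and the `bug.txt` marker standing in for the soft-lockup symptom are all invented.

```shell
# Throwaway repo: commits 1..8, where commits 6..8 carry a marker file
# standing in for the regression being hunted.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
for i in 1 2 3 4 5 6 7 8; do
    echo "change $i" >> file.txt
    if [ "$i" -ge 6 ]; then echo broken > bug.txt; fi
    git add -A
    git commit -qm "commit $i"
done
good=$(git rev-parse HEAD~7)          # commit 1 is known good
git bisect start HEAD "$good"         # HEAD (commit 8) is known bad
# "git bisect run" replays the test at every step: exit 0 = good, 1 = bad.
git bisect run sh -c '! test -f bug.txt' > bisect.out 2>&1 || true
grep "is the first bad commit" bisect.out
git bisect reset >/dev/null 2>&1
```

On a real kernel regression the test command would be a boot-and-stress script rather than a file check, but the bisect mechanics are the same.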
* incoming
@ 2021-06-29  2:32 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2021-06-29 2:32 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits

192 patches, based on 7cf3dead1ad70c72edb03e2d98e1f3dcd332cdb2.

Subsystems affected by this patch series:

  mm/gup mm/pagealloc kthread ia64 scripts ntfs squashfs ocfs2 z
  kernel/watchdog mm/slab mm/slub mm/kmemleak mm/dax mm/debug
  mm/pagecache mm/gup mm/swap mm/memcg mm/pagemap mm/mprotect
  mm/bootmem mm/dma mm/tracing mm/vmalloc mm/kasan mm/initialization
  mm/pagealloc mm/memory-failure

Subsystem: mm/gup

Jann Horn <jannh@google.com>:
      mm/gup: fix try_grab_compound_head() race with split_huge_page()

Subsystem: mm/pagealloc

Mike Rapoport <rppt@linux.ibm.com>:
      mm/page_alloc: fix memory map initialization for descending nodes

Mel Gorman <mgorman@techsingularity.net>:
      mm/page_alloc: correct return value of populated elements if bulk array is populated

Subsystem: kthread

Jonathan Neuschäfer <j.neuschaefer@gmx.net>:
      kthread: switch to new kerneldoc syntax for named variable macro argument

Petr Mladek <pmladek@suse.com>:
      kthread_worker: fix return value when kthread_mod_delayed_work() races with kthread_cancel_delayed_work_sync()

Subsystem: ia64

Randy Dunlap <rdunlap@infradead.org>:
      ia64: headers: drop duplicated words

Arnd Bergmann <arnd@arndb.de>:
      ia64: mca_drv: fix incorrect array size calculation

Subsystem: scripts

"Steven Rostedt (VMware)" <rostedt@goodmis.org>:
Patch series "streamline_config.pl: Fix Perl spacing":
      streamline_config.pl: make spacing consistent
      streamline_config.pl: add softtabstop=4 for vim users

Colin Ian King <colin.king@canonical.com>:
      scripts/spelling.txt: add more spellings to spelling.txt

Subsystem: ntfs

Desmond Cheong Zhi Xi <desmondcheongzx@gmail.com>:
      ntfs: fix validity check for file name attribute

Subsystem: squashfs

Vincent Whitchurch <vincent.whitchurch@axis.com>:
      squashfs: add option to panic on errors

Subsystem: ocfs2

Yang Yingliang
<yangyingliang@huawei.com>:
      ocfs2: remove unnecessary INIT_LIST_HEAD()

Subsystem: z

Dan Carpenter <dan.carpenter@oracle.com>:
      ocfs2: fix snprintf() checking

Colin Ian King <colin.king@canonical.com>:
      ocfs2: remove redundant assignment to pointer queue

Wan Jiabing <wanjiabing@vivo.com>:
      ocfs2: remove repeated uptodate check for buffer

Chen Huang <chenhuang5@huawei.com>:
      ocfs2: replace simple_strtoull() with kstrtoull()

Colin Ian King <colin.king@canonical.com>:
      ocfs2: remove redundant initialization of variable ret

Subsystem: kernel/watchdog

Wang Qing <wangqing@vivo.com>:
      kernel: watchdog: modify the explanation related to watchdog thread
      doc: watchdog: modify the explanation related to watchdog thread
      doc: watchdog: modify the doc related to "watchdog/%u"

Subsystem: mm/slab

gumingtao <gumingtao1225@gmail.com>:
      slab: use __func__ to trace function name

Subsystem: mm/slub

Vlastimil Babka <vbabka@suse.cz>:
      kunit: make test->lock irq safe

Oliver Glitta <glittao@gmail.com>:
      mm/slub, kunit: add a KUnit test for SLUB debugging functionality
      slub: remove resiliency_test() function

Hyeonggon Yoo <42.hyeyoo@gmail.com>:
      mm, slub: change run-time assertion in kmalloc_index() to compile-time

Stephen Boyd <swboyd@chromium.org>:
      slub: restore slub_debug=- behavior
      slub: actually use 'message' in restore_bytes()

Joe Perches <joe@perches.com>:
      slub: indicate slab_fix() uses printf formats

Stephen Boyd <swboyd@chromium.org>:
      slub: force on no_hash_pointers when slub_debug is enabled

Faiyaz Mohammed <faiyazm@codeaurora.org>:
      mm: slub: move sysfs slab alloc/free interfaces to debugfs

Georgi Djakov <quic_c_gdjako@quicinc.com>:
      mm/slub: add taint after the errors are printed

Subsystem: mm/kmemleak

Yanfei Xu <yanfei.xu@windriver.com>:
      mm/kmemleak: fix possible wrong memory scanning period

Subsystem: mm/dax

Jan Kara <jack@suse.cz>:
      dax: fix ENOMEM handling in grab_mapping_entry()

Subsystem: mm/debug

Tang Bin <tangbin@cmss.chinamobile.com>:
      tools/vm/page_owner_sort.c: check malloc() return

Anshuman Khandual <anshuman.khandual@arm.com>:
      mm/debug_vm_pgtable: ensure THP availability via has_transparent_hugepage()

Nicolas Saenz Julienne <nsaenzju@redhat.com>:
      mm: mmap_lock: use local locks instead of disabling preemption

Gavin Shan <gshan@redhat.com>:
Patch series "mm/page_reporting: Make page reporting work on arm64 with 64KB page size", v4:
      mm/page_reporting: fix code style in __page_reporting_request()
      mm/page_reporting: export reporting order as module parameter
      mm/page_reporting: allow driver to specify reporting order
      virtio_balloon: specify page reporting order if needed

Subsystem: mm/pagecache

Kefeng Wang <wangkefeng.wang@huawei.com>:
      mm: page-writeback: kill get_writeback_state() comments

Chi Wu <wuchi.zero@gmail.com>:
      mm/page-writeback: Fix performance when BDI's share of ratio is 0.
      mm/page-writeback: update the comment of Dirty position control
      mm/page-writeback: use __this_cpu_inc() in account_page_dirtied()

Roman Gushchin <guro@fb.com>:
Patch series "cgroup, blkcg: prevent dirty inodes to pin dying memory cgroups", v9:
      writeback, cgroup: do not switch inodes with I_WILL_FREE flag
      writeback, cgroup: add smp_mb() to cgroup_writeback_umount()
      writeback, cgroup: increment isw_nr_in_flight before grabbing an inode
      writeback, cgroup: switch to rcu_work API in inode_switch_wbs()
      writeback, cgroup: keep list of inodes attached to bdi_writeback
      writeback, cgroup: split out the functional part of inode_switch_wbs_work_fn()
      writeback, cgroup: support switching multiple inodes at once
      writeback, cgroup: release dying cgwbs by switching attached inodes

Christoph Hellwig <hch@lst.de>:
Patch series "remove the implicit .set_page_dirty default":
      fs: unexport __set_page_dirty
      fs: move ramfs_aops to libfs
      mm: require ->set_page_dirty to be explicitly wired up

"Matthew Wilcox (Oracle)" <willy@infradead.org>:
Patch series "Further set_page_dirty cleanups":
      mm/writeback: move __set_page_dirty() to core mm
      mm/writeback: use __set_page_dirty in __set_page_dirty_nobuffers
      iomap: use __set_page_dirty_nobuffers
      fs: remove anon_set_page_dirty()
      fs: remove noop_set_page_dirty()
      mm: move page dirtying prototypes from mm.h

Subsystem: mm/gup

Peter Xu <peterx@redhat.com>:
Patch series "mm/gup: Fix pin page write cache bouncing on has_pinned", v2:
      mm/gup_benchmark: support threading

Andrea Arcangeli <aarcange@redhat.com>:
      mm: gup: allow FOLL_PIN to scale in SMP
      mm: gup: pack has_pinned in MMF_HAS_PINNED

Christophe Leroy <christophe.leroy@csgroup.eu>:
      mm: pagewalk: fix walk for hugepage tables

Subsystem: mm/swap

Miaohe Lin <linmiaohe@huawei.com>:
Patch series "close various race windows for swap", v6:
      mm/swapfile: use percpu_ref to serialize against concurrent swapoff
      swap: fix do_swap_page() race with swapoff
      mm/swap: remove confusing checking for non_swap_entry() in swap_ra_info()
      mm/shmem: fix shmem_swapin() race with swapoff
Patch series "Cleanups for swap", v2:
      mm/swapfile: move get_swap_page_of_type() under CONFIG_HIBERNATION
      mm/swap: remove unused local variable nr_shadows
      mm/swap_slots.c: delete meaningless forward declarations

Huang Ying <ying.huang@intel.com>:
      mm, swap: remove unnecessary smp_rmb() in swap_type_to_swap_info()
      mm: free idle swap cache page after COW
      swap: check mapping_empty() for swap cache before being freed

Subsystem: mm/memcg

Waiman Long <longman@redhat.com>:
Patch series "mm/memcg: Reduce kmemcache memory accounting overhead", v6:
      mm/memcg: move mod_objcg_state() to memcontrol.c
      mm/memcg: cache vmstat data in percpu memcg_stock_pcp
      mm/memcg: improve refill_obj_stock() performance
      mm/memcg: optimize user context object stock access
Patch series "mm: memcg/slab: Fix objcg pointer array handling problem", v4:
      mm: memcg/slab: properly set up gfp flags for objcg pointer array
      mm: memcg/slab: create a new set of kmalloc-cg-<n> caches
      mm: memcg/slab: disable cache merging for KMALLOC_NORMAL caches

Muchun Song <songmuchun@bytedance.com>:
      mm: memcontrol: fix root_mem_cgroup charging
Patch series "memcontrol code cleanup and simplification", v3:
      mm: memcontrol: fix page charging in page replacement
      mm: memcontrol: bail out early when !mm in get_mem_cgroup_from_mm
      mm: memcontrol: remove the pgdata parameter of mem_cgroup_page_lruvec
      mm: memcontrol: simplify lruvec_holds_page_lru_lock
      mm: memcontrol: rename lruvec_holds_page_lru_lock to page_matches_lruvec
      mm: memcontrol: simplify the logic of objcg pinning memcg
      mm: memcontrol: move obj_cgroup_uncharge_pages() out of css_set_lock
      mm: vmscan: remove noinline_for_stack

wenhuizhang <wenhui@gwmail.gwu.edu>:
      memcontrol: use flexible-array member

Dan Schatzberg <schatzberg.dan@gmail.com>:
Patch series "Charge loop device i/o to issuing cgroup", v14:
      loop: use worker per cgroup instead of kworker
      mm: charge active memcg when no mm is set
      loop: charge i/o to mem and blk cg

Huilong Deng <denghuilong@cdjrlc.com>:
      mm: memcontrol: remove trailing semicolon in macros

Subsystem: mm/pagemap

David Hildenbrand <david@redhat.com>:
Patch series "perf/binfmt/mm: remove in-tree usage of MAP_EXECUTABLE":
      perf: MAP_EXECUTABLE does not indicate VM_MAYEXEC
      binfmt: remove in-tree usage of MAP_EXECUTABLE
      mm: ignore MAP_EXECUTABLE in ksys_mmap_pgoff()

Gonzalo Matias Juarez Tello <gmjuareztello@gmail.com>:
      mm/mmap.c: logic of find_vma_intersection repeated in __do_munmap

Liam Howlett <liam.howlett@oracle.com>:
      mm/mmap: introduce unlock_range() for code cleanup
      mm/mmap: use find_vma_intersection() in do_mmap() for overlap

Liu Xiang <liu.xiang@zlingsmart.com>:
      mm/memory.c: fix comment of finish_mkwrite_fault()

Liam Howlett <liam.howlett@oracle.com>:
Patch series "mm: Add vma_lookup()", v2:
      mm: add vma_lookup(), update find_vma_intersection() comments
      drm/i915/selftests: use vma_lookup() in __igt_mmap()
      arch/arc/kernel/troubleshoot: use vma_lookup() instead of find_vma()
      arch/arm64/kvm: use vma_lookup() instead of find_vma_intersection()
      arch/powerpc/kvm/book3s_hv_uvmem: use vma_lookup() instead of find_vma_intersection()
      arch/powerpc/kvm/book3s: use vma_lookup() in kvmppc_hv_setup_htab_rma()
      arch/mips/kernel/traps: use vma_lookup() instead of find_vma()
      arch/m68k/kernel/sys_m68k: use vma_lookup() in sys_cacheflush()
      x86/sgx: use vma_lookup() in sgx_encl_find()
      virt/kvm: use vma_lookup() instead of find_vma_intersection()
      vfio: use vma_lookup() instead of find_vma_intersection()
      net/ipv4/tcp: use vma_lookup() in tcp_zerocopy_receive()
      drm/amdgpu: use vma_lookup() in amdgpu_ttm_tt_get_user_pages()
      media: videobuf2: use vma_lookup() in get_vaddr_frames()
      misc/sgi-gru/grufault: use vma_lookup() in gru_find_vma()
      kernel/events/uprobes: use vma_lookup() in find_active_uprobe()
      lib/test_hmm: use vma_lookup() in dmirror_migrate()
      mm/ksm: use vma_lookup() in find_mergeable_vma()
      mm/migrate: use vma_lookup() in do_pages_stat_array()
      mm/mremap: use vma_lookup() in vma_to_resize()
      mm/memory.c: use vma_lookup() in __access_remote_vm()
      mm/mempolicy: use vma_lookup() in __access_remote_vm()

Chen Li <chenli@uniontech.com>:
      mm: update legacy flush_tlb_* to use vma

Subsystem: mm/mprotect

Peter Collingbourne <pcc@google.com>:
      mm: improve mprotect(R|W) efficiency on pages referenced once

Subsystem: mm/bootmem

Souptick Joarder <jrdr.linux@gmail.com>:
      h8300: remove unused variable

Subsystem: mm/dma

YueHaibing <yuehaibing@huawei.com>:
      mm/dmapool: use DEVICE_ATTR_RO macro

Subsystem: mm/tracing

Vincent Whitchurch <vincent.whitchurch@axis.com>:
      mm, tracing: unify PFN format strings

Subsystem: mm/vmalloc

"Uladzislau Rezki (Sony)" <urezki@gmail.com>:
Patch series "vmalloc() vs bulk allocator", v2:
      mm/page_alloc: add an alloc_pages_bulk_array_node() helper
      mm/vmalloc: switch to bulk allocator in __vmalloc_area_node()
      mm/vmalloc: print a warning message first on failure
      mm/vmalloc: remove quoted strings split across lines

Uladzislau Rezki <urezki@gmail.com>:
      mm/vmalloc: fallback to a single page allocator

Rafael Aquini <aquini@redhat.com>:
      mm: vmalloc: add cond_resched() in __vunmap()

Subsystem: mm/kasan
Alexander Potapenko <glider@google.com>:
      printk: introduce dump_stack_lvl()
      kasan: use dump_stack_lvl(KERN_ERR) to print stacks

David Gow <davidgow@google.com>:
      kasan: test: improve failure message in KUNIT_EXPECT_KASAN_FAIL()

Daniel Axtens <dja@axtens.net>:
Patch series "KASAN core changes for ppc64 radix KASAN", v16:
      kasan: allow an architecture to disable inline instrumentation
      kasan: allow architectures to provide an outline readiness check
      mm: define default MAX_PTRS_PER_* in include/pgtable.h
      kasan: use MAX_PTRS_PER_* for early shadow tables

Kuan-Ying Lee <Kuan-Ying.Lee@mediatek.com>:
Patch series "kasan: add memory corruption identification support for hw tag-based kasan", v4:
      kasan: rename CONFIG_KASAN_SW_TAGS_IDENTIFY to CONFIG_KASAN_TAGS_IDENTIFY
      kasan: integrate the common part of two KASAN tag-based modes
      kasan: add memory corruption identification support for hardware tag-based mode

Subsystem: mm/initialization

Jungseung Lee <js07.lee@samsung.com>:
      mm: report which part of mem is being freed on initmem case

Subsystem: mm/pagealloc

Mike Rapoport <rppt@linux.ibm.com>:
      mm/mmzone.h: simplify is_highmem_idx()

"Matthew Wilcox (Oracle)" <willy@infradead.org>:
Patch series "Constify struct page arguments":
      mm: make __dump_page static

Aaron Tomlin <atomlin@redhat.com>:
      mm/page_alloc: bail out on fatal signal during reclaim/compaction retry attempt

"Matthew Wilcox (Oracle)" <willy@infradead.org>:
      mm/debug: factor PagePoisoned out of __dump_page
      mm/page_owner: constify dump_page_owner
      mm: make compound_head const-preserving
      mm: constify get_pfnblock_flags_mask and get_pfnblock_migratetype
      mm: constify page_count and page_ref_count
      mm: optimise nth_page for contiguous memmap

Heiner Kallweit <hkallweit1@gmail.com>:
      mm/page_alloc: switch to pr_debug

Andrii Nakryiko <andrii@kernel.org>:
      kbuild: skip per-CPU BTF generation for pahole v1.18-v1.21

Mel Gorman <mgorman@techsingularity.net>:
      mm/page_alloc: split per cpu page lists and zone stats
      mm/page_alloc: convert per-cpu list protection to local_lock
      mm/vmstat: convert NUMA statistics to basic NUMA counters
      mm/vmstat: inline NUMA event counter updates
      mm/page_alloc: batch the accounting updates in the bulk allocator
      mm/page_alloc: reduce duration that IRQs are disabled for VM counters
      mm/page_alloc: explicitly acquire the zone lock in __free_pages_ok
      mm/page_alloc: avoid conflating IRQs disabled with zone->lock
      mm/page_alloc: update PGFREE outside the zone lock in __free_pages_ok

Minchan Kim <minchan@kernel.org>:
      mm: page_alloc: dump migrate-failed pages only at -EBUSY

Mel Gorman <mgorman@techsingularity.net>:
Patch series "Calculate pcp->high based on zone sizes and active CPUs", v2:
      mm/page_alloc: delete vm.percpu_pagelist_fraction
      mm/page_alloc: disassociate the pcp->high from pcp->batch
      mm/page_alloc: adjust pcp->high after CPU hotplug events
      mm/page_alloc: scale the number of pages that are batch freed
      mm/page_alloc: limit the number of pages on PCP lists when reclaim is active
      mm/page_alloc: introduce vm.percpu_pagelist_high_fraction

Dong Aisheng <aisheng.dong@nxp.com>:
      mm: drop SECTION_SHIFT in code comments
      mm/page_alloc: improve memmap_pages dbg msg

Liu Shixin <liushixin2@huawei.com>:
      mm/page_alloc: fix counting of managed_pages

Mel Gorman <mgorman@techsingularity.net>:
Patch series "Allow high order pages to be stored on PCP", v2:
      mm/page_alloc: move free_the_page

Mike Rapoport <rppt@linux.ibm.com>:
Patch series "Remove DISCONTIGMEM memory model", v3:
      alpha: remove DISCONTIGMEM and NUMA
      arc: update comment about HIGHMEM implementation
      arc: remove support for DISCONTIGMEM
      m68k: remove support for DISCONTIGMEM
      mm: remove CONFIG_DISCONTIGMEM
      arch, mm: remove stale mentions of DISCONIGMEM
      docs: remove description of DISCONTIGMEM
      mm: replace CONFIG_NEED_MULTIPLE_NODES with CONFIG_NUMA
      mm: replace CONFIG_FLAT_NODE_MEM_MAP with CONFIG_FLATMEM

Mel Gorman <mgorman@techsingularity.net>:
      mm/page_alloc: allow high-order pages to be stored on the per-cpu lists
      mm/page_alloc: split pcp->high across all online CPUs for cpuless nodes

Subsystem: mm/memory-failure

Naoya Horiguchi <naoya.horiguchi@nec.com>:
      mm,hwpoison: send SIGBUS with error virutal address
      mm,hwpoison: make get_hwpoison_page() call get_any_page()

 Documentation/admin-guide/kernel-parameters.txt | 6
 Documentation/admin-guide/lockup-watchdogs.rst | 4
 Documentation/admin-guide/sysctl/kernel.rst | 10
 Documentation/admin-guide/sysctl/vm.rst | 52 -
 Documentation/dev-tools/kasan.rst | 9
 Documentation/vm/memory-model.rst | 45
 arch/alpha/Kconfig | 22
 arch/alpha/include/asm/machvec.h | 6
 arch/alpha/include/asm/mmzone.h | 100 --
 arch/alpha/include/asm/pgtable.h | 4
 arch/alpha/include/asm/topology.h | 39
 arch/alpha/kernel/core_marvel.c | 53 -
 arch/alpha/kernel/core_wildfire.c | 29
 arch/alpha/kernel/pci_iommu.c | 29
 arch/alpha/kernel/proto.h | 8
 arch/alpha/kernel/setup.c | 16
 arch/alpha/kernel/sys_marvel.c | 5
 arch/alpha/kernel/sys_wildfire.c | 5
 arch/alpha/mm/Makefile | 2
 arch/alpha/mm/init.c | 3
 arch/alpha/mm/numa.c | 223 ----
 arch/arc/Kconfig | 13
 arch/arc/include/asm/mmzone.h | 40
 arch/arc/kernel/troubleshoot.c | 8
 arch/arc/mm/init.c | 21
 arch/arm/include/asm/tlbflush.h | 13
 arch/arm/mm/tlb-v6.S | 2
 arch/arm/mm/tlb-v7.S | 2
 arch/arm64/Kconfig | 2
 arch/arm64/kvm/mmu.c | 2
 arch/h8300/kernel/setup.c | 2
 arch/ia64/Kconfig | 2
 arch/ia64/include/asm/pal.h | 2
 arch/ia64/include/asm/spinlock.h | 2
 arch/ia64/include/asm/uv/uv_hub.h | 2
 arch/ia64/kernel/efi_stub.S | 2
 arch/ia64/kernel/mca_drv.c | 2
 arch/ia64/kernel/topology.c | 5
 arch/ia64/mm/numa.c | 5
 arch/m68k/Kconfig.cpu | 10
 arch/m68k/include/asm/mmzone.h | 10
 arch/m68k/include/asm/page.h | 2
 arch/m68k/include/asm/page_mm.h | 35
 arch/m68k/include/asm/tlbflush.h | 2
 arch/m68k/kernel/sys_m68k.c | 4
 arch/m68k/mm/init.c | 20
 arch/mips/Kconfig | 2
 arch/mips/include/asm/mmzone.h | 8
 arch/mips/include/asm/page.h | 2
 arch/mips/kernel/traps.c | 4
 arch/mips/mm/init.c | 7
 arch/nds32/include/asm/memory.h | 6
 arch/openrisc/include/asm/tlbflush.h | 2
arch/powerpc/Kconfig | 2 arch/powerpc/include/asm/mmzone.h | 4 arch/powerpc/kernel/setup_64.c | 2 arch/powerpc/kernel/smp.c | 2 arch/powerpc/kexec/core.c | 4 arch/powerpc/kvm/book3s_hv.c | 4 arch/powerpc/kvm/book3s_hv_uvmem.c | 2 arch/powerpc/mm/Makefile | 2 arch/powerpc/mm/mem.c | 4 arch/riscv/Kconfig | 2 arch/s390/Kconfig | 2 arch/s390/include/asm/pgtable.h | 2 arch/sh/include/asm/mmzone.h | 4 arch/sh/kernel/topology.c | 2 arch/sh/mm/Kconfig | 2 arch/sh/mm/init.c | 2 arch/sparc/Kconfig | 2 arch/sparc/include/asm/mmzone.h | 4 arch/sparc/kernel/smp_64.c | 2 arch/sparc/mm/init_64.c | 12 arch/x86/Kconfig | 2 arch/x86/ia32/ia32_aout.c | 4 arch/x86/kernel/cpu/mce/core.c | 13 arch/x86/kernel/cpu/sgx/encl.h | 4 arch/x86/kernel/setup_percpu.c | 6 arch/x86/mm/init_32.c | 4 arch/xtensa/include/asm/page.h | 4 arch/xtensa/include/asm/tlbflush.h | 4 drivers/base/node.c | 18 drivers/block/loop.c | 270 ++++- drivers/block/loop.h | 15 drivers/dax/device.c | 2 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 4 drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c | 2 drivers/media/common/videobuf2/frame_vector.c | 2 drivers/misc/sgi-gru/grufault.c | 4 drivers/vfio/vfio_iommu_type1.c | 2 drivers/virtio/virtio_balloon.c | 17 fs/adfs/inode.c | 1 fs/affs/file.c | 2 fs/bfs/file.c | 1 fs/binfmt_aout.c | 4 fs/binfmt_elf.c | 2 fs/binfmt_elf_fdpic.c | 11 fs/binfmt_flat.c | 2 fs/block_dev.c | 1 fs/buffer.c | 25 fs/configfs/inode.c | 8 fs/dax.c | 3 fs/ecryptfs/mmap.c | 13 fs/exfat/inode.c | 1 fs/ext2/inode.c | 4 fs/ext4/inode.c | 2 fs/fat/inode.c | 1 fs/fs-writeback.c | 366 +++++--- fs/fuse/dax.c | 3 fs/gfs2/aops.c | 2 fs/gfs2/meta_io.c | 2 fs/hfs/inode.c | 2 fs/hfsplus/inode.c | 2 fs/hpfs/file.c | 1 fs/iomap/buffered-io.c | 27 fs/jfs/inode.c | 1 fs/kernfs/inode.c | 8 fs/libfs.c | 44 fs/minix/inode.c | 1 fs/nilfs2/mdt.c | 1 fs/ntfs/inode.c | 2 fs/ocfs2/aops.c | 4 fs/ocfs2/cluster/heartbeat.c | 7 fs/ocfs2/cluster/nodemanager.c | 2 fs/ocfs2/dlm/dlmmaster.c | 2 fs/ocfs2/filecheck.c | 6 
fs/ocfs2/stackglue.c | 8 fs/omfs/file.c | 1 fs/proc/task_mmu.c | 2 fs/ramfs/inode.c | 9 fs/squashfs/block.c | 5 fs/squashfs/squashfs_fs_sb.h | 1 fs/squashfs/super.c | 86 + fs/sysv/itree.c | 1 fs/udf/file.c | 1 fs/udf/inode.c | 1 fs/ufs/inode.c | 1 fs/xfs/xfs_aops.c | 4 fs/zonefs/super.c | 4 include/asm-generic/memory_model.h | 37 include/asm-generic/pgtable-nop4d.h | 1 include/asm-generic/topology.h | 2 include/kunit/test.h | 5 include/linux/backing-dev-defs.h | 20 include/linux/cpuhotplug.h | 2 include/linux/fs.h | 6 include/linux/gfp.h | 13 include/linux/iomap.h | 1 include/linux/kasan.h | 7 include/linux/kernel.h | 2 include/linux/kthread.h | 2 include/linux/memblock.h | 6 include/linux/memcontrol.h | 60 - include/linux/mm.h | 53 - include/linux/mm_types.h | 10 include/linux/mman.h | 2 include/linux/mmdebug.h | 3 include/linux/mmzone.h | 96 +- include/linux/page-flags.h | 10 include/linux/page_owner.h | 6 include/linux/page_ref.h | 4 include/linux/page_reporting.h | 3 include/linux/pageblock-flags.h | 2 include/linux/pagemap.h | 4 include/linux/pgtable.h | 22 include/linux/printk.h | 5 include/linux/sched/coredump.h | 8 include/linux/slab.h | 59 + include/linux/swap.h | 19 include/linux/swapops.h | 5 include/linux/vmstat.h | 69 - include/linux/writeback.h | 1 include/trace/events/cma.h | 4 include/trace/events/filemap.h | 2 include/trace/events/kmem.h | 12 include/trace/events/page_pool.h | 4 include/trace/events/pagemap.h | 4 include/trace/events/vmscan.h | 2 kernel/cgroup/cgroup.c | 1 kernel/crash_core.c | 4 kernel/events/core.c | 2 kernel/events/uprobes.c | 4 kernel/fork.c | 1 kernel/kthread.c | 19 kernel/sysctl.c | 16 kernel/watchdog.c | 12 lib/Kconfig.debug | 15 lib/Kconfig.kasan | 16 lib/Makefile | 1 lib/dump_stack.c | 20 lib/kunit/test.c | 18 lib/slub_kunit.c | 152 +++ lib/test_hmm.c | 5 lib/test_kasan.c | 11 lib/vsprintf.c | 2 mm/Kconfig | 38 mm/backing-dev.c | 66 + mm/compaction.c | 2 mm/debug.c | 27 mm/debug_vm_pgtable.c | 63 + mm/dmapool.c | 5 
mm/filemap.c | 2 mm/gup.c | 81 + mm/hugetlb.c | 2 mm/internal.h | 9 mm/kasan/Makefile | 4 mm/kasan/common.c | 6 mm/kasan/generic.c | 3 mm/kasan/hw_tags.c | 22 mm/kasan/init.c | 6 mm/kasan/kasan.h | 12 mm/kasan/report.c | 6 mm/kasan/report_hw_tags.c | 5 mm/kasan/report_sw_tags.c | 45 mm/kasan/report_tags.c | 51 + mm/kasan/shadow.c | 6 mm/kasan/sw_tags.c | 45 mm/kasan/tags.c | 59 + mm/kfence/kfence_test.c | 5 mm/kmemleak.c | 18 mm/ksm.c | 6 mm/memblock.c | 8 mm/memcontrol.c | 385 ++++++-- mm/memory-failure.c | 344 +++++-- mm/memory.c | 22 mm/memory_hotplug.c | 6 mm/mempolicy.c | 4 mm/migrate.c | 4 mm/mmap.c | 54 - mm/mmap_lock.c | 33 mm/mprotect.c | 52 + mm/mremap.c | 5 mm/nommu.c | 2 mm/page-writeback.c | 89 + mm/page_alloc.c | 950 +++++++++++++-------- mm/page_ext.c | 2 mm/page_owner.c | 2 mm/page_reporting.c | 19 mm/page_reporting.h | 5 mm/pagewalk.c | 58 + mm/shmem.c | 18 mm/slab.h | 24 mm/slab_common.c | 60 - mm/slub.c | 420 +++++---- mm/sparse.c | 2 mm/swap.c | 4 mm/swap_slots.c | 2 mm/swap_state.c | 20 mm/swapfile.c | 177 +-- mm/vmalloc.c | 181 ++-- mm/vmscan.c | 43 mm/vmstat.c | 282 ++---- mm/workingset.c | 2 net/ipv4/tcp.c | 4 scripts/kconfig/streamline_config.pl | 76 - scripts/link-vmlinux.sh | 4 scripts/spelling.txt | 16 tools/testing/selftests/vm/gup_test.c | 96 +- tools/vm/page_owner_sort.c | 4 virt/kvm/kvm_main.c | 2 260 files changed, 3989 insertions(+), 2996 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2021-06-25 1:38 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2021-06-25 1:38 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm 24 patches, based on 4a09d388f2ab382f217a764e6a152b3f614246f6. Subsystems affected by this patch series: mm/thp nilfs2 mm/vmalloc kthread mm/hugetlb mm/memory-failure mm/pagealloc MAINTAINERS mailmap Subsystem: mm/thp Hugh Dickins <hughd@google.com>: Patch series "mm: page_vma_mapped_walk() cleanup and THP fixes": mm: page_vma_mapped_walk(): use page for pvmw->page mm: page_vma_mapped_walk(): settle PageHuge on entry mm: page_vma_mapped_walk(): use pmde for *pvmw->pmd mm: page_vma_mapped_walk(): prettify PVMW_MIGRATION block mm: page_vma_mapped_walk(): crossing page table boundary mm: page_vma_mapped_walk(): add a level of indentation mm: page_vma_mapped_walk(): use goto instead of while (1) mm: page_vma_mapped_walk(): get vma_address_end() earlier mm/thp: fix page_vma_mapped_walk() if THP mapped by ptes mm/thp: another PVMW_SYNC fix in page_vma_mapped_walk() Subsystem: nilfs2 Pavel Skripkin <paskripkin@gmail.com>: nilfs2: fix memory leak in nilfs_sysfs_delete_device_group Subsystem: mm/vmalloc Claudio Imbrenda <imbrenda@linux.ibm.com>: Patch series "mm: add vmalloc_no_huge and use it", v4: mm/vmalloc: add vmalloc_no_huge KVM: s390: prepare for hugepage vmalloc Daniel Axtens <dja@axtens.net>: mm/vmalloc: unbreak kasan vmalloc support Subsystem: kthread Petr Mladek <pmladek@suse.com>: Patch series "kthread_worker: Fix race between kthread_mod_delayed_work() and kthread_cancel_delayed_work_sync()": kthread_worker: split code for canceling the delayed work timer kthread: prevent deadlock when kthread_mod_delayed_work() races with kthread_cancel_delayed_work_sync() Subsystem: mm/hugetlb Hugh Dickins <hughd@google.com>: mm, futex: fix shared futex pgoff on shmem huge page Subsystem: mm/memory-failure Tony Luck <tony.luck@intel.com>: Patch series "mm,hwpoison: fix sending SIGBUS for Action Required MCE", v5: 
mm/memory-failure: use a mutex to avoid memory_failure() races Aili Yao <yaoaili@kingsoft.com>: mm,hwpoison: return -EHWPOISON to denote that the page has already been poisoned Naoya Horiguchi <naoya.horiguchi@nec.com>: mm/hwpoison: do not lock page again when me_huge_page() successfully recovers Subsystem: mm/pagealloc Rasmus Villemoes <linux@rasmusvillemoes.dk>: mm/page_alloc: __alloc_pages_bulk(): do bounds check before accessing array Mel Gorman <mgorman@techsingularity.net>: mm/page_alloc: do bulk array bounds check after checking populated elements Subsystem: MAINTAINERS Marek Behún <kabel@kernel.org>: MAINTAINERS: fix Marek's identity again Subsystem: mailmap Marek Behún <kabel@kernel.org>: mailmap: add Marek's other e-mail address and identity without diacritics .mailmap | 2 MAINTAINERS | 4 arch/s390/kvm/pv.c | 7 + fs/nilfs2/sysfs.c | 1 include/linux/hugetlb.h | 16 --- include/linux/pagemap.h | 13 +- include/linux/vmalloc.h | 1 kernel/futex.c | 3 kernel/kthread.c | 81 ++++++++++------ mm/hugetlb.c | 5 - mm/memory-failure.c | 83 +++++++++++------ mm/page_alloc.c | 6 + mm/page_vma_mapped.c | 233 +++++++++++++++++++++++++++--------------------- mm/vmalloc.c | 41 ++++++-- 14 files changed, 297 insertions(+), 199 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2021-06-16 1:22 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2021-06-16 1:22 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits 18 patches, based on 94f0b2d4a1d0c52035aef425da5e022bd2cb1c71. Subsystems affected by this patch series: mm/memory-failure mm/swap mm/slub mm/hugetlb mm/memory-failure coredump mm/slub mm/thp mm/sparsemem Subsystem: mm/memory-failure Naoya Horiguchi <naoya.horiguchi@nec.com>: mm,hwpoison: fix race with hugetlb page allocation Subsystem: mm/swap Peter Xu <peterx@redhat.com>: mm/swap: fix pte_same_as_swp() not removing uffd-wp bit when compare Subsystem: mm/slub Kees Cook <keescook@chromium.org>: Patch series "Actually fix freelist pointer vs redzoning", v4: mm/slub: clarify verification reporting mm/slub: fix redzoning for small allocations mm/slub: actually fix freelist pointer vs redzoning Subsystem: mm/hugetlb Mike Kravetz <mike.kravetz@oracle.com>: mm/hugetlb: expand restore_reserve_on_error functionality Subsystem: mm/memory-failure yangerkun <yangerkun@huawei.com>: mm/memory-failure: make sure wait for page writeback in memory_failure Subsystem: coredump Pingfan Liu <kernelfans@gmail.com>: crash_core, vmcoreinfo: append 'SECTION_SIZE_BITS' to vmcoreinfo Subsystem: mm/slub Andrew Morton <akpm@linux-foundation.org>: mm/slub.c: include swab.h Subsystem: mm/thp Xu Yu <xuyu@linux.alibaba.com>: mm, thp: use head page in __migration_entry_wait() Hugh Dickins <hughd@google.com>: Patch series "mm/thp: fix THP splitting unmap BUGs and related", v10: mm/thp: fix __split_huge_pmd_locked() on shmem migration entry mm/thp: make is_huge_zero_pmd() safe and quicker mm/thp: try_to_unmap() use TTU_SYNC for safe splitting mm/thp: fix vma_address() if virtual address below file offset Jue Wang <juew@google.com>: mm/thp: fix page_address_in_vma() on file THP tails Hugh Dickins <hughd@google.com>: mm/thp: unmap_mapping_page() to fix THP truncate_cleanup_page() Yang Shi 
<shy828301@gmail.com>: mm: thp: replace DEBUG_VM BUG with VM_WARN when unmap fails for split Subsystem: mm/sparsemem Miles Chen <miles.chen@mediatek.com>: mm/sparse: fix check_usemap_section_nr warnings Documentation/vm/slub.rst | 10 +-- fs/hugetlbfs/inode.c | 1 include/linux/huge_mm.h | 8 ++ include/linux/hugetlb.h | 8 ++ include/linux/mm.h | 3 + include/linux/rmap.h | 1 include/linux/swapops.h | 15 +++-- kernel/crash_core.c | 1 mm/huge_memory.c | 58 ++++++++++--------- mm/hugetlb.c | 137 +++++++++++++++++++++++++++++++++++++--------- mm/internal.h | 51 ++++++++++++----- mm/memory-failure.c | 36 +++++++++++- mm/memory.c | 41 +++++++++++++ mm/migrate.c | 1 mm/page_vma_mapped.c | 27 +++++---- mm/pgtable-generic.c | 5 - mm/rmap.c | 41 +++++++++---- mm/slab_common.c | 3 - mm/slub.c | 37 +++++------- mm/sparse.c | 13 +++- mm/swapfile.c | 2 mm/truncate.c | 43 ++++++-------- 22 files changed, 388 insertions(+), 154 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2021-06-05 3:00 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2021-06-05 3:00 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits 13 patches, based on 16f0596fc1d78a1f3ae4628cff962bb297dc908c. Subsystems affected by this patch series: mips mm/kfence init mm/debug mm/pagealloc mm/memory-hotplug mm/hugetlb proc mm/kasan mm/hugetlb lib ocfs2 mailmap Subsystem: mips Thomas Bogendoerfer <tsbogend@alpha.franken.de>: Revert "MIPS: make userspace mapping young by default" Subsystem: mm/kfence Marco Elver <elver@google.com>: kfence: use TASK_IDLE when awaiting allocation Subsystem: init Mark Rutland <mark.rutland@arm.com>: pid: take a reference when initializing `cad_pid` Subsystem: mm/debug Gerald Schaefer <gerald.schaefer@linux.ibm.com>: mm/debug_vm_pgtable: fix alignment for pmd/pud_advanced_tests() Subsystem: mm/pagealloc Ding Hui <dinghui@sangfor.com.cn>: mm/page_alloc: fix counting of free pages after take off from buddy Subsystem: mm/memory-hotplug David Hildenbrand <david@redhat.com>: drivers/base/memory: fix trying offlining memory blocks with memory holes on aarch64 Subsystem: mm/hugetlb Naoya Horiguchi <naoya.horiguchi@nec.com>: hugetlb: pass head page to remove_hugetlb_page() Subsystem: proc David Matlack <dmatlack@google.com>: proc: add .gitignore for proc-subset-pid selftest Subsystem: mm/kasan Yu Kuai <yukuai3@huawei.com>: mm/kasan/init.c: fix doc warning Subsystem: mm/hugetlb Mina Almasry <almasrymina@google.com>: mm, hugetlb: fix simple resv_huge_pages underflow on UFFDIO_COPY Subsystem: lib YueHaibing <yuehaibing@huawei.com>: lib: crc64: fix kernel-doc warning Subsystem: ocfs2 Junxiao Bi <junxiao.bi@oracle.com>: ocfs2: fix data corruption by fallocate Subsystem: mailmap Michel Lespinasse <michel@lespinasse.org>: mailmap: use private address for Michel Lespinasse .mailmap | 3 + arch/mips/mm/cache.c | 30 ++++++++--------- drivers/base/memory.c | 6 +-- fs/ocfs2/file.c | 55 
+++++++++++++++++++++++++++++--- include/linux/pgtable.h | 8 ++++ init/main.c | 2 - lib/crc64.c | 2 - mm/debug_vm_pgtable.c | 4 +- mm/hugetlb.c | 16 +++++++-- mm/kasan/init.c | 4 +- mm/kfence/core.c | 6 +-- mm/memory.c | 4 ++ mm/page_alloc.c | 2 + tools/testing/selftests/proc/.gitignore | 1 14 files changed, 107 insertions(+), 36 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2021-05-23 0:41 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2021-05-23 0:41 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm 10 patches, based on 4ff2473bdb4cf2bb7d208ccf4418d3d7e6b1652c. Subsystems affected by this patch series: mm/pagealloc mm/gup ipc selftests mm/kasan kernel/watchdog bitmap procfs lib mm/userfaultfd Subsystem: mm/pagealloc Arnd Bergmann <arnd@arndb.de>: mm/shuffle: fix section mismatch warning Subsystem: mm/gup Michal Hocko <mhocko@suse.com>: Revert "mm/gup: check page posion status for coredump." Subsystem: ipc Varad Gautam <varad.gautam@suse.com>: ipc/mqueue, msg, sem: avoid relying on a stack reference past its expiry Subsystem: selftests Yang Yingliang <yangyingliang@huawei.com>: tools/testing/selftests/exec: fix link error Subsystem: mm/kasan Alexander Potapenko <glider@google.com>: kasan: slab: always reset the tag in get_freepointer_safe() Subsystem: kernel/watchdog Petr Mladek <pmladek@suse.com>: watchdog: reliable handling of timestamps Subsystem: bitmap Rikard Falkeborn <rikard.falkeborn@gmail.com>: linux/bits.h: fix compilation error with GENMASK Subsystem: procfs Alexey Dobriyan <adobriyan@gmail.com>: proc: remove Alexey from MAINTAINERS Subsystem: lib Zhen Lei <thunder.leizhen@huawei.com>: lib: kunit: suppress a compilation warning of frame size Subsystem: mm/userfaultfd Mike Kravetz <mike.kravetz@oracle.com>: userfaultfd: hugetlbfs: fix new flag usage in error path MAINTAINERS | 1 - fs/hugetlbfs/inode.c | 2 +- include/linux/bits.h | 2 +- include/linux/const.h | 8 ++++++++ include/linux/minmax.h | 10 ++-------- ipc/mqueue.c | 6 ++++-- ipc/msg.c | 6 ++++-- ipc/sem.c | 6 ++++-- kernel/watchdog.c | 34 ++++++++++++++++++++-------------- lib/Makefile | 1 + mm/gup.c | 4 ---- mm/internal.h | 20 -------------------- mm/shuffle.h | 4 ++-- mm/slub.c | 1 + mm/userfaultfd.c | 28 ++++++++++++++-------------- tools/include/linux/bits.h | 2 +- tools/include/linux/const.h | 8 
++++++++ tools/testing/selftests/exec/Makefile | 6 +++--- 18 files changed, 74 insertions(+), 75 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2021-05-15 0:26 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2021-05-15 0:26 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm 13 patches, based on bd3c9cdb21a2674dd0db70199df884828e37abd4. Subsystems affected by this patch series: mm/hugetlb mm/slub resource squashfs mm/userfaultfd mm/ksm mm/pagealloc mm/kasan mm/pagemap hfsplus modprobe mm/ioremap Subsystem: mm/hugetlb Peter Xu <peterx@redhat.com>: Patch series "mm/hugetlb: Fix issues on file sealing and fork", v2: mm/hugetlb: fix F_SEAL_FUTURE_WRITE mm/hugetlb: fix cow where page writtable in child Subsystem: mm/slub Vlastimil Babka <vbabka@suse.cz>: mm, slub: move slub_debug static key enabling outside slab_mutex Subsystem: resource Alistair Popple <apopple@nvidia.com>: kernel/resource: fix return code check in __request_free_mem_region Subsystem: squashfs Phillip Lougher <phillip@squashfs.org.uk>: squashfs: fix divide error in calculate_skip() Subsystem: mm/userfaultfd Axel Rasmussen <axelrasmussen@google.com>: userfaultfd: release page in error path to avoid BUG_ON Subsystem: mm/ksm Hugh Dickins <hughd@google.com>: ksm: revert "use GET_KSM_PAGE_NOLOCK to get ksm page in remove_rmap_item_from_tree()" Subsystem: mm/pagealloc "Matthew Wilcox (Oracle)" <willy@infradead.org>: mm: fix struct page layout on 32-bit systems Subsystem: mm/kasan Peter Collingbourne <pcc@google.com>: kasan: fix unit tests with CONFIG_UBSAN_LOCAL_BOUNDS enabled Subsystem: mm/pagemap "Matthew Wilcox (Oracle)" <willy@infradead.org>: mm/filemap: fix readahead return types Subsystem: hfsplus Jouni Roivas <jouni.roivas@tuxera.com>: hfsplus: prevent corruption in shrinking truncate Subsystem: modprobe Rasmus Villemoes <linux@rasmusvillemoes.dk>: docs: admin-guide: update description for kernel.modprobe sysctl Subsystem: mm/ioremap Christophe Leroy <christophe.leroy@csgroup.eu>: mm/ioremap: fix iomap_max_page_shift Documentation/admin-guide/sysctl/kernel.rst | 9 ++++--- 
fs/hfsplus/extents.c | 7 +++-- fs/hugetlbfs/inode.c | 5 ++++ fs/iomap/buffered-io.c | 4 +-- fs/squashfs/file.c | 6 ++-- include/linux/mm.h | 32 ++++++++++++++++++++++++++ include/linux/mm_types.h | 4 +-- include/linux/pagemap.h | 6 ++-- include/net/page_pool.h | 12 +++++++++ kernel/resource.c | 2 - lib/test_kasan.c | 29 ++++++++++++++++++----- mm/hugetlb.c | 1 mm/ioremap.c | 6 ++-- mm/ksm.c | 3 +- mm/shmem.c | 34 ++++++++++++---------------- mm/slab_common.c | 10 ++++++++ mm/slub.c | 9 ------- net/core/page_pool.c | 12 +++++---- 18 files changed, 129 insertions(+), 62 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2021-05-07 1:01 Andrew Morton 2021-05-07 7:12 ` incoming Linus Torvalds 0 siblings, 1 reply; 349+ messages in thread From: Andrew Morton @ 2021-05-07 1:01 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm This is everything else from -mm for this merge window, with the possible exception of Mike Rapoport's "secretmem" syscall patch series (https://lkml.kernel.org/r/20210303162209.8609-1-rppt@kernel.org). I've been wobbly about the secretmem patches due to doubts about whether the feature is sufficiently useful to justify inclusion, but developers are now weighing in with helpful information and I've asked Mike for an extensively updated [0/n] changelog. This will take a few days to play out so it is possible that I will prevail upon you for a post-rc1 merge. If that's a problem, there's always 5.13-rc1. 91 patches, based on 8ca5297e7e38f2dc8c753d33a5092e7be181fff0, plus previously sent patches. Thanks. Subsystems affected by this patch series: alpha procfs sysctl misc core-kernel bitmap lib compat checkpatch epoll isofs nilfs2 hpfs exit fork kexec gcov panic delayacct gdb resource selftests async initramfs ipc mm/cleanups drivers/char mm/slub spelling Subsystem: alpha Randy Dunlap <rdunlap@infradead.org>: alpha: eliminate old-style function definitions alpha: csum_partial_copy.c: add function prototypes from <net/checksum.h> Subsystem: procfs Colin Ian King <colin.king@canonical.com>: fs/proc/generic.c: fix incorrect pde_is_permanent check Alexey Dobriyan <adobriyan@gmail.com>: proc: save LOC in __xlate_proc_name() proc: mandate ->proc_lseek in "struct proc_ops" proc: delete redundant subset=pid check selftests: proc: test subset=pid Subsystem: sysctl zhouchuangao <zhouchuangao@vivo.com>: proc/sysctl: fix function name error in comments Subsystem: misc "Matthew Wilcox (Oracle)" <willy@infradead.org>: include: remove pagemap.h from blkdev.h Andy Shevchenko <andriy.shevchenko@linux.intel.com>: kernel.h: drop inclusion in bitmap.h Wan 
Jiabing <wanjiabing@vivo.com>: linux/profile.h: remove unnecessary declaration Subsystem: core-kernel Rasmus Villemoes <linux@rasmusvillemoes.dk>: kernel/async.c: fix pr_debug statement kernel/cred.c: make init_groups static Subsystem: bitmap Yury Norov <yury.norov@gmail.com>: Patch series "lib/find_bit: fast path for small bitmaps", v6: tools: disable -Wno-type-limits tools: bitmap: sync function declarations with the kernel tools: sync BITMAP_LAST_WORD_MASK() macro with the kernel arch: rearrange headers inclusion order in asm/bitops for m68k, sh and h8300 lib: extend the scope of small_const_nbits() macro tools: sync small_const_nbits() macro with the kernel lib: inline _find_next_bit() wrappers tools: sync find_next_bit implementation lib: add fast path for find_next_*_bit() lib: add fast path for find_first_*_bit() and find_last_bit() tools: sync lib/find_bit implementation MAINTAINERS: add entry for the bitmap API Subsystem: lib Bhaskar Chowdhury <unixbhaskar@gmail.com>: lib/bch.c: fix a typo in the file bch.c Wang Qing <wangqing@vivo.com>: lib: fix inconsistent indenting in process_bit1() ToastC <mrtoastcheng@gmail.com>: lib/list_sort.c: fix typo in function description Bhaskar Chowdhury <unixbhaskar@gmail.com>: lib/genalloc.c: Fix a typo Richard Fitzgerald <rf@opensource.cirrus.com>: lib: crc8: pointer to data block should be const Zqiang <qiang.zhang@windriver.com>: lib: stackdepot: turn depot_lock spinlock to raw_spinlock Alex Shi <alexs@kernel.org>: lib/percpu_counter: tame kernel-doc compile warning lib/genalloc: add parameter description to fix doc compile warning Randy Dunlap <rdunlap@infradead.org>: lib: parser: clean up kernel-doc Subsystem: compat Masahiro Yamada <masahiroy@kernel.org>: include/linux/compat.h: remove unneeded declaration from COMPAT_SYSCALL_DEFINEx() Subsystem: checkpatch Joe Perches <joe@perches.com>: checkpatch: warn when missing newline in return sysfs_emit() formats Vincent Mailhol <mailhol.vincent@wanadoo.fr>: checkpatch: 
exclude four preprocessor sub-expressions from MACRO_ARG_REUSE Christophe JAILLET <christophe.jaillet@wanadoo.fr>: checkpatch: improve ALLOC_ARRAY_ARGS test Subsystem: epoll Davidlohr Bueso <dave@stgolabs.net>: Patch series "fs/epoll: restore user-visible behavior upon event ready": kselftest: introduce new epoll test case fs/epoll: restore waking from ep_done_scan() Subsystem: isofs "Gustavo A. R. Silva" <gustavoars@kernel.org>: isofs: fix fall-through warnings for Clang Subsystem: nilfs2 Liu xuzhi <liu.xuzhi@zte.com.cn>: fs/nilfs2: fix misspellings using codespell tool Lu Jialin <lujialin4@huawei.com>: nilfs2: fix typos in comments Subsystem: hpfs "Gustavo A. R. Silva" <gustavoars@kernel.org>: hpfs: replace one-element array with flexible-array member Subsystem: exit Jim Newsome <jnewsome@torproject.org>: do_wait: make PIDTYPE_PID case O(1) instead of O(n) Subsystem: fork Rolf Eike Beer <eb@emlix.com>: kernel/fork.c: simplify copy_mm() Xiaofeng Cao <cxfcosmos@gmail.com>: kernel/fork.c: fix typos Subsystem: kexec Saeed Mirzamohammadi <saeed.mirzamohammadi@oracle.com>: kernel/crash_core: add crashkernel=auto for vmcore creation Joe LeVeque <jolevequ@microsoft.com>: kexec: Add kexec reboot string Jia-Ju Bai <baijiaju1990@gmail.com>: kernel: kexec_file: fix error return code of kexec_calculate_store_digests() Pavel Tatashin <pasha.tatashin@soleen.com>: kexec: dump kmessage before machine_kexec Subsystem: gcov Johannes Berg <johannes.berg@intel.com>: gcov: combine common code gcov: simplify buffer allocation gcov: use kvmalloc() Nick Desaulniers <ndesaulniers@google.com>: gcov: clang: drop support for clang-10 and older Subsystem: panic He Ying <heying24@huawei.com>: smp: kernel/panic.c - silence warnings Subsystem: delayacct Yafang Shao <laoar.shao@gmail.com>: delayacct: clear right task's flag after blkio completes Subsystem: gdb Johannes Berg <johannes.berg@intel.com>: gdb: lx-symbols: store the abspath() Barry Song <song.bao.hua@hisilicon.com>: Patch series 
"scripts/gdb: clarify the platforms supporting lx_current and add arm64 support", v2: scripts/gdb: document lx_current is only supported by x86 scripts/gdb: add lx_current support for arm64 Subsystem: resource David Hildenbrand <david@redhat.com>: Patch series "kernel/resource: make walk_system_ram_res() and walk_mem_res() search the whole tree", v2: kernel/resource: make walk_system_ram_res() find all busy IORESOURCE_SYSTEM_RAM resources kernel/resource: make walk_mem_res() find all busy IORESOURCE_MEM resources kernel/resource: remove first_lvl / siblings_only logic Alistair Popple <apopple@nvidia.com>: kernel/resource: allow region_intersects users to hold resource_lock kernel/resource: refactor __request_region to allow external locking kernel/resource: fix locking in request_free_mem_region Subsystem: selftests Zhang Yunkai <zhang.yunkai@zte.com.cn>: selftests: remove duplicate include Subsystem: async Rasmus Villemoes <linux@rasmusvillemoes.dk>: kernel/async.c: stop guarding pr_debug() statements kernel/async.c: remove async_unregister_domain() Subsystem: initramfs Rasmus Villemoes <linux@rasmusvillemoes.dk>: Patch series "background initramfs unpacking, and CONFIG_MODPROBE_PATH", v3: init/initramfs.c: do unpacking asynchronously modules: add CONFIG_MODPROBE_PATH Subsystem: ipc Bhaskar Chowdhury <unixbhaskar@gmail.com>: ipc/sem.c: mundane typo fixes Subsystem: mm/cleanups Shijie Luo <luoshijie1@huawei.com>: mm: fix some typos and code style problems Subsystem: drivers/char David Hildenbrand <david@redhat.com>: Patch series "drivers/char: remove /dev/kmem for good": drivers/char: remove /dev/kmem for good mm: remove xlate_dev_kmem_ptr() mm/vmalloc: remove vwrite() Subsystem: mm/slub Maninder Singh <maninder1.s@samsung.com>: arm: print alloc free paths for address in registers Subsystem: spelling Drew Fustini <drew@beagleboard.org>: scripts/spelling.txt: add "overlfow" zuoqilin <zuoqilin@yulong.com>: scripts/spelling.txt: Add "diabled" typo Drew Fustini 
<drew@beagleboard.org>: scripts/spelling.txt: add "overflw" Colin Ian King <colin.king@canonical.com>: mm/slab.c: fix spelling mistake "disired" -> "desired" Bhaskar Chowdhury <unixbhaskar@gmail.com>: include/linux/pgtable.h: few spelling fixes zhouchuangao <zhouchuangao@vivo.com>: kernel/umh.c: fix some spelling mistakes Xiaofeng Cao <cxfcosmos@gmail.com>: kernel/user_namespace.c: fix typos Bhaskar Chowdhury <unixbhaskar@gmail.com>: kernel/up.c: fix typo Xiaofeng Cao <caoxiaofeng@yulong.com>: kernel/sys.c: fix typo dingsenjie <dingsenjie@yulong.com>: fs: fat: fix spelling typo of values Bhaskar Chowdhury <unixbhaskar@gmail.com>: ipc/sem.c: spelling fix Masahiro Yamada <masahiroy@kernel.org>: treewide: remove editor modelines and cruft Ingo Molnar <mingo@kernel.org>: mm: fix typos in comments Lu Jialin <lujialin4@huawei.com>: mm: fix typos in comments Documentation/admin-guide/devices.txt | 2 Documentation/admin-guide/kdump/kdump.rst | 3 Documentation/admin-guide/kernel-parameters.txt | 18 Documentation/dev-tools/gdb-kernel-debugging.rst | 4 MAINTAINERS | 16 arch/Kconfig | 20 arch/alpha/include/asm/io.h | 5 arch/alpha/kernel/pc873xx.c | 4 arch/alpha/lib/csum_partial_copy.c | 1 arch/arm/configs/dove_defconfig | 1 arch/arm/configs/magician_defconfig | 1 arch/arm/configs/moxart_defconfig | 1 arch/arm/configs/mps2_defconfig | 1 arch/arm/configs/mvebu_v5_defconfig | 1 arch/arm/configs/xcep_defconfig | 1 arch/arm/include/asm/bug.h | 1 arch/arm/include/asm/io.h | 5 arch/arm/kernel/process.c | 11 arch/arm/kernel/traps.c | 1 arch/h8300/include/asm/bitops.h | 8 arch/hexagon/configs/comet_defconfig | 1 arch/hexagon/include/asm/io.h | 1 arch/ia64/include/asm/io.h | 1 arch/ia64/include/asm/uaccess.h | 18 arch/m68k/atari/time.c | 7 arch/m68k/configs/amcore_defconfig | 1 arch/m68k/include/asm/bitops.h | 6 arch/m68k/include/asm/io_mm.h | 5 arch/mips/include/asm/io.h | 5 arch/openrisc/configs/or1ksim_defconfig | 1 arch/parisc/include/asm/io.h | 5 
arch/parisc/include/asm/pdc_chassis.h | 1 arch/powerpc/include/asm/io.h | 5 arch/s390/include/asm/io.h | 5 arch/sh/configs/edosk7705_defconfig | 1 arch/sh/configs/se7206_defconfig | 1 arch/sh/configs/sh2007_defconfig | 1 arch/sh/configs/sh7724_generic_defconfig | 1 arch/sh/configs/sh7770_generic_defconfig | 1 arch/sh/configs/sh7785lcr_32bit_defconfig | 1 arch/sh/include/asm/bitops.h | 5 arch/sh/include/asm/io.h | 5 arch/sparc/configs/sparc64_defconfig | 1 arch/sparc/include/asm/io_64.h | 5 arch/um/drivers/cow.h | 7 arch/xtensa/configs/xip_kc705_defconfig | 1 block/blk-settings.c | 1 drivers/auxdisplay/panel.c | 7 drivers/base/firmware_loader/main.c | 2 drivers/block/brd.c | 1 drivers/block/loop.c | 1 drivers/char/Kconfig | 10 drivers/char/mem.c | 231 -------- drivers/gpu/drm/qxl/qxl_drv.c | 1 drivers/isdn/capi/kcapi_proc.c | 1 drivers/md/bcache/super.c | 1 drivers/media/usb/pwc/pwc-uncompress.c | 3 drivers/net/ethernet/adaptec/starfire.c | 8 drivers/net/ethernet/amd/atarilance.c | 8 drivers/net/ethernet/amd/pcnet32.c | 7 drivers/net/wireless/intersil/hostap/hostap_proc.c | 1 drivers/net/wireless/intersil/orinoco/orinoco_nortel.c | 8 drivers/net/wireless/intersil/orinoco/orinoco_pci.c | 8 drivers/net/wireless/intersil/orinoco/orinoco_plx.c | 8 drivers/net/wireless/intersil/orinoco/orinoco_tmd.c | 8 drivers/nvdimm/btt.c | 1 drivers/nvdimm/pmem.c | 1 drivers/parport/parport_ip32.c | 12 drivers/platform/x86/dell/dell_rbu.c | 3 drivers/scsi/53c700.c | 1 drivers/scsi/53c700.h | 1 drivers/scsi/ch.c | 6 drivers/scsi/esas2r/esas2r_main.c | 1 drivers/scsi/ips.c | 20 drivers/scsi/ips.h | 20 drivers/scsi/lasi700.c | 1 drivers/scsi/megaraid/mbox_defs.h | 2 drivers/scsi/megaraid/mega_common.h | 2 drivers/scsi/megaraid/megaraid_mbox.c | 2 drivers/scsi/megaraid/megaraid_mbox.h | 2 drivers/scsi/qla1280.c | 12 drivers/scsi/scsicam.c | 1 drivers/scsi/sni_53c710.c | 1 drivers/video/fbdev/matrox/matroxfb_base.c | 9 drivers/video/fbdev/vga16fb.c | 10 fs/configfs/configfs_internal.h | 4 
fs/configfs/dir.c | 4 fs/configfs/file.c | 4 fs/configfs/inode.c | 4 fs/configfs/item.c | 4 fs/configfs/mount.c | 4 fs/configfs/symlink.c | 4 fs/eventpoll.c | 6 fs/fat/fatent.c | 2 fs/hpfs/hpfs.h | 3 fs/isofs/rock.c | 1 fs/nfs/dir.c | 7 fs/nfs/nfs4proc.c | 6 fs/nfs/nfs4renewd.c | 6 fs/nfs/nfs4state.c | 6 fs/nfs/nfs4xdr.c | 6 fs/nfsd/nfs4proc.c | 6 fs/nfsd/nfs4xdr.c | 6 fs/nfsd/xdr4.h | 6 fs/nilfs2/cpfile.c | 2 fs/nilfs2/ioctl.c | 4 fs/nilfs2/segment.c | 4 fs/nilfs2/the_nilfs.c | 2 fs/ocfs2/acl.c | 4 fs/ocfs2/acl.h | 4 fs/ocfs2/alloc.c | 4 fs/ocfs2/alloc.h | 4 fs/ocfs2/aops.c | 4 fs/ocfs2/aops.h | 4 fs/ocfs2/blockcheck.c | 4 fs/ocfs2/blockcheck.h | 4 fs/ocfs2/buffer_head_io.c | 4 fs/ocfs2/buffer_head_io.h | 4 fs/ocfs2/cluster/heartbeat.c | 4 fs/ocfs2/cluster/heartbeat.h | 4 fs/ocfs2/cluster/masklog.c | 4 fs/ocfs2/cluster/masklog.h | 4 fs/ocfs2/cluster/netdebug.c | 4 fs/ocfs2/cluster/nodemanager.c | 4 fs/ocfs2/cluster/nodemanager.h | 4 fs/ocfs2/cluster/ocfs2_heartbeat.h | 4 fs/ocfs2/cluster/ocfs2_nodemanager.h | 4 fs/ocfs2/cluster/quorum.c | 4 fs/ocfs2/cluster/quorum.h | 4 fs/ocfs2/cluster/sys.c | 4 fs/ocfs2/cluster/sys.h | 4 fs/ocfs2/cluster/tcp.c | 4 fs/ocfs2/cluster/tcp.h | 4 fs/ocfs2/cluster/tcp_internal.h | 4 fs/ocfs2/dcache.c | 4 fs/ocfs2/dcache.h | 4 fs/ocfs2/dir.c | 4 fs/ocfs2/dir.h | 4 fs/ocfs2/dlm/dlmapi.h | 4 fs/ocfs2/dlm/dlmast.c | 4 fs/ocfs2/dlm/dlmcommon.h | 4 fs/ocfs2/dlm/dlmconvert.c | 4 fs/ocfs2/dlm/dlmconvert.h | 4 fs/ocfs2/dlm/dlmdebug.c | 4 fs/ocfs2/dlm/dlmdebug.h | 4 fs/ocfs2/dlm/dlmdomain.c | 4 fs/ocfs2/dlm/dlmdomain.h | 4 fs/ocfs2/dlm/dlmlock.c | 4 fs/ocfs2/dlm/dlmmaster.c | 4 fs/ocfs2/dlm/dlmrecovery.c | 4 fs/ocfs2/dlm/dlmthread.c | 4 fs/ocfs2/dlm/dlmunlock.c | 4 fs/ocfs2/dlmfs/dlmfs.c | 4 fs/ocfs2/dlmfs/userdlm.c | 4 fs/ocfs2/dlmfs/userdlm.h | 4 fs/ocfs2/dlmglue.c | 4 fs/ocfs2/dlmglue.h | 4 fs/ocfs2/export.c | 4 fs/ocfs2/export.h | 4 fs/ocfs2/extent_map.c | 4 fs/ocfs2/extent_map.h | 4 fs/ocfs2/file.c | 4 fs/ocfs2/file.h | 4 
fs/ocfs2/filecheck.c | 4 fs/ocfs2/filecheck.h | 4 fs/ocfs2/heartbeat.c | 4 fs/ocfs2/heartbeat.h | 4 fs/ocfs2/inode.c | 4 fs/ocfs2/inode.h | 4 fs/ocfs2/journal.c | 4 fs/ocfs2/journal.h | 4 fs/ocfs2/localalloc.c | 4 fs/ocfs2/localalloc.h | 4 fs/ocfs2/locks.c | 4 fs/ocfs2/locks.h | 4 fs/ocfs2/mmap.c | 4 fs/ocfs2/move_extents.c | 4 fs/ocfs2/move_extents.h | 4 fs/ocfs2/namei.c | 4 fs/ocfs2/namei.h | 4 fs/ocfs2/ocfs1_fs_compat.h | 4 fs/ocfs2/ocfs2.h | 4 fs/ocfs2/ocfs2_fs.h | 4 fs/ocfs2/ocfs2_ioctl.h | 4 fs/ocfs2/ocfs2_lockid.h | 4 fs/ocfs2/ocfs2_lockingver.h | 4 fs/ocfs2/refcounttree.c | 4 fs/ocfs2/refcounttree.h | 4 fs/ocfs2/reservations.c | 4 fs/ocfs2/reservations.h | 4 fs/ocfs2/resize.c | 4 fs/ocfs2/resize.h | 4 fs/ocfs2/slot_map.c | 4 fs/ocfs2/slot_map.h | 4 fs/ocfs2/stack_o2cb.c | 4 fs/ocfs2/stack_user.c | 4 fs/ocfs2/stackglue.c | 4 fs/ocfs2/stackglue.h | 4 fs/ocfs2/suballoc.c | 4 fs/ocfs2/suballoc.h | 4 fs/ocfs2/super.c | 4 fs/ocfs2/super.h | 4 fs/ocfs2/symlink.c | 4 fs/ocfs2/symlink.h | 4 fs/ocfs2/sysfile.c | 4 fs/ocfs2/sysfile.h | 4 fs/ocfs2/uptodate.c | 4 fs/ocfs2/uptodate.h | 4 fs/ocfs2/xattr.c | 4 fs/ocfs2/xattr.h | 4 fs/proc/generic.c | 13 fs/proc/inode.c | 18 fs/proc/proc_sysctl.c | 2 fs/reiserfs/procfs.c | 10 include/asm-generic/bitops/find.h | 108 +++ include/asm-generic/bitops/le.h | 38 + include/asm-generic/bitsperlong.h | 12 include/asm-generic/io.h | 11 include/linux/align.h | 15 include/linux/async.h | 1 include/linux/bitmap.h | 11 include/linux/bitops.h | 12 include/linux/blkdev.h | 1 include/linux/compat.h | 1 include/linux/configfs.h | 4 include/linux/crc8.h | 2 include/linux/cred.h | 1 include/linux/delayacct.h | 20 include/linux/fs.h | 2 include/linux/genl_magic_func.h | 1 include/linux/genl_magic_struct.h | 1 include/linux/gfp.h | 2 include/linux/init_task.h | 1 include/linux/initrd.h | 2 include/linux/kernel.h | 9 include/linux/mm.h | 2 include/linux/mmzone.h | 2 include/linux/pgtable.h | 10 include/linux/proc_fs.h | 1 include/linux/profile.h | 
3 include/linux/smp.h | 8 include/linux/swap.h | 1 include/linux/vmalloc.h | 7 include/uapi/linux/if_bonding.h | 11 include/uapi/linux/nfs4.h | 6 include/xen/interface/elfnote.h | 10 include/xen/interface/hvm/hvm_vcpu.h | 10 include/xen/interface/io/xenbus.h | 10 init/Kconfig | 12 init/initramfs.c | 38 + init/main.c | 1 ipc/sem.c | 12 kernel/async.c | 68 -- kernel/configs/android-base.config | 1 kernel/crash_core.c | 7 kernel/cred.c | 2 kernel/exit.c | 67 ++ kernel/fork.c | 23 kernel/gcov/Kconfig | 1 kernel/gcov/base.c | 49 + kernel/gcov/clang.c | 282 ---------- kernel/gcov/fs.c | 146 ++++- kernel/gcov/gcc_4_7.c | 173 ------ kernel/gcov/gcov.h | 14 kernel/kexec_core.c | 4 kernel/kexec_file.c | 4 kernel/kmod.c | 2 kernel/resource.c | 198 ++++--- kernel/sys.c | 14 kernel/umh.c | 8 kernel/up.c | 2 kernel/user_namespace.c | 6 lib/bch.c | 2 lib/crc8.c | 2 lib/decompress_unlzma.c | 2 lib/find_bit.c | 68 -- lib/genalloc.c | 7 lib/list_sort.c | 2 lib/parser.c | 61 +- lib/percpu_counter.c | 2 lib/stackdepot.c | 6 mm/balloon_compaction.c | 4 mm/compaction.c | 4 mm/filemap.c | 2 mm/gup.c | 2 mm/highmem.c | 2 mm/huge_memory.c | 6 mm/hugetlb.c | 6 mm/internal.h | 2 mm/kasan/kasan.h | 8 mm/kasan/quarantine.c | 4 mm/kasan/shadow.c | 4 mm/kfence/report.c | 2 mm/khugepaged.c | 2 mm/ksm.c | 6 mm/madvise.c | 4 mm/memcontrol.c | 18 mm/memory-failure.c | 2 mm/memory.c | 18 mm/mempolicy.c | 6 mm/migrate.c | 8 mm/mmap.c | 4 mm/mprotect.c | 2 mm/mremap.c | 2 mm/nommu.c | 10 mm/oom_kill.c | 2 mm/page-writeback.c | 4 mm/page_alloc.c | 16 mm/page_owner.c | 2 mm/page_vma_mapped.c | 2 mm/percpu-internal.h | 2 mm/percpu.c | 2 mm/pgalloc-track.h | 6 mm/rmap.c | 2 mm/slab.c | 8 mm/slub.c | 2 mm/swap.c | 4 mm/swap_slots.c | 2 mm/swap_state.c | 2 mm/vmalloc.c | 124 ---- mm/vmstat.c | 2 mm/z3fold.c | 2 mm/zpool.c | 2 mm/zsmalloc.c | 6 samples/configfs/configfs_sample.c | 2 scripts/checkpatch.pl | 15 scripts/gdb/linux/cpus.py | 23 scripts/gdb/linux/symbols.py | 3 scripts/spelling.txt | 3 
tools/include/asm-generic/bitops/find.h | 85 ++- tools/include/asm-generic/bitsperlong.h | 3 tools/include/linux/bitmap.h | 18 tools/lib/bitmap.c | 4 tools/lib/find_bit.c | 56 - tools/scripts/Makefile.include | 1 tools/testing/selftests/filesystems/epoll/epoll_wakeup_test.c | 44 + tools/testing/selftests/kvm/lib/sparsebit.c | 1 tools/testing/selftests/mincore/mincore_selftest.c | 1 tools/testing/selftests/powerpc/mm/tlbie_test.c | 1 tools/testing/selftests/proc/Makefile | 1 tools/testing/selftests/proc/proc-subset-pid.c | 121 ++++ tools/testing/selftests/proc/read.c | 4 tools/usb/hcd-tests.sh | 2 343 files changed, 1383 insertions(+), 2119 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming
  2021-05-07 1:01 incoming Andrew Morton
@ 2021-05-07 7:12 ` Linus Torvalds
  0 siblings, 0 replies; 349+ messages in thread
From: Linus Torvalds @ 2021-05-07 7:12 UTC (permalink / raw)
To: Andrew Morton; +Cc: mm-commits, Linux-MM

On Thu, May 6, 2021 at 6:01 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> I've been wobbly about the secretmem patches due to doubts about
> whether the feature is sufficiently useful to justify inclusion, but
> developers are now weighing in with helpful information and I've asked Mike
> for an extensively updated [0/n] changelog. This will take a few days
> to play out so it is possible that I will prevail upon you for a post-rc1
> merge.

Oh, much too late for this release by now.

> If that's a problem, there's always 5.13-rc1.

5.13-rc1 is two days from now, it would be for 5.14-rc1..

How time - and version numbers - fly.

           Linus
* incoming @ 2021-05-05 1:32 Andrew Morton 2021-05-05 1:47 ` incoming Linus Torvalds 0 siblings, 1 reply; 349+ messages in thread From: Andrew Morton @ 2021-05-05 1:32 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits The remainder of the main mm/ queue. 143 patches, based on 8ca5297e7e38f2dc8c753d33a5092e7be181fff0, plus previously sent patches. Subsystems affected by this patch series: mm/pagecache mm/hugetlb mm/userfaultfd mm/vmscan mm/compaction mm/migration mm/cma mm/ksm mm/vmstat mm/mmap mm/kconfig mm/util mm/memory-hotplug mm/zswap mm/zsmalloc mm/highmem mm/cleanups mm/kfence Subsystem: mm/pagecache "Matthew Wilcox (Oracle)" <willy@infradead.org>: Patch series "Remove nrexceptional tracking", v2: mm: introduce and use mapping_empty() mm: stop accounting shadow entries dax: account DAX entries as nrpages mm: remove nrexceptional from inode Hugh Dickins <hughd@google.com>: mm: remove nrexceptional from inode: remove BUG_ON Subsystem: mm/hugetlb Peter Xu <peterx@redhat.com>: Patch series "hugetlb: Disable huge pmd unshare for uffd-wp", v4: hugetlb: pass vma into huge_pte_alloc() and huge_pmd_share() hugetlb/userfaultfd: forbid huge pmd sharing when uffd enabled mm/hugetlb: move flush_hugetlb_tlb_range() into hugetlb.h hugetlb/userfaultfd: unshare all pmds for hugetlbfs when register wp Miaohe Lin <linmiaohe@huawei.com>: mm/hugetlb: remove redundant reservation check condition in alloc_huge_page() Anshuman Khandual <anshuman.khandual@arm.com>: mm: generalize HUGETLB_PAGE_SIZE_VARIABLE Miaohe Lin <linmiaohe@huawei.com>: Patch series "Some cleanups for hugetlb": mm/hugetlb: use some helper functions to cleanup code mm/hugetlb: optimize the surplus state transfer code in move_hugetlb_state() mm/hugetlb_cgroup: remove unnecessary VM_BUG_ON_PAGE in hugetlb_cgroup_migrate() mm/hugetlb: simplify the code when alloc_huge_page() failed in hugetlb_no_page() mm/hugetlb: avoid calculating fault_mutex_hash in truncate_op case Patch series "Cleanup and fixup 
for khugepaged", v2: khugepaged: remove unneeded return value of khugepaged_collapse_pte_mapped_thps() khugepaged: reuse the smp_wmb() inside __SetPageUptodate() khugepaged: use helper khugepaged_test_exit() in __khugepaged_enter() khugepaged: fix wrong result value for trace_mm_collapse_huge_page_isolate() mm/huge_memory.c: remove unnecessary local variable ret2 Patch series "Some cleanups for huge_memory", v3: mm/huge_memory.c: rework the function vma_adjust_trans_huge() mm/huge_memory.c: make get_huge_zero_page() return bool mm/huge_memory.c: rework the function do_huge_pmd_numa_page() slightly mm/huge_memory.c: remove redundant PageCompound() check mm/huge_memory.c: remove unused macro TRANSPARENT_HUGEPAGE_DEBUG_COW_FLAG mm/huge_memory.c: use helper function migration_entry_to_page() Yanfei Xu <yanfei.xu@windriver.com>: mm/khugepaged.c: replace barrier() with READ_ONCE() for a selective variable Miaohe Lin <linmiaohe@huawei.com>: Patch series "Cleanup for khugepaged": khugepaged: use helper function range_in_vma() in collapse_pte_mapped_thp() khugepaged: remove unnecessary out label in collapse_huge_page() khugepaged: remove meaningless !pte_present() check in khugepaged_scan_pmd() Zi Yan <ziy@nvidia.com>: mm: huge_memory: a new debugfs interface for splitting THP tests mm: huge_memory: debugfs for file-backed THP split Miaohe Lin <linmiaohe@huawei.com>: Patch series "Cleanup and fixup for hugetlb", v2: mm/hugeltb: remove redundant VM_BUG_ON() in region_add() mm/hugeltb: simplify the return code of __vma_reservation_common() mm/hugeltb: clarify (chg - freed) won't go negative in hugetlb_unreserve_pages() mm/hugeltb: handle the error case in hugetlb_fix_reserve_counts() mm/hugetlb: remove unused variable pseudo_vma in remove_inode_hugepages() Mike Kravetz <mike.kravetz@oracle.com>: Patch series "make hugetlb put_page safe for all calling contexts", v5: mm/cma: change cma mutex to irq safe spinlock hugetlb: no need to drop hugetlb_lock to call cma_release 
hugetlb: add per-hstate mutex to synchronize user adjustments hugetlb: create remove_hugetlb_page() to separate functionality hugetlb: call update_and_free_page without hugetlb_lock hugetlb: change free_pool_huge_page to remove_pool_huge_page hugetlb: make free_huge_page irq safe hugetlb: add lockdep_assert_held() calls for hugetlb_lock Oscar Salvador <osalvador@suse.de>: Patch series "Make alloc_contig_range handle Hugetlb pages", v10: mm,page_alloc: bail out earlier on -ENOMEM in alloc_contig_migrate_range mm,compaction: let isolate_migratepages_{range,block} return error codes mm,hugetlb: drop clearing of flag from prep_new_huge_page mm,hugetlb: split prep_new_huge_page functionality mm: make alloc_contig_range handle free hugetlb pages mm: make alloc_contig_range handle in-use hugetlb pages mm,page_alloc: drop unnecessary checks from pfn_range_valid_contig Subsystem: mm/userfaultfd Axel Rasmussen <axelrasmussen@google.com>: Patch series "userfaultfd: add minor fault handling", v9: userfaultfd: add minor fault registration mode userfaultfd: disable huge PMD sharing for MINOR registered VMAs userfaultfd: hugetlbfs: only compile UFFD helpers if config enabled userfaultfd: add UFFDIO_CONTINUE ioctl userfaultfd: update documentation to describe minor fault handling userfaultfd/selftests: add test exercising minor fault handling Subsystem: mm/vmscan Dave Hansen <dave.hansen@linux.intel.com>: mm/vmscan: move RECLAIM* bits to uapi header mm/vmscan: replace implicit RECLAIM_ZONE checks with explicit checks Yang Shi <shy828301@gmail.com>: Patch series "Make shrinker's nr_deferred memcg aware", v10: mm: vmscan: use nid from shrink_control for tracepoint mm: vmscan: consolidate shrinker_maps handling code mm: vmscan: use shrinker_rwsem to protect shrinker_maps allocation mm: vmscan: remove memcg_shrinker_map_size mm: vmscan: use kvfree_rcu instead of call_rcu mm: memcontrol: rename shrinker_map to shrinker_info mm: vmscan: add shrinker_info_protected() helper mm: vmscan: 
use a new flag to indicate shrinker is registered mm: vmscan: add per memcg shrinker nr_deferred mm: vmscan: use per memcg nr_deferred of shrinker mm: vmscan: don't need allocate shrinker->nr_deferred for memcg aware shrinkers mm: memcontrol: reparent nr_deferred when memcg offline mm: vmscan: shrink deferred objects proportional to priority Subsystem: mm/compaction Pintu Kumar <pintu@codeaurora.org>: mm/compaction: remove unused variable sysctl_compact_memory Charan Teja Reddy <charante@codeaurora.org>: mm: compaction: update the COMPACT[STALL|FAIL] events properly Subsystem: mm/migration Minchan Kim <minchan@kernel.org>: mm: disable LRU pagevec during the migration temporarily mm: replace migrate_[prep|finish] with lru_cache_[disable|enable] mm: fs: invalidate BH LRU during page migration Miaohe Lin <linmiaohe@huawei.com>: Patch series "Cleanup and fixup for mm/migrate.c", v3: mm/migrate.c: make putback_movable_page() static mm/migrate.c: remove unnecessary rc != MIGRATEPAGE_SUCCESS check in 'else' case mm/migrate.c: fix potential indeterminate pte entry in migrate_vma_insert_page() mm/migrate.c: use helper migrate_vma_collect_skip() in migrate_vma_collect_hole() Revert "mm: migrate: skip shared exec THP for NUMA balancing" Subsystem: mm/cma Minchan Kim <minchan@kernel.org>: mm: vmstat: add cma statistics Baolin Wang <baolin.wang@linux.alibaba.com>: mm: cma: use pr_err_ratelimited for CMA warning Liam Mark <lmark@codeaurora.org>: mm: cma: add trace events for CMA alloc perf testing Minchan Kim <minchan@kernel.org>: mm: cma: support sysfs mm: cma: add the CMA instance name to cma trace events mm: use proper type for cma_[alloc|release] Subsystem: mm/ksm Miaohe Lin <linmiaohe@huawei.com>: Patch series "Cleanup and fixup for ksm": ksm: remove redundant VM_BUG_ON_PAGE() on stable_tree_search() ksm: use GET_KSM_PAGE_NOLOCK to get ksm page in remove_rmap_item_from_tree() ksm: remove dedicated macro KSM_FLAG_MASK ksm: fix potential missing rmap_item for stable_node 
Chengyang Fan <cy.fan@huawei.com>: mm/ksm: remove unused parameter from remove_trailing_rmap_items() Subsystem: mm/vmstat Hugh Dickins <hughd@google.com>: mm: restore node stat checking in /proc/sys/vm/stat_refresh mm: no more EINVAL from /proc/sys/vm/stat_refresh mm: /proc/sys/vm/stat_refresh skip checking known negative stats mm: /proc/sys/vm/stat_refresh stop checking monotonic numa stats Saravanan D <saravanand@fb.com>: x86/mm: track linear mapping split events Subsystem: mm/mmap Liam Howlett <liam.howlett@oracle.com>: mm/mmap.c: don't unlock VMAs in remap_file_pages() Subsystem: mm/kconfig Anshuman Khandual <anshuman.khandual@arm.com>: Patch series "mm: some config cleanups", v2: mm: generalize ARCH_HAS_CACHE_LINE_SIZE mm: generalize SYS_SUPPORTS_HUGETLBFS (rename as ARCH_SUPPORTS_HUGETLBFS) mm: generalize ARCH_ENABLE_MEMORY_[HOTPLUG|HOTREMOVE] mm: drop redundant ARCH_ENABLE_[HUGEPAGE|THP]_MIGRATION mm: drop redundant ARCH_ENABLE_SPLIT_PMD_PTLOCK mm: drop redundant HAVE_ARCH_TRANSPARENT_HUGEPAGE Subsystem: mm/util Joe Perches <joe@perches.com>: mm/util.c: reduce mem_dump_obj() object size Bhaskar Chowdhury <unixbhaskar@gmail.com>: mm/util.c: fix typo Subsystem: mm/memory-hotplug Pavel Tatashin <pasha.tatashin@soleen.com>: Patch series "prohibit pinning pages in ZONE_MOVABLE", v11: mm/gup: don't pin migrated cma pages in movable zone mm/gup: check every subpage of a compound page during isolation mm/gup: return an error on migration failure mm/gup: check for isolation errors mm cma: rename PF_MEMALLOC_NOCMA to PF_MEMALLOC_PIN mm: apply per-task gfp constraints in fast path mm: honor PF_MEMALLOC_PIN for all movable pages mm/gup: do not migrate zero page mm/gup: migrate pinned pages out of movable zone memory-hotplug.rst: add a note about ZONE_MOVABLE and page pinning mm/gup: change index type to long as it counts pages mm/gup: longterm pin migration cleanup selftests/vm: gup_test: fix test flag selftests/vm: gup_test: test faulting in kernel, and verify pinnable 
pages Mel Gorman <mgorman@techsingularity.net>: mm/memory_hotplug: remove broken locking of zone PCP structures during hot remove Oscar Salvador <osalvador@suse.de>: Patch series "Allocate memmap from hotadded memory (per device)", v10: drivers/base/memory: introduce memory_block_{online,offline} mm,memory_hotplug: relax fully spanned sections check David Hildenbrand <david@redhat.com>: mm,memory_hotplug: factor out adjusting present pages into adjust_present_page_count() Oscar Salvador <osalvador@suse.de>: mm,memory_hotplug: allocate memmap from the added memory range acpi,memhotplug: enable MHP_MEMMAP_ON_MEMORY when supported mm,memory_hotplug: add kernel boot option to enable memmap_on_memory x86/Kconfig: introduce ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE arm64/Kconfig: introduce ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE Subsystem: mm/zswap Zhiyuan Dai <daizhiyuan@phytium.com.cn>: mm/zswap.c: switch from strlcpy to strscpy Subsystem: mm/zsmalloc zhouchuangao <zhouchuangao@vivo.com>: mm/zsmalloc: use BUG_ON instead of if condition followed by BUG. 
Subsystem: mm/highmem Ira Weiny <ira.weiny@intel.com>: Patch series "btrfs: Convert kmap/memset/kunmap to memzero_user()": iov_iter: lift memzero_page() to highmem.h btrfs: use memzero_page() instead of open coded kmap pattern songqiang <songqiang@uniontech.com>: mm/highmem.c: fix coding style issue Subsystem: mm/cleanups Zhiyuan Dai <daizhiyuan@phytium.com.cn>: mm/mempool: minor coding style tweaks Zhang Yunkai <zhang.yunkai@zte.com.cn>: mm/process_vm_access.c: remove duplicate include Subsystem: mm/kfence Marco Elver <elver@google.com>: kfence: zero guard page after out-of-bounds access Patch series "kfence: optimize timer scheduling", v2: kfence: await for allocation using wait_event kfence: maximize allocation wait timeout duration kfence: use power-efficient work queue to run delayed work Documentation/ABI/testing/sysfs-kernel-mm-cma | 25 Documentation/admin-guide/kernel-parameters.txt | 17 Documentation/admin-guide/mm/memory-hotplug.rst | 9 Documentation/admin-guide/mm/userfaultfd.rst | 105 +- arch/arc/Kconfig | 9 arch/arm/Kconfig | 10 arch/arm64/Kconfig | 34 arch/arm64/mm/hugetlbpage.c | 7 arch/ia64/Kconfig | 14 arch/ia64/mm/hugetlbpage.c | 3 arch/mips/Kconfig | 6 arch/mips/mm/hugetlbpage.c | 4 arch/parisc/Kconfig | 5 arch/parisc/mm/hugetlbpage.c | 2 arch/powerpc/Kconfig | 17 arch/powerpc/mm/hugetlbpage.c | 3 arch/powerpc/platforms/Kconfig.cputype | 16 arch/riscv/Kconfig | 5 arch/s390/Kconfig | 12 arch/s390/mm/hugetlbpage.c | 2 arch/sh/Kconfig | 7 arch/sh/mm/Kconfig | 8 arch/sh/mm/hugetlbpage.c | 2 arch/sparc/mm/hugetlbpage.c | 2 arch/x86/Kconfig | 33 arch/x86/mm/pat/set_memory.c | 8 drivers/acpi/acpi_memhotplug.c | 5 drivers/base/memory.c | 105 ++ fs/Kconfig | 5 fs/block_dev.c | 2 fs/btrfs/compression.c | 5 fs/btrfs/extent_io.c | 22 fs/btrfs/inode.c | 33 fs/btrfs/reflink.c | 6 fs/btrfs/zlib.c | 5 fs/btrfs/zstd.c | 5 fs/buffer.c | 36 fs/dax.c | 8 fs/gfs2/glock.c | 3 fs/hugetlbfs/inode.c | 9 fs/inode.c | 11 fs/proc/task_mmu.c | 3 fs/userfaultfd.c | 149 +++ 
include/linux/buffer_head.h | 4 include/linux/cma.h | 4 include/linux/compaction.h | 1 include/linux/fs.h | 2 include/linux/gfp.h | 2 include/linux/highmem.h | 7 include/linux/huge_mm.h | 3 include/linux/hugetlb.h | 37 include/linux/memcontrol.h | 27 include/linux/memory.h | 8 include/linux/memory_hotplug.h | 15 include/linux/memremap.h | 2 include/linux/migrate.h | 11 include/linux/mm.h | 28 include/linux/mmzone.h | 20 include/linux/pagemap.h | 5 include/linux/pgtable.h | 12 include/linux/sched.h | 2 include/linux/sched/mm.h | 27 include/linux/shrinker.h | 7 include/linux/swap.h | 21 include/linux/userfaultfd_k.h | 55 + include/linux/vm_event_item.h | 8 include/trace/events/cma.h | 92 +- include/trace/events/migrate.h | 25 include/trace/events/mmflags.h | 7 include/uapi/linux/mempolicy.h | 7 include/uapi/linux/userfaultfd.h | 36 init/Kconfig | 5 kernel/sysctl.c | 2 lib/Kconfig.kfence | 1 lib/iov_iter.c | 8 mm/Kconfig | 28 mm/Makefile | 6 mm/cma.c | 70 + mm/cma.h | 25 mm/cma_debug.c | 8 mm/cma_sysfs.c | 112 ++ mm/compaction.c | 113 ++ mm/filemap.c | 24 mm/frontswap.c | 12 mm/gup.c | 264 +++--- mm/gup_test.c | 29 mm/gup_test.h | 3 mm/highmem.c | 11 mm/huge_memory.c | 326 +++++++- mm/hugetlb.c | 843 ++++++++++++++-------- mm/hugetlb_cgroup.c | 9 mm/internal.h | 10 mm/kfence/core.c | 61 + mm/khugepaged.c | 63 - mm/ksm.c | 17 mm/list_lru.c | 6 mm/memcontrol.c | 137 --- mm/memory_hotplug.c | 220 +++++ mm/mempolicy.c | 16 mm/mempool.c | 2 mm/migrate.c | 103 -- mm/mlock.c | 4 mm/mmap.c | 18 mm/oom_kill.c | 2 mm/page_alloc.c | 83 +- mm/process_vm_access.c | 1 mm/shmem.c | 2 mm/sparse.c | 4 mm/swap.c | 69 + mm/swap_state.c | 4 mm/swapfile.c | 4 mm/truncate.c | 19 mm/userfaultfd.c | 39 - mm/util.c | 26 mm/vmalloc.c | 2 mm/vmscan.c | 543 +++++++++----- mm/vmstat.c | 45 - mm/workingset.c | 1 mm/zsmalloc.c | 6 mm/zswap.c | 2 tools/testing/selftests/vm/.gitignore | 1 tools/testing/selftests/vm/Makefile | 1 tools/testing/selftests/vm/gup_test.c | 38 
tools/testing/selftests/vm/split_huge_page_test.c | 400 ++++++++++ tools/testing/selftests/vm/userfaultfd.c | 164 ++++ 125 files changed, 3596 insertions(+), 1668 deletions(-)
* Re: incoming
  2021-05-05 1:32 incoming Andrew Morton
@ 2021-05-05 1:47 ` Linus Torvalds
  2021-05-05 3:16   ` incoming Andrew Morton
  0 siblings, 1 reply; 349+ messages in thread
From: Linus Torvalds @ 2021-05-05 1:47 UTC (permalink / raw)
To: Andrew Morton; +Cc: Linux-MM, mm-commits

On Tue, May 4, 2021 at 6:32 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> 143 patches

Hmm. Only 140 seem to have made it to the list, with 103, 106 and 107 missing.

Maybe just some mail delay? But at least right now

  https://lore.kernel.org/mm-commits/

doesn't show them (and thus 'b4' doesn't work).

I'll check again later.

           Linus
* Re: incoming
  2021-05-05 1:47 ` incoming Linus Torvalds
@ 2021-05-05 3:16   ` Andrew Morton
  2021-05-05 17:10     ` incoming Linus Torvalds
  0 siblings, 1 reply; 349+ messages in thread
From: Andrew Morton @ 2021-05-05 3:16 UTC (permalink / raw)
To: Linus Torvalds; +Cc: Linux-MM, mm-commits

On Tue, 4 May 2021 18:47:19 -0700 Linus Torvalds <torvalds@linux-foundation.org> wrote:

> On Tue, May 4, 2021 at 6:32 PM Andrew Morton <akpm@linux-foundation.org> wrote:
> >
> > 143 patches
>
> Hmm. Only 140 seem to have made it to the list, with 103, 106 and 107 missing.
>
> Maybe just some mail delay? But at least right now
>
>   https://lore.kernel.org/mm-commits/
>
> doesn't show them (and thus 'b4' doesn't work).
>
> I'll check again later.
>

Well that's strange. I see all three via cc:me, but not on linux-mm or
mm-commits.

Let me resend right now with the same in-reply-to. Hopefully they will
land in the correct place.
* Re: incoming
  2021-05-05 3:16 ` incoming Andrew Morton
@ 2021-05-05 17:10   ` Linus Torvalds
  2021-05-05 17:44     ` incoming Andrew Morton
  0 siblings, 1 reply; 349+ messages in thread
From: Linus Torvalds @ 2021-05-05 17:10 UTC (permalink / raw)
To: Andrew Morton, Konstantin Ryabitsev; +Cc: Linux-MM, mm-commits

On Tue, May 4, 2021 at 8:16 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> Let me resend right now with the same in-reply-to. Hopefully they will
> land in the correct place.

Well, you re-sent it twice, and I have three copies in my own mailbox,
but they still don't show up on the mm-commits mailing list.

So the list hates them for some odd reason.

I've picked them up locally, but adding Konstantin to the participants
to see if he can see what's up.

Konstantin: patches 103/106/107 are missing on lore out of Andrew's
series of 143. Odd.

           Linus
* Re: incoming
  2021-05-05 17:10 ` incoming Linus Torvalds
@ 2021-05-05 17:44   ` Andrew Morton
  2021-05-06 3:19     ` incoming Anshuman Khandual
  0 siblings, 1 reply; 349+ messages in thread
From: Andrew Morton @ 2021-05-05 17:44 UTC (permalink / raw)
To: Linus Torvalds; +Cc: Konstantin Ryabitsev, Linux-MM, mm-commits

[-- Attachment #1: Type: text/plain, Size: 1387 bytes --]

On Wed, 5 May 2021 10:10:33 -0700 Linus Torvalds <torvalds@linux-foundation.org> wrote:

> On Tue, May 4, 2021 at 8:16 PM Andrew Morton <akpm@linux-foundation.org> wrote:
> >
> > Let me resend right now with the same in-reply-to. Hopefully they will
> > land in the correct place.
>
> Well, you re-sent it twice, and I have three copies in my own mailbox,
> but they still don't show up on the mm-commits mailing list.
>
> So the list hates them for some odd reason.
>
> I've picked them up locally, but adding Konstantin to the participants
> to see if he can see what's up.
>
> Konstantin: patches 103/106/107 are missing on lore out of Andrew's
> series of 143. Odd.

It's weird. They don't turn up on linux-mm either, and that's running
at kvack.org, also majordomo. They don't get through when sent with
either heirloom-mailx or with sylpheed.

Also, it seems that when Anshuman originally sent the patch, linux-mm
and linux-kernel didn't send it back out. So perhaps a spam filter
triggered? I'm seeing

https://lore.kernel.org/linux-arm-kernel/1615278790-18053-3-git-send-email-anshuman.khandual@arm.com/

which is via linux-arm-kernel@lists.infradead.org but the linux-kernel
server massacred that patch series. Searching
https://lkml.org/lkml/2021/3/9 for "anshuman" only shows 3 of the 7
email series.

One of the emails (as sent by me) is attached, if that helps.
[-- Attachment #2: x.txt --]  [-- Type: text/plain, Size: 21048 bytes --]

Date: Tue, 04 May 2021 20:16:26 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, anshuman.khandual@arm.com,
    aou@eecs.berkeley.edu, arnd@arndb.de, benh@kernel.crashing.org,
    borntraeger@de.ibm.com, bp@alien8.de, catalin.marinas@arm.com,
    dalias@libc.org, deller@gmx.de, gor@linux.ibm.com, hca@linux.ibm.com,
    hpa@zytor.com, James.Bottomley@HansenPartnership.com, linux-mm@kvack.org,
    linux@armlinux.org.uk, mingo@redhat.com, mm-commits@vger.kernel.org,
    mpe@ellerman.id.au, palmerdabbelt@google.com, paul.walmsley@sifive.com,
    paulus@samba.org, tglx@linutronix.de, torvalds@linux-foundation.org,
    tsbogend@alpha.franken.de,
    vgupta@synopsys.com, viro@zeniv.linux.org.uk, will@kernel.org,
    ysato@users.osdn.me
Subject: [patch 103/143] mm: generalize SYS_SUPPORTS_HUGETLBFS (rename as ARCH_SUPPORTS_HUGETLBFS)
Message-ID: <20210505031626.c8o4WL7KE%akpm@linux-foundation.org>
In-Reply-To: <20210504183219.a3cc46aee4013d77402276c5@linux-foundation.org>
User-Agent: s-nail v14.8.16
X-Gm-Original-To: akpm@linux-foundation.org

From: Anshuman Khandual <anshuman.khandual@arm.com>
Subject: mm: generalize SYS_SUPPORTS_HUGETLBFS (rename as ARCH_SUPPORTS_HUGETLBFS)

The SYS_SUPPORTS_HUGETLBFS config has duplicate definitions on the
platforms that subscribe to it.  Instead, make it a generic option which
can be selected on applicable platforms, and rename it
ARCH_SUPPORTS_HUGETLBFS.  This reduces code duplication and makes it
cleaner.

Link: https://lkml.kernel.org/r/1617259448-22529-3-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>	[arm64]
Acked-by: Palmer Dabbelt <palmerdabbelt@google.com>	[riscv]
Acked-by: Michael Ellerman <mpe@ellerman.id.au>	[powerpc]
Cc: Russell King <linux@armlinux.org.uk>
Cc: Will Deacon <will@kernel.org>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Rich Felker <dalias@libc.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 arch/arm/Kconfig                       |    5 +----
 arch/arm64/Kconfig                     |    4 +---
 arch/mips/Kconfig                      |    6 +-----
 arch/parisc/Kconfig                    |    5 +----
 arch/powerpc/Kconfig                   |    3 ---
 arch/powerpc/platforms/Kconfig.cputype |    6 +++---
 arch/riscv/Kconfig                     |    5 +----
 arch/sh/Kconfig                        |    5 +----
 fs/Kconfig                             |    5 ++++-
 9 files changed, 13 insertions(+), 31 deletions(-)

--- a/arch/arm64/Kconfig~mm-generalize-sys_supports_hugetlbfs-rename-as-arch_supports_hugetlbfs
+++ a/arch/arm64/Kconfig
@@ -73,6 +73,7 @@ config ARM64
 	select ARCH_USE_QUEUED_SPINLOCKS
 	select ARCH_USE_SYM_ANNOTATIONS
 	select ARCH_SUPPORTS_DEBUG_PAGEALLOC
+	select ARCH_SUPPORTS_HUGETLBFS
 	select ARCH_SUPPORTS_MEMORY_FAILURE
 	select ARCH_SUPPORTS_SHADOW_CALL_STACK if CC_HAVE_SHADOW_CALL_STACK
 	select ARCH_SUPPORTS_LTO_CLANG if CPU_LITTLE_ENDIAN
@@ -1072,9 +1073,6 @@ config HW_PERF_EVENTS
 	def_bool y
 	depends on ARM_PMU

-config SYS_SUPPORTS_HUGETLBFS
-	def_bool y
-
 config ARCH_HAS_FILTER_PGPROT
 	def_bool y

--- a/arch/arm/Kconfig~mm-generalize-sys_supports_hugetlbfs-rename-as-arch_supports_hugetlbfs
+++ a/arch/arm/Kconfig
@@ -31,6 +31,7 @@ config ARM
 	select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
 	select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT if CPU_V7
 	select ARCH_SUPPORTS_ATOMIC_RMW
+	select ARCH_SUPPORTS_HUGETLBFS if ARM_LPAE
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF
 	select ARCH_USE_MEMTEST
@@ -1511,10 +1512,6 @@ config HW_PERF_EVENTS
 	def_bool y
 	depends on ARM_PMU

-config SYS_SUPPORTS_HUGETLBFS
-	def_bool y
-	depends on ARM_LPAE
-
 config HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	def_bool y
 	depends on ARM_LPAE

--- a/arch/mips/Kconfig~mm-generalize-sys_supports_hugetlbfs-rename-as-arch_supports_hugetlbfs
+++ a/arch/mips/Kconfig
@@ -19,6 +19,7 @@ config MIPS
 	select ARCH_USE_MEMTEST
 	select ARCH_USE_QUEUED_RWLOCKS
 	select ARCH_USE_QUEUED_SPINLOCKS
+	select ARCH_SUPPORTS_HUGETLBFS if CPU_SUPPORTS_HUGEPAGES
 	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU
 	select ARCH_WANT_IPC_PARSE_VERSION
 	select ARCH_WANT_LD_ORPHAN_WARN
@@ -1287,11 +1288,6 @@ config SYS_SUPPORTS_BIG_ENDIAN
 config SYS_SUPPORTS_LITTLE_ENDIAN
 	bool

-config SYS_SUPPORTS_HUGETLBFS
-	bool
-	depends on CPU_SUPPORTS_HUGEPAGES
-	default y
-
 config MIPS_HUGE_TLB_SUPPORT
 	def_bool HUGETLB_PAGE || TRANSPARENT_HUGEPAGE

--- a/arch/parisc/Kconfig~mm-generalize-sys_supports_hugetlbfs-rename-as-arch_supports_hugetlbfs
+++ a/arch/parisc/Kconfig
@@ -12,6 +12,7 @@ config PARISC
 	select ARCH_HAS_STRICT_KERNEL_RWX
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
 	select ARCH_NO_SG_CHAIN
+	select ARCH_SUPPORTS_HUGETLBFS if PA20
 	select ARCH_SUPPORTS_MEMORY_FAILURE
 	select DMA_OPS
 	select RTC_CLASS
@@ -138,10 +139,6 @@ config PGTABLE_LEVELS
 	default 3 if 64BIT && PARISC_PAGE_SIZE_4KB
 	default 2

-config SYS_SUPPORTS_HUGETLBFS
-	def_bool y if PA20
-
-
 menu "Processor type and features"

 choice

--- a/arch/powerpc/Kconfig~mm-generalize-sys_supports_hugetlbfs-rename-as-arch_supports_hugetlbfs
+++ a/arch/powerpc/Kconfig
@@ -697,9 +697,6 @@ config ARCH_SPARSEMEM_DEFAULT
 	def_bool y
 	depends on PPC_BOOK3S_64

-config SYS_SUPPORTS_HUGETLBFS
-	bool
-
 config ILLEGAL_POINTER_VALUE
 	hex
 	# This is roughly half way between the top of user space and the bottom

--- a/arch/powerpc/platforms/Kconfig.cputype~mm-generalize-sys_supports_hugetlbfs-rename-as-arch_supports_hugetlbfs
+++ a/arch/powerpc/platforms/Kconfig.cputype
@@ -40,8 +40,8 @@ config PPC_85xx

 config PPC_8xx
 	bool "Freescale 8xx"
+	select ARCH_SUPPORTS_HUGETLBFS
 	select FSL_SOC
-	select SYS_SUPPORTS_HUGETLBFS
 	select PPC_HAVE_KUEP
 	select PPC_HAVE_KUAP
 	select HAVE_ARCH_VMAP_STACK
@@ -95,9 +95,9 @@ config PPC_BOOK3S_64
 	bool "Server processors"
 	select PPC_FPU
 	select PPC_HAVE_PMU_SUPPORT
-	select SYS_SUPPORTS_HUGETLBFS
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
+	select ARCH_SUPPORTS_HUGETLBFS
 	select ARCH_SUPPORTS_NUMA_BALANCING
 	select IRQ_WORK
 	select PPC_MM_SLICES
@@ -278,9 +278,9 @@ config FSL_BOOKE
 # this is for common code between PPC32 & PPC64 FSL BOOKE
 config PPC_FSL_BOOK3E
 	bool
+	select ARCH_SUPPORTS_HUGETLBFS if PHYS_64BIT || PPC64
 	select FSL_EMB_PERFMON
 	select PPC_SMP_MUXED_IPI
-	select SYS_SUPPORTS_HUGETLBFS if PHYS_64BIT || PPC64
 	select PPC_DOORBELL
 	default y if FSL_BOOKE

--- a/arch/riscv/Kconfig~mm-generalize-sys_supports_hugetlbfs-rename-as-arch_supports_hugetlbfs
+++ a/arch/riscv/Kconfig
@@ -30,6 +30,7 @@ config RISCV
 	select ARCH_HAS_STRICT_KERNEL_RWX if MMU
 	select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
 	select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
+	select ARCH_SUPPORTS_HUGETLBFS if MMU
 	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU
 	select ARCH_WANT_FRAME_POINTERS
 	select ARCH_WANT_HUGE_PMD_SHARE if 64BIT
@@ -165,10 +166,6 @@ config ARCH_WANT_GENERAL_HUGETLB

 config ARCH_SUPPORTS_UPROBES
 	def_bool y

-config SYS_SUPPORTS_HUGETLBFS
-	depends on MMU
-	def_bool y
-
 config STACKTRACE_SUPPORT
 	def_bool y

--- a/arch/sh/Kconfig~mm-generalize-sys_supports_hugetlbfs-rename-as-arch_supports_hugetlbfs
+++ a/arch/sh/Kconfig
@@ -101,9 +101,6 @@ config SYS_SUPPORTS_APM_EMULATION
 	bool
 	select ARCH_SUSPEND_POSSIBLE

-config SYS_SUPPORTS_HUGETLBFS
-	bool
-
 config SYS_SUPPORTS_SMP
 	bool
@@ -175,12 +172,12 @@ config CPU_SH3

 config CPU_SH4
 	bool
+	select ARCH_SUPPORTS_HUGETLBFS if MMU
 	select CPU_HAS_INTEVT
 	select CPU_HAS_SR_RB
 	select CPU_HAS_FPU if !CPU_SH4AL_DSP
 	select SH_INTC
 	select SYS_SUPPORTS_SH_TMU
-	select SYS_SUPPORTS_HUGETLBFS if MMU

 config CPU_SH4A
 	bool

--- a/fs/Kconfig~mm-generalize-sys_supports_hugetlbfs-rename-as-arch_supports_hugetlbfs
+++ a/fs/Kconfig
@@ -223,10 +223,13 @@ config TMPFS_INODE64

 	  If unsure, say N.

+config ARCH_SUPPORTS_HUGETLBFS
+	def_bool n
+
 config HUGETLBFS
 	bool "HugeTLB file system support"
 	depends on X86 || IA64 || SPARC64 || (S390 && 64BIT) || \
-		   SYS_SUPPORTS_HUGETLBFS || BROKEN
+		   ARCH_SUPPORTS_HUGETLBFS || BROKEN
 	help
 	  hugetlbfs is a filesystem backing for HugeTLB pages, based on
 	  ramfs. For architectures that support it, say Y here and read
_
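The pattern after this patch is uniform across architectures: fs/Kconfig owns the option definition, and a platform opts into hugetlbfs with a single `select` line. As an illustration only (this `FOO` architecture is hypothetical and not part of the series), a new port would declare support like so:

```kconfig
# arch/foo/Kconfig -- hypothetical example, not from this patch series
config FOO
	def_bool y
	# Gate on MMU support, the same way riscv and sh do above
	select ARCH_SUPPORTS_HUGETLBFS if MMU
```

With that one line, the `depends on ... ARCH_SUPPORTS_HUGETLBFS || BROKEN` test in fs/Kconfig makes CONFIG_HUGETLBFS offerable, with no per-arch SYS_SUPPORTS_HUGETLBFS definition to copy around.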
* Re: incoming
  2021-05-05 17:44   ` incoming Andrew Morton
@ 2021-05-06  3:19     ` Anshuman Khandual
  0 siblings, 0 replies; 349+ messages in thread
From: Anshuman Khandual @ 2021-05-06  3:19 UTC (permalink / raw)
  To: Andrew Morton, Linus Torvalds; +Cc: Konstantin Ryabitsev, Linux-MM, mm-commits

On 5/5/21 11:14 PM, Andrew Morton wrote:
> On Wed, 5 May 2021 10:10:33 -0700 Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
>> On Tue, May 4, 2021 at 8:16 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>>> Let me resend right now with the same in-reply-to.  Hopefully they will
>>> land in the correct place.
>>
>> Well, you re-sent it twice, and I have three copies in my own mailbox,
>> but they still don't show up on the mm-commits mailing list.
>>
>> So the list hates them for some odd reason.
>>
>> I've picked them up locally, but adding Konstantin to the participants
>> to see if he can see what's up.
>>
>> Konstantin: patches 103/106/107 are missing on lore out of Andrew's
>> series of 143.  Odd.
>
> It's weird.  They don't turn up on linux-mm either, and that's running
> at kvack.org, also majordomo.  They don't get through when sent with
> either heirloom-mailx or with sylpheed.
>
> Also, it seems that when Anshuman originally sent the patch, linux-mm
> and linux-kernel didn't send it back out.  So perhaps a spam filter
> triggered?
>
> I'm seeing
>
> https://lore.kernel.org/linux-arm-kernel/1615278790-18053-3-git-send-email-anshuman.khandual@arm.com/
>
> which is via linux-arm-kernel@lists.infradead.org but the linux-kernel
> server massacred that patch series.  Searching
> https://lkml.org/lkml/2021/3/9 for "anshuman" only shows 3 of the 7
> email series.

Yeah, these patches had problems getting into the MM/LKML lists from the
very beginning, for some strange reason.
* incoming
@ 2021-04-30  5:52 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2021-04-30  5:52 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-mm, mm-commits

A few misc subsystems and some of MM.

178 patches, based on 8ca5297e7e38f2dc8c753d33a5092e7be181fff0.

Subsystems affected by this patch series:

  ia64 kbuild scripts sh ocfs2 kfifo vfs kernel/watchdog mm/slab-generic
  mm/slub mm/kmemleak mm/debug mm/pagecache mm/msync mm/gup mm/memremap
  mm/memcg mm/pagemap mm/mremap mm/dma mm/sparsemem mm/vmalloc
  mm/documentation mm/kasan mm/initialization mm/pagealloc
  mm/memory-failure

Subsystem: ia64

    Zhang Yunkai <zhang.yunkai@zte.com.cn>:
      arch/ia64/kernel/head.S: remove duplicate include

    Bhaskar Chowdhury <unixbhaskar@gmail.com>:
      arch/ia64/kernel/fsys.S: fix typos
      arch/ia64/include/asm/pgtable.h: minor typo fixes

    Valentin Schneider <valentin.schneider@arm.com>:
      ia64: ensure proper NUMA distance and possible map initialization

    Sergei Trofimovich <slyfox@gentoo.org>:
      ia64: drop unused IA64_FW_EMU ifdef
      ia64: simplify code flow around swiotlb init

    Bhaskar Chowdhury <unixbhaskar@gmail.com>:
      ia64: trivial spelling fixes

    Sergei Trofimovich <slyfox@gentoo.org>:
      ia64: fix EFI_DEBUG build
      ia64: mca: always make IA64_MCA_DEBUG an expression
      ia64: drop marked broken DISCONTIGMEM and VIRTUAL_MEM_MAP
      ia64: module: fix symbolizer crash on fdescr

Subsystem: kbuild

    Luc Van Oostenryck <luc.vanoostenryck@gmail.com>:
      include/linux/compiler-gcc.h: sparse can do constant folding of __builtin_bswap*()

Subsystem: scripts

    Tom Saeger <tom.saeger@oracle.com>:
      scripts/spelling.txt: add entries for recent discoveries

    Wan Jiabing <wanjiabing@vivo.com>:
      scripts: a new script for checking duplicate struct declaration

Subsystem: sh

    Zhang Yunkai <zhang.yunkai@zte.com.cn>:
      arch/sh/include/asm/tlb.h: remove duplicate include

Subsystem: ocfs2

    Yang Li <yang.lee@linux.alibaba.com>:
      ocfs2: replace DEFINE_SIMPLE_ATTRIBUTE with DEFINE_DEBUGFS_ATTRIBUTE

    Joseph Qi <joseph.qi@linux.alibaba.com>:
      ocfs2: map flags directly in flags_to_o2dlm()

    Bhaskar Chowdhury <unixbhaskar@gmail.com>:
      ocfs2: fix a typo

    Jiapeng Chong <jiapeng.chong@linux.alibaba.com>:
      ocfs2/dlm: remove unused function

Subsystem: kfifo

    Dan Carpenter <dan.carpenter@oracle.com>:
      kfifo: fix ternary sign extension bugs

Subsystem: vfs

    Randy Dunlap <rdunlap@infradead.org>:
      vfs: fs_parser: clean up kernel-doc warnings

Subsystem: kernel/watchdog

    Petr Mladek <pmladek@suse.com>:
    Patch series "watchdog/softlockup: Report overall time and some cleanup", v2:
      watchdog: rename __touch_watchdog() to a better descriptive name
      watchdog: explicitly update timestamp when reporting softlockup
      watchdog/softlockup: report the overall time of softlockups
      watchdog/softlockup: remove logic that tried to prevent repeated reports
      watchdog: fix barriers when printing backtraces from all CPUs
      watchdog: cleanup handling of false positives

Subsystem: mm/slab-generic

    Rafael Aquini <aquini@redhat.com>:
      mm/slab_common: provide "slab_merge" option for !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT) builds

Subsystem: mm/slub

    Vlastimil Babka <vbabka@suse.cz>:
      mm, slub: enable slub_debug static key when creating cache with explicit debug flags

    Oliver Glitta <glittao@gmail.com>:
      kunit: add a KUnit test for SLUB debugging functionality
      slub: remove resiliency_test() function

    Bhaskar Chowdhury <unixbhaskar@gmail.com>:
      mm/slub.c: trivial typo fixes

Subsystem: mm/kmemleak

    Bhaskar Chowdhury <unixbhaskar@gmail.com>:
      mm/kmemleak.c: fix a typo

Subsystem: mm/debug

    Georgi Djakov <georgi.djakov@linaro.org>:
      mm/page_owner: record the timestamp of all pages during free

    zhongjiang-ali <zhongjiang-ali@linux.alibaba.com>:
      mm, page_owner: remove unused parameter in __set_page_owner_handle

    Sergei Trofimovich <slyfox@gentoo.org>:
      mm: page_owner: fetch backtrace only for tracked pages
      mm: page_owner: use kstrtobool() to parse bool option
      mm: page_owner: detect page_owner recursion via task_struct
      mm: page_poison: print page info when corruption is caught

    Anshuman Khandual <anshuman.khandual@arm.com>:
      mm/memtest: add ARCH_USE_MEMTEST

Subsystem: mm/pagecache

    Jens Axboe <axboe@kernel.dk>:
    Patch series "Improve IOCB_NOWAIT O_DIRECT reads", v3:
      mm: provide filemap_range_needs_writeback() helper
      mm: use filemap_range_needs_writeback() for O_DIRECT reads
      iomap: use filemap_range_needs_writeback() for O_DIRECT reads

    "Matthew Wilcox (Oracle)" <willy@infradead.org>:
      mm/filemap: use filemap_read_page in filemap_fault
      mm/filemap: drop check for truncated page after I/O

    Johannes Weiner <hannes@cmpxchg.org>:
      mm: page-writeback: simplify memcg handling in test_clear_page_writeback()

    "Matthew Wilcox (Oracle)" <willy@infradead.org>:
      mm: move page_mapping_file to pagemap.h

    Rui Sun <sunrui26@huawei.com>:
      mm/filemap: update stale comment

Subsystem: mm/msync

    Nikita Ermakov <sh1r4s3@mail.si-head.nl>:
      mm/msync: exit early when the flags is an MS_ASYNC and start < vm_start

Subsystem: mm/gup

    Joao Martins <joao.m.martins@oracle.com>:
    Patch series "mm/gup: page unpining improvements", v4:
      mm/gup: add compound page list iterator
      mm/gup: decrement head page once for group of subpages
      mm/gup: add a range variant of unpin_user_pages_dirty_lock()
      RDMA/umem: batch page unpin in __ib_umem_release()

    Yang Shi <shy828301@gmail.com>:
      mm: gup: remove FOLL_SPLIT

Subsystem: mm/memremap

    Zhiyuan Dai <daizhiyuan@phytium.com.cn>:
      mm/memremap.c: fix improper SPDX comment style

Subsystem: mm/memcg

    Muchun Song <songmuchun@bytedance.com>:
      mm: memcontrol: fix kernel stack account

    Shakeel Butt <shakeelb@google.com>:
      memcg: cleanup root memcg checks
      memcg: enable memcg oom-kill for __GFP_NOFAIL

    Johannes Weiner <hannes@cmpxchg.org>:
    Patch series "mm: memcontrol: switch to rstat", v3:
      mm: memcontrol: fix cpuhotplug statistics flushing
      mm: memcontrol: kill mem_cgroup_nodeinfo()
      mm: memcontrol: privatize memcg_page_state query functions
      cgroup: rstat: support cgroup1
      cgroup: rstat: punt root-level optimization to individual controllers
      mm: memcontrol: switch to rstat
      mm: memcontrol: consolidate lruvec stat flushing
      kselftests: cgroup: update kmem test for new vmstat implementation

    Shakeel Butt <shakeelb@google.com>:
      memcg: charge before adding to swapcache on swapin

    Muchun Song <songmuchun@bytedance.com>:
    Patch series "Use obj_cgroup APIs to charge kmem pages", v5:
      mm: memcontrol: slab: fix obtain a reference to a freeing memcg
      mm: memcontrol: introduce obj_cgroup_{un}charge_pages
      mm: memcontrol: directly access page->memcg_data in mm/page_alloc.c
      mm: memcontrol: change ug->dummy_page only if memcg changed
      mm: memcontrol: use obj_cgroup APIs to charge kmem pages
      mm: memcontrol: inline __memcg_kmem_{un}charge() into obj_cgroup_{un}charge_pages()
      mm: memcontrol: move PageMemcgKmem to the scope of CONFIG_MEMCG_KMEM

    Wan Jiabing <wanjiabing@vivo.com>:
      linux/memcontrol.h: remove duplicate struct declaration

    Johannes Weiner <hannes@cmpxchg.org>:
      mm: page_counter: mitigate consequences of a page_counter underflow

Subsystem: mm/pagemap

    Wang Qing <wangqing@vivo.com>:
      mm/memory.c: do_numa_page(): delete bool "migrated"

    Zhiyuan Dai <daizhiyuan@phytium.com.cn>:
      mm/interval_tree: add comments to improve code readability

    Oscar Salvador <osalvador@suse.de>:
    Patch series "Cleanup and fixups for vmemmap handling", v6:
      x86/vmemmap: drop handling of 4K unaligned vmemmap range
      x86/vmemmap: drop handling of 1GB vmemmap ranges
      x86/vmemmap: handle unpopulated sub-pmd ranges
      x86/vmemmap: optimize for consecutive sections in partial populated PMDs

    Ovidiu Panait <ovidiu.panait@windriver.com>:
      mm, tracing: improve rss_stat tracepoint message

    Christoph Hellwig <hch@lst.de>:
    Patch series "add remap_pfn_range_notrack instead of reinventing it in i915", v2:
      mm: add remap_pfn_range_notrack
      mm: add a io_mapping_map_user helper
      i915: use io_mapping_map_user
      i915: fix remap_io_sg to verify the pgprot

    Huang Ying <ying.huang@intel.com>:
      NUMA balancing: reduce TLB flush via delaying mapping on hint page fault

Subsystem: mm/mremap

    Brian Geffon <bgeffon@google.com>:
    Patch series "mm: Extend MREMAP_DONTUNMAP to non-anonymous mappings", v5:
      mm: extend MREMAP_DONTUNMAP to non-anonymous mappings
      Revert "mremap: don't allow MREMAP_DONTUNMAP on special_mappings and aio"
      selftests: add a MREMAP_DONTUNMAP selftest for shmem

Subsystem: mm/dma

    Zhiyuan Dai <daizhiyuan@phytium.com.cn>:
      mm/dmapool: switch from strlcpy to strscpy

Subsystem: mm/sparsemem

    Wang Wensheng <wangwensheng4@huawei.com>:
      mm/sparse: add the missing sparse_buffer_fini() in error branch

Subsystem: mm/vmalloc

    Christoph Hellwig <hch@lst.de>:
    Patch series "remap_vmalloc_range cleanups":
      samples/vfio-mdev/mdpy: use remap_vmalloc_range
      mm: unexport remap_vmalloc_range_partial

    Serapheim Dimitropoulos <serapheim.dimitro@delphix.com>:
      mm/vmalloc: use rb_tree instead of list for vread() lookups

    Nicholas Piggin <npiggin@gmail.com>:
    Patch series "huge vmalloc mappings", v13:
      ARM: mm: add missing pud_page define to 2-level page tables
      mm/vmalloc: fix HUGE_VMAP regression by enabling huge pages in vmalloc_to_page
      mm: apply_to_pte_range warn and fail if a large pte is encountered
      mm/vmalloc: rename vmap_*_range vmap_pages_*_range
      mm/ioremap: rename ioremap_*_range to vmap_*_range
      mm: HUGE_VMAP arch support cleanup
      powerpc: inline huge vmap supported functions
      arm64: inline huge vmap supported functions
      x86: inline huge vmap supported functions
      mm/vmalloc: provide fallback arch huge vmap support functions
      mm: move vmap_range from mm/ioremap.c to mm/vmalloc.c
      mm/vmalloc: add vmap_range_noflush variant
      mm/vmalloc: hugepage vmalloc mappings
    Patch series "mm/vmalloc: cleanup after hugepage series", v2:
      mm/vmalloc: remove map_kernel_range
      kernel/dma: remove unnecessary unmap_kernel_range
      powerpc/xive: remove unnecessary unmap_kernel_range
      mm/vmalloc: remove unmap_kernel_range
      mm/vmalloc: improve allocation failure error messages

    Vijayanand Jitta <vjitta@codeaurora.org>:
      mm: vmalloc: prevent use after free in _vm_unmap_aliases

    "Uladzislau Rezki (Sony)" <urezki@gmail.com>:
      lib/test_vmalloc.c: remove two kvfree_rcu() tests
      lib/test_vmalloc.c: add a new 'nr_threads' parameter
      vm/test_vmalloc.sh: adapt for updated driver interface
      mm/vmalloc: refactor the preloading loagic
      mm/vmalloc: remove an empty line

Subsystem: mm/documentation

    "Matthew Wilcox (Oracle)" <willy@infradead.org>:
      mm/doc: fix fault_flag_allow_retry_first kerneldoc
      mm/doc: fix page_maybe_dma_pinned kerneldoc
      mm/doc: turn fault flags into an enum
      mm/doc: add mm.h and mm_types.h to the mm-api document

    Lukas Bulwahn <lukas.bulwahn@gmail.com>:
    Patch series "kernel-doc and MAINTAINERS clean-up":
      MAINTAINERS: assign pagewalk.h to MEMORY MANAGEMENT
      pagewalk: prefix struct kernel-doc descriptions

Subsystem: mm/kasan

    Zhiyuan Dai <daizhiyuan@phytium.com.cn>:
      mm/kasan: switch from strlcpy to strscpy

    Peter Collingbourne <pcc@google.com>:
      kasan: fix kasan_byte_accessible() to be consistent with actual checks

    Andrey Konovalov <andreyknvl@google.com>:
      kasan: initialize shadow to TAG_INVALID for SW_TAGS
      mm, kasan: don't poison boot memory with tag-based modes
    Patch series "kasan: integrate with init_on_alloc/free", v3:
      arm64: kasan: allow to init memory when setting tags
      kasan: init memory in kasan_(un)poison for HW_TAGS
      kasan, mm: integrate page_alloc init with HW_TAGS
      kasan, mm: integrate slab init_on_alloc with HW_TAGS
      kasan, mm: integrate slab init_on_free with HW_TAGS
      kasan: docs: clean up sections
      kasan: docs: update overview section
      kasan: docs: update usage section
      kasan: docs: update error reports section
      kasan: docs: update boot parameters section
      kasan: docs: update GENERIC implementation details section
      kasan: docs: update SW_TAGS implementation details section
      kasan: docs: update HW_TAGS implementation details section
      kasan: docs: update shadow memory section
      kasan: docs: update ignoring accesses section
      kasan: docs: update tests section

    Walter Wu <walter-zh.wu@mediatek.com>:
      kasan: record task_work_add() call stack

    Andrey Konovalov <andreyknvl@google.com>:
      kasan: detect false-positives in tests

    Zqiang <qiang.zhang@windriver.com>:
      irq_work: record irq_work_queue() call stack

Subsystem: mm/initialization

    Kefeng Wang <wangkefeng.wang@huawei.com>:
      mm: move mem_init_print_info() into mm_init()

Subsystem: mm/pagealloc

    David Hildenbrand <david@redhat.com>:
      mm/page_alloc: drop pr_info_ratelimited() in alloc_contig_range()

    Minchan Kim <minchan@kernel.org>:
      mm: remove lru_add_drain_all in alloc_contig_range

    Yu Zhao <yuzhao@google.com>:
      include/linux/page-flags-layout.h: correctly determine LAST_CPUPID_WIDTH
      include/linux/page-flags-layout.h: cleanups

    "Matthew Wilcox (Oracle)" <willy@infradead.org>:
    Patch series "Rationalise __alloc_pages wrappers", v3:
      mm/page_alloc: rename alloc_mask to alloc_gfp
      mm/page_alloc: rename gfp_mask to gfp
      mm/page_alloc: combine __alloc_pages and __alloc_pages_nodemask
      mm/mempolicy: rename alloc_pages_current to alloc_pages
      mm/mempolicy: rewrite alloc_pages documentation
      mm/mempolicy: rewrite alloc_pages_vma documentation
      mm/mempolicy: fix mpol_misplaced kernel-doc

    Minchan Kim <minchan@kernel.org>:
      mm: page_alloc: dump migrate-failed pages

    Geert Uytterhoeven <geert@linux-m68k.org>:
      mm/Kconfig: remove default DISCONTIGMEM_MANUAL

    Kefeng Wang <wangkefeng.wang@huawei.com>:
      mm, page_alloc: avoid page_to_pfn() in move_freepages()

    zhouchuangao <zhouchuangao@vivo.com>:
      mm/page_alloc: duplicate include linux/vmalloc.h

    Mel Gorman <mgorman@techsingularity.net>:
    Patch series "Introduce a bulk order-0 page allocator with two in-tree users", v6:
      mm/page_alloc: rename alloced to allocated
      mm/page_alloc: add a bulk page allocator
      mm/page_alloc: add an array-based interface to the bulk page allocator

    Jesper Dangaard Brouer <brouer@redhat.com>:
      mm/page_alloc: optimize code layout for __alloc_pages_bulk
      mm/page_alloc: inline __rmqueue_pcplist

    Chuck Lever <chuck.lever@oracle.com>:
    Patch series "SUNRPC consumer for the bulk page allocator":
      SUNRPC: set rq_page_end differently
      SUNRPC: refresh rq_pages using a bulk page allocator

    Jesper Dangaard Brouer <brouer@redhat.com>:
      net: page_pool: refactor dma_map into own function page_pool_dma_map
      net: page_pool: use alloc_pages_bulk in refill code path

    Sergei Trofimovich <slyfox@gentoo.org>:
      mm: page_alloc: ignore init_on_free=1 for debug_pagealloc=1

    huxiang <huxiang@uniontech.com>:
      mm/page_alloc: redundant definition variables of pfn in for loop

    Mike Rapoport <rppt@linux.ibm.com>:
      mm/mmzone.h: fix existing kernel-doc comments and link them to core-api

Subsystem: mm/memory-failure

    Jane Chu <jane.chu@oracle.com>:
      mm/memory-failure: unnecessary amount of unmapping

 Documentation/admin-guide/kernel-parameters.txt | 7
 Documentation/admin-guide/mm/transhuge.rst | 2
 Documentation/core-api/cachetlb.rst | 4
 Documentation/core-api/mm-api.rst | 6
 Documentation/dev-tools/kasan.rst | 355 +++++-----
 Documentation/vm/page_owner.rst | 2
 Documentation/vm/transhuge.rst | 5
 MAINTAINERS | 1
 arch/Kconfig | 11
 arch/alpha/mm/init.c | 1
 arch/arc/mm/init.c | 1
 arch/arm/Kconfig | 1
 arch/arm/include/asm/pgtable-3level.h | 2
 arch/arm/include/asm/pgtable.h | 3
 arch/arm/mm/copypage-v4mc.c | 1
 arch/arm/mm/copypage-v6.c | 1
 arch/arm/mm/copypage-xscale.c | 1
 arch/arm/mm/init.c | 2
 arch/arm64/Kconfig | 1
 arch/arm64/include/asm/memory.h | 4
 arch/arm64/include/asm/mte-kasan.h | 39 -
 arch/arm64/include/asm/vmalloc.h | 38 -
 arch/arm64/mm/init.c | 4
 arch/arm64/mm/mmu.c | 36 -
 arch/csky/abiv1/cacheflush.c | 1
 arch/csky/mm/init.c | 1
 arch/h8300/mm/init.c | 2
 arch/hexagon/mm/init.c | 1
 arch/ia64/Kconfig | 23
 arch/ia64/configs/bigsur_defconfig | 1
 arch/ia64/include/asm/meminit.h | 11
 arch/ia64/include/asm/module.h | 6
 arch/ia64/include/asm/page.h | 25
 arch/ia64/include/asm/pgtable.h | 7
 arch/ia64/kernel/Makefile | 2
 arch/ia64/kernel/acpi.c | 7
 arch/ia64/kernel/efi.c | 11
 arch/ia64/kernel/fsys.S | 4
 arch/ia64/kernel/head.S | 6
 arch/ia64/kernel/ia64_ksyms.c | 12
 arch/ia64/kernel/machine_kexec.c | 2
 arch/ia64/kernel/mca.c | 4
 arch/ia64/kernel/module.c | 29
 arch/ia64/kernel/pal.S | 6
 arch/ia64/mm/Makefile | 1
 arch/ia64/mm/contig.c | 4
 arch/ia64/mm/discontig.c | 21
 arch/ia64/mm/fault.c | 15
 arch/ia64/mm/init.c | 221 ------
 arch/m68k/mm/init.c | 1
 arch/microblaze/mm/init.c | 1
 arch/mips/Kconfig | 1
 arch/mips/loongson64/numa.c | 1
 arch/mips/mm/cache.c | 1
 arch/mips/mm/init.c | 1
 arch/mips/sgi-ip27/ip27-memory.c | 1
 arch/nds32/mm/init.c | 1
 arch/nios2/mm/cacheflush.c | 1
 arch/nios2/mm/init.c | 1
 arch/openrisc/mm/init.c | 2
 arch/parisc/mm/init.c | 2
 arch/powerpc/Kconfig | 1
 arch/powerpc/include/asm/vmalloc.h | 34 -
 arch/powerpc/kernel/isa-bridge.c | 4
 arch/powerpc/kernel/pci_64.c | 2
 arch/powerpc/mm/book3s64/radix_pgtable.c | 29
 arch/powerpc/mm/ioremap.c | 2
 arch/powerpc/mm/mem.c | 1
 arch/powerpc/sysdev/xive/common.c | 4
 arch/riscv/mm/init.c | 1
 arch/s390/mm/init.c | 2
 arch/sh/include/asm/tlb.h | 10
 arch/sh/mm/cache-sh4.c | 1
 arch/sh/mm/cache-sh7705.c | 1
 arch/sh/mm/init.c | 1
 arch/sparc/include/asm/pgtable_32.h | 3
 arch/sparc/mm/init_32.c | 2
 arch/sparc/mm/init_64.c | 1
 arch/sparc/mm/tlb.c | 1
 arch/um/kernel/mem.c | 1
 arch/x86/Kconfig | 1
 arch/x86/include/asm/vmalloc.h | 42 -
 arch/x86/kernel/cpu/resctrl/pseudo_lock.c | 2
 arch/x86/mm/init_32.c | 2
 arch/x86/mm/init_64.c | 222 ++++--
 arch/x86/mm/ioremap.c | 33
 arch/x86/mm/pgtable.c | 13
 arch/xtensa/Kconfig | 1
 arch/xtensa/mm/init.c | 1
 block/blk-cgroup.c | 17
 drivers/gpu/drm/i915/Kconfig | 1
 drivers/gpu/drm/i915/gem/i915_gem_mman.c | 9
 drivers/gpu/drm/i915/i915_drv.h | 3
 drivers/gpu/drm/i915/i915_mm.c | 117 ---
 drivers/infiniband/core/umem.c | 12
 drivers/pci/pci.c | 2
 fs/aio.c | 5
 fs/fs_parser.c | 2
 fs/iomap/direct-io.c | 24
 fs/ocfs2/blockcheck.c | 2
 fs/ocfs2/dlm/dlmrecovery.c | 7
 fs/ocfs2/stack_o2cb.c | 36 -
 fs/ocfs2/stackglue.c | 2
 include/linux/compiler-gcc.h | 8
 include/linux/fs.h | 2
 include/linux/gfp.h | 45 -
 include/linux/io-mapping.h | 3
 include/linux/io.h | 9
 include/linux/kasan.h | 51 +
 include/linux/memcontrol.h | 271 ++++----
 include/linux/mm.h | 50 -
 include/linux/mmzone.h | 43 -
 include/linux/page-flags-layout.h | 64 -
 include/linux/pagemap.h | 10
 include/linux/pagewalk.h | 4
 include/linux/sched.h | 4
 include/linux/slab.h | 2
 include/linux/slub_def.h | 2
 include/linux/vmalloc.h | 73 +-
 include/linux/vmstat.h | 24
 include/net/page_pool.h | 2
 include/trace/events/kmem.h | 24
 init/main.c | 2
 kernel/cgroup/cgroup.c | 34 -
 kernel/cgroup/rstat.c | 61 +
 kernel/dma/remap.c | 1
 kernel/fork.c | 13
 kernel/irq_work.c | 7
 kernel/task_work.c | 3
 kernel/watchdog.c | 102 +--
 lib/Kconfig.debug | 14
 lib/Makefile | 1
 lib/test_kasan.c | 59 -
 lib/test_slub.c | 124 +++
 lib/test_vmalloc.c | 128 +--
 mm/Kconfig | 4
 mm/Makefile | 1
 mm/debug_vm_pgtable.c | 4
 mm/dmapool.c | 2
 mm/filemap.c | 61 +
 mm/gup.c | 145 +++-
 mm/hugetlb.c | 2
 mm/internal.h | 25
 mm/interval_tree.c | 2
 mm/io-mapping.c | 29
 mm/ioremap.c | 361 ++--------
 mm/kasan/common.c | 53 -
 mm/kasan/generic.c | 12
 mm/kasan/kasan.h | 28
 mm/kasan/report_generic.c | 2
 mm/kasan/shadow.c | 10
 mm/kasan/sw_tags.c | 12
 mm/kmemleak.c | 2
 mm/memcontrol.c | 798 ++++++++++++------------
 mm/memory-failure.c | 2
 mm/memory.c | 191 +++--
 mm/mempolicy.c | 78 --
 mm/mempool.c | 4
 mm/memremap.c | 2
 mm/migrate.c | 2
 mm/mm_init.c | 4
 mm/mmap.c | 6
 mm/mremap.c | 6
 mm/msync.c | 6
 mm/page-writeback.c | 9
 mm/page_alloc.c | 430 +++++++++---
 mm/page_counter.c | 8
 mm/page_owner.c | 68 --
 mm/page_poison.c | 6
 mm/percpu-vm.c | 7
 mm/slab.c | 43 -
 mm/slab.h | 24
 mm/slab_common.c | 10
 mm/slub.c | 215 ++--
 mm/sparse.c | 1
 mm/swap_state.c | 13
 mm/util.c | 10
 mm/vmalloc.c | 728 ++++++++++++-----
 net/core/page_pool.c | 127 ++-
 net/sunrpc/svc_xprt.c | 38 -
 samples/kfifo/bytestream-example.c | 8
 samples/kfifo/inttype-example.c | 8
 samples/kfifo/record-example.c | 8
 samples/vfio-mdev/mdpy.c | 4
 scripts/checkdeclares.pl | 53 +
 scripts/spelling.txt | 26
 tools/testing/selftests/cgroup/test_kmem.c | 22
 tools/testing/selftests/vm/mremap_dontunmap.c | 52 +
 tools/testing/selftests/vm/test_vmalloc.sh | 21
 189 files
changed, 3642 insertions(+), 3013 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2021-04-23 21:28 Andrew Morton

From: Andrew Morton @ 2021-04-23 21:28 UTC (permalink / raw)
To: Linus Torvalds; +Cc: mm-commits, linux-mm

5 patches, based on 5bfc75d92efd494db37f5c4c173d3639d4772966.

Subsystems affected by this patch series: coda overlayfs mm/pagecache mm/memcg

Subsystem: coda

Christian König <christian.koenig@amd.com>:
      coda: fix reference counting in coda_file_mmap error path

Subsystem: overlayfs

Christian König <christian.koenig@amd.com>:
      ovl: fix reference counting in ovl_mmap error path

Subsystem: mm/pagecache

Hugh Dickins <hughd@google.com>:
      mm/filemap: fix find_lock_entries hang on 32-bit THP
      mm/filemap: fix mapping_seek_hole_data on THP & 32-bit

Subsystem: mm/memcg

Vasily Averin <vvs@virtuozzo.com>:
      tools/cgroup/slabinfo.py: updated to work on current kernel

 fs/coda/file.c                 |  6 +++---
 fs/overlayfs/file.c            | 11 +----------
 mm/filemap.c                   | 31 +++++++++++++++++++------------
 tools/cgroup/memcg_slabinfo.py |  8 ++++----
 4 files changed, 27 insertions(+), 29 deletions(-)
* incoming @ 2021-04-16 22:45 Andrew Morton

From: Andrew Morton @ 2021-04-16 22:45 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits

12 patches, based on 06c2aac4014c38247256fe49c61b7f55890271e7.

Subsystems affected by this patch series: mm/documentation mm/kasan csky ia64 mm/pagemap gcov lib

Subsystem: mm/documentation

Randy Dunlap <rdunlap@infradead.org>:
      mm: eliminate "expecting prototype" kernel-doc warnings

Subsystem: mm/kasan

Arnd Bergmann <arnd@arndb.de>:
      kasan: fix hwasan build for gcc

Walter Wu <walter-zh.wu@mediatek.com>:
      kasan: remove redundant config option

Subsystem: csky

Randy Dunlap <rdunlap@infradead.org>:
      csky: change a Kconfig symbol name to fix e1000 build error

Subsystem: ia64

Randy Dunlap <rdunlap@infradead.org>:
      ia64: remove duplicate entries in generic_defconfig
      ia64: fix discontig.c section mismatches

John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>:
      ia64: tools: remove inclusion of ia64-specific version of errno.h header
      ia64: tools: remove duplicate definition of ia64_mf() on ia64

Subsystem: mm/pagemap

Zack Rusin <zackr@vmware.com>:
      mm/mapping_dirty_helpers: guard hugepage pud's usage

Christophe Leroy <christophe.leroy@csgroup.eu>:
      mm: ptdump: fix build failure

Subsystem: gcov

Johannes Berg <johannes.berg@intel.com>:
      gcov: clang: fix clang-11+ build

Subsystem: lib

Randy Dunlap <rdunlap@infradead.org>:
      lib: remove "expecting prototype" kernel-doc warnings

 arch/arm64/kernel/sleep.S             |  2 +-
 arch/csky/Kconfig                     |  2 +-
 arch/csky/include/asm/page.h          |  2 +-
 arch/ia64/configs/generic_defconfig   |  2 --
 arch/ia64/mm/discontig.c              |  6 +++---
 arch/x86/kernel/acpi/wakeup_64.S      |  2 +-
 include/linux/kasan.h                 |  2 +-
 kernel/gcov/clang.c                   |  2 +-
 lib/Kconfig.kasan                     |  9 ++-------
 lib/earlycpio.c                       |  4 ++--
 lib/lru_cache.c                       |  3 ++-
 lib/parman.c                          |  4 ++--
 lib/radix-tree.c                      | 11 ++++++-----
 mm/kasan/common.c                     |  2 +-
 mm/kasan/kasan.h                      |  2 +-
 mm/kasan/report_generic.c             |  2 +-
 mm/mapping_dirty_helpers.c            |  2 ++
 mm/mmu_gather.c                       | 29 +++++++++++++++++++----------
 mm/oom_kill.c                         |  2 +-
 mm/ptdump.c                           |  2 +-
 mm/shuffle.c                          |  4 ++--
 scripts/Makefile.kasan                | 22 ++++++++++++++--------
 security/Kconfig.hardening            |  4 ++--
 tools/arch/ia64/include/asm/barrier.h |  3 ---
 tools/include/uapi/asm/errno.h        |  2 --
 25 files changed, 67 insertions(+), 60 deletions(-)
* incoming @ 2021-04-09 20:26 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2021-04-09 20:26 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm 16 patches, based on 17e7124aad766b3f158943acb51467f86220afe9. Subsystems affected by this patch series: MAINTAINERS mailmap mm/kasan mm/gup nds32 gcov ocfs2 ia64 mm/pagecache mm/kasan mm/kfence lib Subsystem: MAINTAINERS Marek Behún <kabel@kernel.org>: MAINTAINERS: update CZ.NIC's Turris information treewide: change my e-mail address, fix my name Subsystem: mailmap Jordan Crouse <jordan@cosmicpenguin.net>: mailmap: update email address for Jordan Crouse Matthew Wilcox <willy@infradead.org>: .mailmap: fix old email addresses Subsystem: mm/kasan Arnd Bergmann <arnd@arndb.de>: kasan: fix hwasan build for gcc Walter Wu <walter-zh.wu@mediatek.com>: kasan: remove redundant config option Subsystem: mm/gup Aili Yao <yaoaili@kingsoft.com>: mm/gup: check page posion status for coredump. Subsystem: nds32 Mike Rapoport <rppt@linux.ibm.com>: nds32: flush_dcache_page: use page_mapping_file to avoid races with swapoff Subsystem: gcov Nick Desaulniers <ndesaulniers@google.com>: gcov: re-fix clang-11+ support Subsystem: ocfs2 Wengang Wang <wen.gang.wang@oracle.com>: ocfs2: fix deadlock between setattr and dio_end_io_write Subsystem: ia64 Sergei Trofimovich <slyfox@gentoo.org>: ia64: fix user_stack_pointer() for ptrace() Subsystem: mm/pagecache Jack Qiu <jack.qiu@huawei.com>: fs: direct-io: fix missing sdio->boundary Subsystem: mm/kasan Andrey Konovalov <andreyknvl@google.com>: kasan: fix conflict with page poisoning Andrew Morton <akpm@linux-foundation.org>: lib/test_kasan_module.c: suppress unused var warning Subsystem: mm/kfence Marco Elver <elver@google.com>: kfence, x86: fix preemptible warning on KPTI-enabled systems Subsystem: lib Julian Braha <julianbraha@gmail.com>: lib: fix kconfig dependency on ARCH_WANT_FRAME_POINTERS .mailmap | 7 ++ Documentation/ABI/testing/debugfs-moxtet | 
4 - Documentation/ABI/testing/debugfs-turris-mox-rwtm | 2 Documentation/ABI/testing/sysfs-bus-moxtet-devices | 6 +- Documentation/ABI/testing/sysfs-class-led-driver-turris-omnia | 2 Documentation/ABI/testing/sysfs-firmware-turris-mox-rwtm | 10 +-- Documentation/devicetree/bindings/leds/cznic,turris-omnia-leds.yaml | 2 MAINTAINERS | 13 +++- arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts | 2 arch/arm64/kernel/sleep.S | 2 arch/ia64/include/asm/ptrace.h | 8 -- arch/nds32/mm/cacheflush.c | 2 arch/x86/include/asm/kfence.h | 7 ++ arch/x86/kernel/acpi/wakeup_64.S | 2 drivers/bus/moxtet.c | 4 - drivers/firmware/turris-mox-rwtm.c | 4 - drivers/gpio/gpio-moxtet.c | 4 - drivers/leds/leds-turris-omnia.c | 4 - drivers/mailbox/armada-37xx-rwtm-mailbox.c | 4 - drivers/watchdog/armada_37xx_wdt.c | 4 - fs/direct-io.c | 5 + fs/ocfs2/aops.c | 11 --- fs/ocfs2/file.c | 8 ++ include/dt-bindings/bus/moxtet.h | 2 include/linux/armada-37xx-rwtm-mailbox.h | 2 include/linux/kasan.h | 2 include/linux/moxtet.h | 2 kernel/gcov/clang.c | 29 ++++++---- lib/Kconfig.debug | 6 +- lib/Kconfig.kasan | 9 --- lib/test_kasan_module.c | 2 mm/gup.c | 4 + mm/internal.h | 20 ++++++ mm/kasan/common.c | 2 mm/kasan/kasan.h | 2 mm/kasan/report_generic.c | 2 mm/page_poison.c | 4 + scripts/Makefile.kasan | 18 ++++-- security/Kconfig.hardening | 4 - 39 files changed, 136 insertions(+), 91 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2021-03-25 4:36 Andrew Morton

From: Andrew Morton @ 2021-03-25 4:36 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits

14 patches, based on 7acac4b3196caee5e21fb5ea53f8bc124e6a16fc.

Subsystems affected by this patch series: mm/hugetlb mm/kasan mm/gup mm/selftests mm/z3fold squashfs ia64 gcov mm/kfence mm/memblock mm/highmem mailmap

Subsystem: mm/hugetlb

Miaohe Lin <linmiaohe@huawei.com>:
      hugetlb_cgroup: fix imbalanced css_get and css_put pair for shared mappings

Subsystem: mm/kasan

Andrey Konovalov <andreyknvl@google.com>:
      kasan: fix per-page tags for non-page_alloc pages

Subsystem: mm/gup

Sean Christopherson <seanjc@google.com>:
      mm/mmu_notifiers: ensure range_end() is paired with range_start()

Subsystem: mm/selftests

Rong Chen <rong.a.chen@intel.com>:
      selftests/vm: fix out-of-tree build

Subsystem: mm/z3fold

Thomas Hebb <tommyhebb@gmail.com>:
      z3fold: prevent reclaim/free race for headless pages

Subsystem: squashfs

Sean Nyekjaer <sean@geanix.com>:
      squashfs: fix inode lookup sanity checks

Phillip Lougher <phillip@squashfs.org.uk>:
      squashfs: fix xattr id and id lookup sanity checks

Subsystem: ia64

Sergei Trofimovich <slyfox@gentoo.org>:
      ia64: mca: allocate early mca with GFP_ATOMIC
      ia64: fix format strings for err_inject

Subsystem: gcov

Nick Desaulniers <ndesaulniers@google.com>:
      gcov: fix clang-11+ support

Subsystem: mm/kfence

Marco Elver <elver@google.com>:
      kfence: make compatible with kmemleak

Subsystem: mm/memblock

Mike Rapoport <rppt@linux.ibm.com>:
      mm: memblock: fix section mismatch warning again

Subsystem: mm/highmem

Ira Weiny <ira.weiny@intel.com>:
      mm/highmem: fix CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP

Subsystem: mailmap

Andrey Konovalov <andreyknvl@google.com>:
      mailmap: update Andrey Konovalov's email address

 .mailmap                            |  1
 arch/ia64/kernel/err_inject.c       | 22 +++++------
 arch/ia64/kernel/mca.c              |  2 -
 fs/squashfs/export.c                |  8 +++-
 fs/squashfs/id.c                    |  6 ++-
 fs/squashfs/squashfs_fs.h           |  1
 fs/squashfs/xattr_id.c              |  6 ++-
 include/linux/hugetlb_cgroup.h      | 15 ++++++-
 include/linux/memblock.h            |  4 +-
 include/linux/mm.h                  | 18 +++++++--
 include/linux/mmu_notifier.h        | 10 ++---
 kernel/gcov/clang.c                 | 69 ++++++++++++++++++++++++++++++++++++
 mm/highmem.c                        |  4 +-
 mm/hugetlb.c                        | 41 +++++++++++++++++++--
 mm/hugetlb_cgroup.c                 | 10 ++++-
 mm/kfence/core.c                    |  9 ++++
 mm/kmemleak.c                       |  3 +
 mm/mmu_notifier.c                   | 23 ++++++++++++
 mm/z3fold.c                         | 16 +++++++-
 tools/testing/selftests/vm/Makefile |  4 +-
 20 files changed, 230 insertions(+), 42 deletions(-)
* incoming @ 2021-03-13 5:06 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2021-03-13 5:06 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm 29 patches, based on f78d76e72a4671ea52d12752d92077788b4f5d50. Subsystems affected by this patch series: mm/memblock core-kernel kconfig mm/pagealloc fork mm/hugetlb mm/highmem binfmt MAINTAINERS kbuild mm/kfence mm/oom-kill mm/madvise mm/kasan mm/userfaultfd mm/memory-failure ia64 mm/memcg mm/zram Subsystem: mm/memblock Arnd Bergmann <arnd@arndb.de>: memblock: fix section mismatch warning Subsystem: core-kernel Arnd Bergmann <arnd@arndb.de>: stop_machine: mark helpers __always_inline Subsystem: kconfig Masahiro Yamada <masahiroy@kernel.org>: init/Kconfig: make COMPILE_TEST depend on HAS_IOMEM Subsystem: mm/pagealloc Mike Rapoport <rppt@linux.ibm.com>: mm/page_alloc.c: refactor initialization of struct page for holes in memory layout Subsystem: fork Fenghua Yu <fenghua.yu@intel.com>: mm/fork: clear PASID for new mm Subsystem: mm/hugetlb Peter Xu <peterx@redhat.com>: Patch series "mm/hugetlb: Early cow on fork, and a few cleanups", v5: hugetlb: dedup the code to add a new file_region hugetlb: break earlier in add_reservation_in_range() when we can mm: introduce page_needs_cow_for_dma() for deciding whether cow mm: use is_cow_mapping() across tree where proper hugetlb: do early cow when page pinned on src mm Subsystem: mm/highmem OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>: mm/highmem.c: fix zero_user_segments() with start > end Subsystem: binfmt Lior Ribak <liorribak@gmail.com>: binfmt_misc: fix possible deadlock in bm_register_write Subsystem: MAINTAINERS Vlastimil Babka <vbabka@suse.cz>: MAINTAINERS: exclude uapi directories in API/ABI section Subsystem: kbuild Arnd Bergmann <arnd@arndb.de>: linux/compiler-clang.h: define HAVE_BUILTIN_BSWAP* Subsystem: mm/kfence Marco Elver <elver@google.com>: kfence: fix printk format for ptrdiff_t kfence, slab: fix 
cache_alloc_debugcheck_after() for bulk allocations kfence: fix reports if constant function prefixes exist Subsystem: mm/oom-kill "Matthew Wilcox (Oracle)" <willy@infradead.org>: include/linux/sched/mm.h: use rcu_dereference in in_vfork() Subsystem: mm/madvise Suren Baghdasaryan <surenb@google.com>: mm/madvise: replace ptrace attach requirement for process_madvise Subsystem: mm/kasan Andrey Konovalov <andreyknvl@google.com>: kasan, mm: fix crash with HW_TAGS and DEBUG_PAGEALLOC kasan: fix KASAN_STACK dependency for HW_TAGS Subsystem: mm/userfaultfd Nadav Amit <namit@vmware.com>: mm/userfaultfd: fix memory corruption due to writeprotect Subsystem: mm/memory-failure Naoya Horiguchi <naoya.horiguchi@nec.com>: mm, hwpoison: do not lock page again when me_huge_page() successfully recovers Subsystem: ia64 Sergei Trofimovich <slyfox@gentoo.org>: ia64: fix ia64_syscall_get_set_arguments() for break-based syscalls ia64: fix ptrace(PTRACE_SYSCALL_INFO_EXIT) sign Subsystem: mm/memcg Zhou Guanghui <zhouguanghui1@huawei.com>: mm/memcg: rename mem_cgroup_split_huge_fixup to split_page_memcg and add nr_pages argument mm/memcg: set memcg when splitting page Subsystem: mm/zram Minchan Kim <minchan@kernel.org>: zram: fix return value on writeback_store zram: fix broken page writeback MAINTAINERS | 4 arch/ia64/include/asm/syscall.h | 2 arch/ia64/kernel/ptrace.c | 24 +++- drivers/block/zram/zram_drv.c | 17 +- drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c | 4 drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c | 2 fs/binfmt_misc.c | 29 ++--- fs/proc/task_mmu.c | 2 include/linux/compiler-clang.h | 6 + include/linux/memblock.h | 4 include/linux/memcontrol.h | 6 - include/linux/mm.h | 21 +++ include/linux/mm_types.h | 1 include/linux/sched/mm.h | 3 include/linux/stop_machine.h | 11 + init/Kconfig | 3 kernel/fork.c | 8 + lib/Kconfig.kasan | 1 mm/highmem.c | 17 ++ mm/huge_memory.c | 10 - mm/hugetlb.c | 123 +++++++++++++++------ mm/internal.h | 5 mm/kfence/report.c | 30 +++-- mm/madvise.c | 13 ++ 
mm/memcontrol.c | 15 +- mm/memory-failure.c | 4 mm/memory.c | 16 +- mm/page_alloc.c | 167 ++++++++++++++--------- mm/slab.c | 2 29 files changed, 334 insertions(+), 216 deletions(-)
* incoming @ 2021-02-26 1:14 Andrew Morton 2021-02-26 17:55 ` incoming Linus Torvalds 0 siblings, 1 reply; 349+ messages in thread From: Andrew Morton @ 2021-02-26 1:14 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm - The rest of MM. Includes kfence - another runtime memory validator. Not as thorough as KASAN, but it has unmeasurable overhead and is intended to be usable in production builds. - Everything else 118 patches, based on 6fbd6cf85a3be127454a1ad58525a3adcf8612ab. Subsystems affected by this patch series: mm/thp mm/cma mm/vmstat mm/memory-hotplug mm/mlock mm/rmap mm/zswap mm/zsmalloc mm/cleanups mm/kfence mm/kasan2 alpha procfs sysctl misc core-kernel MAINTAINERS lib bitops checkpatch init coredump seq_file gdb ubsan initramfs mm/pagemap2 Subsystem: mm/thp "Matthew Wilcox (Oracle)" <willy@infradead.org>: Patch series "Overhaul multi-page lookups for THP", v4: mm: make pagecache tagged lookups return only head pages mm/shmem: use pagevec_lookup in shmem_unlock_mapping mm/swap: optimise get_shadow_from_swap_cache mm: add FGP_ENTRY mm/filemap: rename find_get_entry to mapping_get_entry mm/filemap: add helper for finding pages mm/filemap: add mapping_seek_hole_data iomap: use mapping_seek_hole_data mm: add and use find_lock_entries mm: add an 'end' parameter to find_get_entries mm: add an 'end' parameter to pagevec_lookup_entries mm: remove nr_entries parameter from pagevec_lookup_entries mm: pass pvec directly to find_get_entries mm: remove pagevec_lookup_entries Rik van Riel <riel@surriel.com>: Patch series "mm,thp,shm: limit shmem THP alloc gfp_mask", v6: mm,thp,shmem: limit shmem THP alloc gfp_mask mm,thp,shm: limit gfp mask to no more than specified mm,thp,shmem: make khugepaged obey tmpfs mount flags mm,shmem,thp: limit shmem THP allocations to requested zones Subsystem: mm/cma Roman Gushchin <guro@fb.com>: mm: cma: allocate cma areas bottom-up David Hildenbrand <david@redhat.com>: mm/cma: expose all pages to the buddy if activation 
of an area fails mm/page_alloc: count CMA pages per zone and print them in /proc/zoneinfo Patrick Daly <pdaly@codeaurora.org>: mm: cma: print region name on failure Subsystem: mm/vmstat Johannes Weiner <hannes@cmpxchg.org>: mm: vmstat: fix NOHZ wakeups for node stat changes mm: vmstat: add some comments on internal storage of byte items Jiang Biao <benbjiang@tencent.com>: mm/vmstat.c: erase latency in vmstat_shepherd Subsystem: mm/memory-hotplug Dan Williams <dan.j.williams@intel.com>: Patch series "mm: Fix pfn_to_online_page() with respect to ZONE_DEVICE", v4: mm: move pfn_to_online_page() out of line mm: teach pfn_to_online_page() to consider subsection validity mm: teach pfn_to_online_page() about ZONE_DEVICE section collisions mm: fix memory_failure() handling of dax-namespace metadata Anshuman Khandual <anshuman.khandual@arm.com>: mm/memory_hotplug: rename all existing 'memhp' into 'mhp' David Hildenbrand <david@redhat.com>: mm/memory_hotplug: MEMHP_MERGE_RESOURCE -> MHP_MERGE_RESOURCE Miaohe Lin <linmiaohe@huawei.com>: mm/memory_hotplug: use helper function zone_end_pfn() to get end_pfn David Hildenbrand <david@redhat.com>: drivers/base/memory: don't store phys_device in memory blocks Documentation: sysfs/memory: clarify some memory block device properties Anshuman Khandual <anshuman.khandual@arm.com>: Patch series "mm/memory_hotplug: Pre-validate the address range with platform", v5: mm/memory_hotplug: prevalidate the address range being added with platform arm64/mm: define arch_get_mappable_range() s390/mm: define arch_get_mappable_range() David Hildenbrand <david@redhat.com>: virtio-mem: check against mhp_get_pluggable_range() which memory we can hotplug Subsystem: mm/mlock Miaohe Lin <linmiaohe@huawei.com>: mm/mlock: stop counting mlocked pages when none vma is found Subsystem: mm/rmap Miaohe Lin <linmiaohe@huawei.com>: mm/rmap: correct some obsolete comments of anon_vma mm/rmap: remove unneeded semicolon in page_not_mapped() mm/rmap: fix obsolete comment 
in __page_check_anon_rmap() mm/rmap: use page_not_mapped in try_to_unmap() mm/rmap: correct obsolete comment of page_get_anon_vma() mm/rmap: fix potential pte_unmap on an not mapped pte Subsystem: mm/zswap Randy Dunlap <rdunlap@infradead.org>: mm: zswap: clean up confusing comment Tian Tao <tiantao6@hisilicon.com>: Patch series "Fix the compatibility of zsmalloc and zswap": mm/zswap: add the flag can_sleep_mapped mm: set the sleep_mapped to true for zbud and z3fold Subsystem: mm/zsmalloc Miaohe Lin <linmiaohe@huawei.com>: mm/zsmalloc.c: convert to use kmem_cache_zalloc in cache_alloc_zspage() Rokudo Yan <wu-yan@tcl.com>: zsmalloc: account the number of compacted pages correctly Miaohe Lin <linmiaohe@huawei.com>: mm/zsmalloc.c: use page_private() to access page->private Subsystem: mm/cleanups Guo Ren <guoren@linux.alibaba.com>: mm: page-flags.h: Typo fix (It -> If) Daniel Vetter <daniel.vetter@ffwll.ch>: mm/dmapool: use might_alloc() mm/backing-dev.c: use might_alloc() Stephen Zhang <stephenzhangzsd@gmail.com>: mm/early_ioremap.c: use __func__ instead of function name Subsystem: mm/kfence Alexander Potapenko <glider@google.com>: Patch series "KFENCE: A low-overhead sampling-based memory safety error detector", v7: mm: add Kernel Electric-Fence infrastructure x86, kfence: enable KFENCE for x86 Marco Elver <elver@google.com>: arm64, kfence: enable KFENCE for ARM64 kfence: use pt_regs to generate stack trace on faults Alexander Potapenko <glider@google.com>: mm, kfence: insert KFENCE hooks for SLAB mm, kfence: insert KFENCE hooks for SLUB kfence, kasan: make KFENCE compatible with KASAN Marco Elver <elver@google.com>: kfence, Documentation: add KFENCE documentation kfence: add test suite MAINTAINERS: add entry for KFENCE kfence: report sensitive information based on no_hash_pointers Alexander Potapenko <glider@google.com>: Patch series "Add error_report_end tracepoint to KFENCE and KASAN", v3: tracing: add error_report_end trace point kfence: use error_report_end 
tracepoint kasan: use error_report_end tracepoint Subsystem: mm/kasan2 Andrey Konovalov <andreyknvl@google.com>: Patch series "kasan: optimizations and fixes for HW_TAGS", v4: kasan, mm: don't save alloc stacks twice kasan, mm: optimize kmalloc poisoning kasan: optimize large kmalloc poisoning kasan: clean up setting free info in kasan_slab_free kasan: unify large kfree checks kasan: rework krealloc tests kasan, mm: fail krealloc on freed objects kasan, mm: optimize krealloc poisoning kasan: ensure poisoning size alignment arm64: kasan: simplify and inline MTE functions kasan: inline HW_TAGS helper functions kasan: clarify that only first bug is reported in HW_TAGS Subsystem: alpha Randy Dunlap <rdunlap@infradead.org>: alpha: remove CONFIG_EXPERIMENTAL from defconfigs Subsystem: procfs Helge Deller <deller@gmx.de>: proc/wchan: use printk format instead of lookup_symbol_name() Josef Bacik <josef@toxicpanda.com>: proc: use kvzalloc for our kernel buffer Subsystem: sysctl Lin Feng <linf@wangsu.com>: sysctl.c: fix underflow value setting risk in vm_table Subsystem: misc Randy Dunlap <rdunlap@infradead.org>: include/linux: remove repeated words Miguel Ojeda <ojeda@kernel.org>: treewide: Miguel has moved Subsystem: core-kernel Hubert Jasudowicz <hubert.jasudowicz@gmail.com>: groups: use flexible-array member in struct group_info groups: simplify struct group_info allocation Randy Dunlap <rdunlap@infradead.org>: kernel: delete repeated words in comments Subsystem: MAINTAINERS Vlastimil Babka <vbabka@suse.cz>: MAINTAINERS: add uapi directories to API/ABI section Subsystem: lib Huang Shijie <sjhuang@iluvatar.ai>: lib/genalloc.c: change return type to unsigned long for bitmap_set_ll Francis Laniel <laniel_francis@privacyrequired.com>: string.h: move fortified functions definitions in a dedicated header. 
Yogesh Lal <ylal@codeaurora.org>: lib: stackdepot: add support to configure STACK_HASH_SIZE Vijayanand Jitta <vjitta@codeaurora.org>: lib: stackdepot: add support to disable stack depot lib: stackdepot: fix ignoring return value warning Masahiro Yamada <masahiroy@kernel.org>: lib/cmdline: remove an unneeded local variable in next_arg() Subsystem: bitops Geert Uytterhoeven <geert+renesas@glider.be>: include/linux/bitops.h: spelling s/synomyn/synonym/ Subsystem: checkpatch Joe Perches <joe@perches.com>: checkpatch: improve blank line after declaration test Peng Wang <rocking@linux.alibaba.com>: checkpatch: ignore warning designated initializers using NR_CPUS Dwaipayan Ray <dwaipayanray1@gmail.com>: checkpatch: trivial style fixes Joe Perches <joe@perches.com>: checkpatch: prefer ftrace over function entry/exit printks checkpatch: improve TYPECAST_INT_CONSTANT test message Aditya Srivastava <yashsri421@gmail.com>: checkpatch: add warning for avoiding .L prefix symbols in assembly files Joe Perches <joe@perches.com>: checkpatch: add kmalloc_array_node to unnecessary OOM message check Chris Down <chris@chrisdown.name>: checkpatch: don't warn about colon termination in linker scripts Song Liu <songliubraving@fb.com>: checkpatch: do not apply "initialise globals to 0" check to BPF progs Subsystem: init Masahiro Yamada <masahiroy@kernel.org>: init/version.c: remove Version_<LINUX_VERSION_CODE> symbol init: clean up early_param_on_off() macro Bhaskar Chowdhury <unixbhaskar@gmail.com>: init/Kconfig: fix a typo in CC_VERSION_TEXT help text Subsystem: coredump Ira Weiny <ira.weiny@intel.com>: fs/coredump: use kmap_local_page() Subsystem: seq_file NeilBrown <neilb@suse.de>: Patch series "Fix some seq_file users that were recently broken": seq_file: document how per-entry resources are managed. 
x86: fix seq_file iteration for pat/memtype.c Subsystem: gdb George Prekas <prekageo@amazon.com>: scripts/gdb: fix list_for_each Sumit Garg <sumit.garg@linaro.org>: kgdb: fix to kill breakpoints on initmem after boot Subsystem: ubsan Andrey Ryabinin <ryabinin.a.a@gmail.com>: ubsan: remove overflow checks Subsystem: initramfs Florian Fainelli <f.fainelli@gmail.com>: initramfs: panic with memory information Subsystem: mm/pagemap2 Huang Pei <huangpei@loongson.cn>: MIPS: make userspace mapping young by default .mailmap | 1 CREDITS | 9 Documentation/ABI/testing/sysfs-devices-memory | 58 - Documentation/admin-guide/auxdisplay/cfag12864b.rst | 2 Documentation/admin-guide/auxdisplay/ks0108.rst | 2 Documentation/admin-guide/kernel-parameters.txt | 6 Documentation/admin-guide/mm/memory-hotplug.rst | 20 Documentation/dev-tools/index.rst | 1 Documentation/dev-tools/kasan.rst | 8 Documentation/dev-tools/kfence.rst | 318 +++++++ Documentation/filesystems/seq_file.rst | 6 MAINTAINERS | 26 arch/alpha/configs/defconfig | 1 arch/arm64/Kconfig | 1 arch/arm64/include/asm/cache.h | 1 arch/arm64/include/asm/kasan.h | 1 arch/arm64/include/asm/kfence.h | 26 arch/arm64/include/asm/mte-def.h | 2 arch/arm64/include/asm/mte-kasan.h | 65 + arch/arm64/include/asm/mte.h | 2 arch/arm64/kernel/mte.c | 46 - arch/arm64/lib/mte.S | 16 arch/arm64/mm/fault.c | 8 arch/arm64/mm/mmu.c | 23 arch/mips/mm/cache.c | 30 arch/s390/mm/init.c | 1 arch/s390/mm/vmem.c | 14 arch/x86/Kconfig | 1 arch/x86/include/asm/kfence.h | 76 + arch/x86/mm/fault.c | 10 arch/x86/mm/pat/memtype.c | 4 drivers/auxdisplay/cfag12864b.c | 4 drivers/auxdisplay/cfag12864bfb.c | 4 drivers/auxdisplay/ks0108.c | 4 drivers/base/memory.c | 35 drivers/block/zram/zram_drv.c | 2 drivers/hv/hv_balloon.c | 2 drivers/virtio/virtio_mem.c | 43 drivers/xen/balloon.c | 2 fs/coredump.c | 4 fs/iomap/seek.c | 125 -- fs/proc/base.c | 21 fs/proc/proc_sysctl.c | 4 include/linux/bitops.h | 2 include/linux/cfag12864b.h | 2 include/linux/cred.h | 2 
include/linux/fortify-string.h | 302 ++++++ include/linux/gfp.h | 2 include/linux/init.h | 4 include/linux/kasan.h | 25 include/linux/kfence.h | 230 +++++ include/linux/kgdb.h | 2 include/linux/khugepaged.h | 2 include/linux/ks0108.h | 2 include/linux/mdev.h | 2 include/linux/memory.h | 3 include/linux/memory_hotplug.h | 33 include/linux/memremap.h | 6 include/linux/mmzone.h | 49 - include/linux/page-flags.h | 4 include/linux/pagemap.h | 10 include/linux/pagevec.h | 10 include/linux/pgtable.h | 8 include/linux/ptrace.h | 2 include/linux/rmap.h | 3 include/linux/slab_def.h | 3 include/linux/slub_def.h | 3 include/linux/stackdepot.h | 9 include/linux/string.h | 282 ------ include/linux/vmstat.h | 6 include/linux/zpool.h | 3 include/linux/zsmalloc.h | 2 include/trace/events/error_report.h | 74 + include/uapi/linux/firewire-cdev.h | 2 include/uapi/linux/input.h | 2 init/Kconfig | 2 init/initramfs.c | 19 init/main.c | 6 init/version.c | 8 kernel/debug/debug_core.c | 11 kernel/events/core.c | 8 kernel/events/uprobes.c | 2 kernel/groups.c | 7 kernel/locking/rtmutex.c | 4 kernel/locking/rwsem.c | 2 kernel/locking/semaphore.c | 2 kernel/sched/fair.c | 2 kernel/sched/membarrier.c | 2 kernel/sysctl.c | 8 kernel/trace/Makefile | 1 kernel/trace/error_report-traces.c | 12 lib/Kconfig | 9 lib/Kconfig.debug | 1 lib/Kconfig.kfence | 84 + lib/Kconfig.ubsan | 17 lib/cmdline.c | 7 lib/genalloc.c | 3 lib/stackdepot.c | 41 lib/test_kasan.c | 111 ++ lib/test_ubsan.c | 49 - lib/ubsan.c | 68 - mm/Makefile | 1 mm/backing-dev.c | 3 mm/cma.c | 64 - mm/dmapool.c | 3 mm/early_ioremap.c | 12 mm/filemap.c | 361 +++++--- mm/huge_memory.c | 6 mm/internal.h | 6 mm/kasan/common.c | 213 +++- mm/kasan/generic.c | 3 mm/kasan/hw_tags.c | 2 mm/kasan/kasan.h | 97 +- mm/kasan/report.c | 8 mm/kasan/shadow.c | 78 + mm/kfence/Makefile | 6 mm/kfence/core.c | 875 +++++++++++++++++++- mm/kfence/kfence.h | 126 ++ mm/kfence/kfence_test.c | 860 +++++++++++++++++++ mm/kfence/report.c | 350 ++++++-- mm/khugepaged.c | 
22 mm/memory-failure.c | 6 mm/memory.c | 4 mm/memory_hotplug.c | 178 +++- mm/memremap.c | 23 mm/mlock.c | 2 mm/page_alloc.c | 1 mm/rmap.c | 24 mm/shmem.c | 160 +-- mm/slab.c | 38 mm/slab_common.c | 29 mm/slub.c | 63 + mm/swap.c | 54 - mm/swap_state.c | 7 mm/truncate.c | 141 --- mm/vmstat.c | 35 mm/z3fold.c | 1 mm/zbud.c | 1 mm/zpool.c | 13 mm/zsmalloc.c | 22 mm/zswap.c | 57 + samples/auxdisplay/cfag12864b-example.c | 2 scripts/Makefile.ubsan | 2 scripts/checkpatch.pl | 152 ++- scripts/gdb/linux/lists.py | 5 145 files changed, 5046 insertions(+), 1682 deletions(-)
* Re: incoming
  2021-02-26  1:14 incoming Andrew Morton
@ 2021-02-26 17:55 ` Linus Torvalds
  2021-02-26 19:16   ` incoming Andrew Morton
  0 siblings, 1 reply; 349+ messages in thread
From: Linus Torvalds @ 2021-02-26 17:55 UTC (permalink / raw)
  To: Andrew Morton; +Cc: mm-commits, Linux-MM

On Thu, Feb 25, 2021 at 5:14 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> - The rest of MM.
>
>   Includes kfence - another runtime memory validator. Not as
>   thorough as KASAN, but it has unmeasurable overhead and is intended
>   to be usable in production builds.
>
> - Everything else

Just to clarify: you have nothing else really pending?

I'm hoping to just do -rc1 this weekend after all - despite my late
start due to loss of power for several days.

I'll allow late stragglers with good reason through, but the fewer of
those there are, the better, of course.

           Thanks,
                Linus

^ permalink raw reply	[flat|nested] 349+ messages in thread
* Re: incoming
  2021-02-26 17:55 ` incoming Linus Torvalds
@ 2021-02-26 19:16   ` Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2021-02-26 19:16 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: mm-commits, Linux-MM

On Fri, 26 Feb 2021 09:55:27 -0800 Linus Torvalds <torvalds@linux-foundation.org> wrote:

> On Thu, Feb 25, 2021 at 5:14 PM Andrew Morton <akpm@linux-foundation.org> wrote:
> >
> > - The rest of MM.
> >
> >   Includes kfence - another runtime memory validator. Not as
> >   thorough as KASAN, but it has unmeasurable overhead and is intended
> >   to be usable in production builds.
> >
> > - Everything else
>
> Just to clarify: you have nothing else really pending?

Yes, that's it from me for -rc1.

^ permalink raw reply	[flat|nested] 349+ messages in thread
* incoming
@ 2021-02-24 19:58 Andrew Morton
  2021-02-24 21:30 ` incoming Linus Torvalds
  0 siblings, 1 reply; 349+ messages in thread
From: Andrew Morton @ 2021-02-24 19:58 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-mm, mm-commits

A few small subsystems and some of MM.

173 patches, based on c03c21ba6f4e95e406a1a7b4c34ef334b977c194.

Subsystems affected by this patch series: hexagon scripts ntfs ocfs2 vfs
mm/slab-generic mm/slab mm/slub mm/debug mm/pagecache mm/swap mm/memcg
mm/pagemap mm/mprotect mm/mremap mm/page-reporting mm/vmalloc mm/kasan
mm/pagealloc mm/memory-failure mm/hugetlb mm/vmscan mm/z3fold
mm/compaction mm/mempolicy mm/oom-kill mm/hugetlbfs mm/migration

Subsystem: hexagon

Randy Dunlap <rdunlap@infradead.org>:
  hexagon: remove CONFIG_EXPERIMENTAL from defconfigs

Subsystem: scripts

tangchunyou <tangchunyou@yulong.com>:
  scripts/spelling.txt: increase error-prone spell checking

zuoqilin <zuoqilin@yulong.com>:
  scripts/spelling.txt: check for "exeeds"

dingsenjie <dingsenjie@yulong.com>:
  scripts/spelling.txt: add "allocted" and "exeeds" typo

Colin Ian King <colin.king@canonical.com>:
  scripts/spelling.txt: add more spellings to spelling.txt

Subsystem: ntfs

Randy Dunlap <rdunlap@infradead.org>:
  ntfs: layout.h: delete duplicated words

Rustam Kovhaev <rkovhaev@gmail.com>:
  ntfs: check for valid standard information attribute

Subsystem: ocfs2

Yi Li <yili@winhong.com>:
  ocfs2: remove redundant conditional before iput

guozh <guozh88@chinatelecom.cn>:
  ocfs2: clean up some definitions which are not used any more

Dan Carpenter <dan.carpenter@oracle.com>:
  ocfs2: fix a use after free on error

Jiapeng Chong <jiapeng.chong@linux.alibaba.com>:
  ocfs2: simplify the calculation of variables

Subsystem: vfs

Randy Dunlap <rdunlap@infradead.org>:
  fs: delete repeated words in comments

Alexey Dobriyan <adobriyan@gmail.com>:
  ramfs: support O_TMPFILE

Subsystem: mm/slab-generic

Jacob Wen <jian.w.wen@oracle.com>:
  mm, tracing: record slab name for kmem_cache_free()

Nikolay Borisov <nborisov@suse.com>:
  mm/sl?b.c: remove ctor argument from kmem_cache_flags

Subsystem: mm/slab

Zhiyuan Dai <daizhiyuan@phytium.com.cn>:
  mm/slab: minor coding style tweaks

Subsystem: mm/slub

Johannes Berg <johannes.berg@intel.com>:
  mm/slub: disable user tracing for kmemleak caches by default

Vlastimil Babka <vbabka@suse.cz>:
Patch series "mm, slab, slub: remove cpu and memory hotplug locks":
  mm, slub: stop freeing kmem_cache_node structures on node offline
  mm, slab, slub: stop taking memory hotplug lock
  mm, slab, slub: stop taking cpu hotplug lock
  mm, slub: splice cpu and page freelists in deactivate_slab()
  mm, slub: remove slub_memcg_sysfs boot param and CONFIG_SLUB_MEMCG_SYSFS_ON

Zhiyuan Dai <daizhiyuan@phytium.com.cn>:
  mm/slub: minor coding style tweaks

Subsystem: mm/debug

"Matthew Wilcox (Oracle)" <willy@infradead.org>:
  mm/debug: improve memcg debugging

Anshuman Khandual <anshuman.khandual@arm.com>:
Patch series "mm/debug_vm_pgtable: Some minor updates", v3:
  mm/debug_vm_pgtable/basic: add validation for dirtiness after write protect
  mm/debug_vm_pgtable/basic: iterate over entire protection_map[]

Miaohe Lin <linmiaohe@huawei.com>:
  mm/page_owner: use helper function zone_end_pfn() to get end_pfn

Subsystem: mm/pagecache

Baolin Wang <baolin.wang@linux.alibaba.com>:
  mm/filemap: remove unused parameter and change to void type for replace_page_cache_page()

Pavel Begunkov <asml.silence@gmail.com>:
  mm/filemap: don't revert iter on -EIOCBQUEUED

"Matthew Wilcox (Oracle)" <willy@infradead.org>:
Patch series "Refactor generic_file_buffered_read", v5:
  mm/filemap: rename generic_file_buffered_read subfunctions
  mm/filemap: remove dynamically allocated array from filemap_read
  mm/filemap: convert filemap_get_pages to take a pagevec
  mm/filemap: use head pages in generic_file_buffered_read
  mm/filemap: pass a sleep state to put_and_wait_on_page_locked
  mm/filemap: support readpage splitting a page
  mm/filemap: inline __wait_on_page_locked_async into caller
  mm/filemap: don't call ->readpage if IOCB_WAITQ is set
  mm/filemap: change filemap_read_page calling conventions
  mm/filemap: change filemap_create_page calling conventions
  mm/filemap: convert filemap_update_page to return an errno
  mm/filemap: move the iocb checks into filemap_update_page
  mm/filemap: add filemap_range_uptodate
  mm/filemap: split filemap_readahead out of filemap_get_pages
  mm/filemap: restructure filemap_get_pages
  mm/filemap: don't relock the page after calling readpage

Christoph Hellwig <hch@lst.de>:
  mm/filemap: rename generic_file_buffered_read to filemap_read
  mm/filemap: simplify generic_file_read_iter

Yang Guo <guoyang2@huawei.com>:
  fs/buffer.c: add checking buffer head stat before clear

Baolin Wang <baolin.wang@linux.alibaba.com>:
  mm: backing-dev: Remove duplicated macro definition

Subsystem: mm/swap

Yang Li <abaci-bugfix@linux.alibaba.com>:
  mm/swap_slots.c: remove redundant NULL check

Stephen Zhang <stephenzhangzsd@gmail.com>:
  mm/swapfile.c: fix debugging information problem

Georgi Djakov <georgi.djakov@linaro.org>:
  mm/page_io: use pr_alert_ratelimited for swap read/write errors

Rikard Falkeborn <rikard.falkeborn@gmail.com>:
  mm/swap_state: constify static struct attribute_group

Yu Zhao <yuzhao@google.com>:
  mm/swap: don't SetPageWorkingset unconditionally during swapin

Subsystem: mm/memcg

Roman Gushchin <guro@fb.com>:
  mm: memcg/slab: pre-allocate obj_cgroups for slab caches with SLAB_ACCOUNT

Muchun Song <songmuchun@bytedance.com>:
  mm: memcontrol: optimize per-lruvec stats counter memory usage
Patch series "Convert all THP vmstat counters to pages", v6:
  mm: memcontrol: fix NR_ANON_THPS accounting in charge moving
  mm: memcontrol: convert NR_ANON_THPS account to pages
  mm: memcontrol: convert NR_FILE_THPS account to pages
  mm: memcontrol: convert NR_SHMEM_THPS account to pages
  mm: memcontrol: convert NR_SHMEM_PMDMAPPED account to pages
  mm: memcontrol: convert NR_FILE_PMDMAPPED account to pages
  mm: memcontrol: make the slab calculation consistent

Alex Shi <alex.shi@linux.alibaba.com>:
  mm/memcg: revise the using condition of lock_page_lruvec function series
  mm/memcg: remove rcu locking for lock_page_lruvec function series

Shakeel Butt <shakeelb@google.com>:
  mm: memcg: add swapcache stat for memcg v2

Roman Gushchin <guro@fb.com>:
  mm: kmem: make __memcg_kmem_(un)charge static

Feng Tang <feng.tang@intel.com>:
  mm: page_counter: re-layout structure to reduce false sharing

Yang Li <abaci-bugfix@linux.alibaba.com>:
  mm/memcontrol: remove redundant NULL check

Muchun Song <songmuchun@bytedance.com>:
  mm: memcontrol: replace the loop with a list_for_each_entry()

Shakeel Butt <shakeelb@google.com>:
  mm/list_lru.c: remove kvfree_rcu_local()

Johannes Weiner <hannes@cmpxchg.org>:
  fs: buffer: use raw page_memcg() on locked page

Muchun Song <songmuchun@bytedance.com>:
  mm: memcontrol: fix swap undercounting in cgroup2
  mm: memcontrol: fix get_active_memcg return value
  mm: memcontrol: fix slub memory accounting

Subsystem: mm/pagemap

Adrian Huang <ahuang12@lenovo.com>:
  mm/mmap.c: remove unnecessary local variable

Miaohe Lin <linmiaohe@huawei.com>:
  mm/memory.c: fix potential pte_unmap_unlock pte error
  mm/pgtable-generic.c: simplify the VM_BUG_ON condition in pmdp_huge_clear_flush()
  mm/pgtable-generic.c: optimize the VM_BUG_ON condition in pmdp_huge_clear_flush()
  mm/memory.c: fix potential pte_unmap_unlock pte error

Subsystem: mm/mprotect

Tianjia Zhang <tianjia.zhang@linux.alibaba.com>:
  mm/mprotect.c: optimize error detection in do_mprotect_pkey()

Subsystem: mm/mremap

Li Xinhai <lixinhai.lxh@gmail.com>:
  mm: rmap: explicitly reset vma->anon_vma in unlink_anon_vmas()
  mm: mremap: unlink anon_vmas when mremap with MREMAP_DONTUNMAP success

Subsystem: mm/page-reporting

sh <sh_def@163.com>:
  mm/page_reporting: use list_entry_is_head() in page_reporting_cycle()

Subsystem: mm/vmalloc

Yang Li <abaci-bugfix@linux.alibaba.com>:
  vmalloc: remove redundant NULL check

Subsystem: mm/kasan

Andrey Konovalov <andreyknvl@google.com>:
Patch series "kasan: HW_TAGS tests support and fixes", v4:
  kasan: prefix global functions with kasan_
  kasan: clarify HW_TAGS impact on TBI
  kasan: clean up comments in tests
  kasan: add macros to simplify checking test constraints
  kasan: add match-all tag tests
  kasan, arm64: allow using KUnit tests with HW_TAGS mode
  kasan: rename CONFIG_TEST_KASAN_MODULE
  kasan: add compiler barriers to KUNIT_EXPECT_KASAN_FAIL
  kasan: adapt kmalloc_uaf2 test to HW_TAGS mode
  kasan: fix memory corruption in kasan_bitops_tags test
  kasan: move _RET_IP_ to inline wrappers
  kasan: fix bug detection via ksize for HW_TAGS mode
  kasan: add proper page allocator tests
  kasan: add a test for kmem_cache_alloc/free_bulk
  kasan: don't run tests when KASAN is not enabled

Walter Wu <walter-zh.wu@mediatek.com>:
  kasan: remove redundant config option

Subsystem: mm/pagealloc

Baoquan He <bhe@redhat.com>:
Patch series "mm: clean up names and parameters of memmap_init_xxxx functions", v5:
  mm: fix prototype warning from kernel test robot
  mm: rename memmap_init() and memmap_init_zone()
  mm: simplify parater of function memmap_init_zone()
  mm: simplify parameter of setup_usemap()
  mm: remove unneeded local variable in free_area_init_core

David Hildenbrand <david@redhat.com>:
Patch series "mm: simplify free_highmem_page() and free_reserved_page()":
  video: fbdev: acornfb: remove free_unused_pages()
  mm: simplify free_highmem_page() and free_reserved_page()

"Matthew Wilcox (Oracle)" <willy@infradead.org>:
  mm/gfp: add kernel-doc for gfp_t

Subsystem: mm/memory-failure

Aili Yao <yaoaili@kingsoft.com>:
  mm,hwpoison: send SIGBUS to PF_MCE_EARLY processes on action required events

Subsystem: mm/hugetlb

Bibo Mao <maobibo@loongson.cn>:
  mm/huge_memory.c: update tlb entry if pmd is changed
  MIPS: do not call flush_tlb_all when setting pmd entry

Miaohe Lin <linmiaohe@huawei.com>:
  mm/hugetlb: fix potential double free in hugetlb_register_node() error path

Li Xinhai <lixinhai.lxh@gmail.com>:
  mm/hugetlb.c: fix unnecessary address expansion of pmd sharing

Miaohe Lin <linmiaohe@huawei.com>:
  mm/hugetlb: avoid unnecessary hugetlb_acct_memory() call
  mm/hugetlb: use helper huge_page_order and pages_per_huge_page
  mm/hugetlb: fix use after free when subpool max_hpages accounting is not enabled

Jiapeng Zhong <abaci-bugfix@linux.alibaba.com>:
  mm/hugetlb: simplify the calculation of variables

Joao Martins <joao.m.martins@oracle.com>:
Patch series "mm/hugetlb: follow_hugetlb_page() improvements", v2:
  mm/hugetlb: grab head page refcount once for group of subpages
  mm/hugetlb: refactor subpage recording

Miaohe Lin <linmiaohe@huawei.com>:
  mm/hugetlb: fix some comment typos

Yanfei Xu <yanfei.xu@windriver.com>:
  mm/hugetlb: remove redundant check in preparing and destroying gigantic page

Zhiyuan Dai <daizhiyuan@phytium.com.cn>:
  mm/hugetlb.c: fix typos in comments

Miaohe Lin <linmiaohe@huawei.com>:
  mm/huge_memory.c: remove unused return value of set_huge_zero_page()

"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>:
  mm/pmem: avoid inserting hugepage PTE entry with fsdax if hugepage support is disabled

Miaohe Lin <linmiaohe@huawei.com>:
  hugetlb_cgroup: use helper pages_per_huge_page() in hugetlb_cgroup
  mm/hugetlb: use helper function range_in_vma() in page_table_shareable()
  mm/hugetlb: remove unnecessary VM_BUG_ON_PAGE on putback_active_hugepage()
  mm/hugetlb: use helper huge_page_size() to get hugepage size

Mike Kravetz <mike.kravetz@oracle.com>:
  hugetlb: fix update_and_free_page contig page struct assumption
  hugetlb: fix copy_huge_page_from_user contig page struct assumption

Chen Wandun <chenwandun@huawei.com>:
  mm/hugetlb: suppress wrong warning info when alloc gigantic page

Subsystem: mm/vmscan

Alex Shi <alex.shi@linux.alibaba.com>:
  mm/vmscan: __isolate_lru_page_prepare() cleanup

Miaohe Lin <linmiaohe@huawei.com>:
  mm/workingset.c: avoid unnecessary max_nodes estimation in count_shadow_nodes()

Yu Zhao <yuzhao@google.com>:
Patch series "mm: lru related cleanups", v2:
  mm/vmscan.c: use add_page_to_lru_list()
  include/linux/mm_inline.h: shuffle lru list addition and deletion functions
  mm: don't pass "enum lru_list" to lru list addition functions
  mm/swap.c: don't pass "enum lru_list" to trace_mm_lru_insertion()
  mm/swap.c: don't pass "enum lru_list" to del_page_from_lru_list()
  mm: add __clear_page_lru_flags() to replace page_off_lru()
  mm: VM_BUG_ON lru page flags
  include/linux/mm_inline.h: fold page_lru_base_type() into its sole caller
  include/linux/mm_inline.h: fold __update_lru_size() into its sole caller
  mm/vmscan.c: make lruvec_lru_size() static

Oscar Salvador <osalvador@suse.de>:
  mm: workingset: clarify eviction order and distance calculation

Mike Kravetz <mike.kravetz@oracle.com>:
Patch series "create hugetlb flags to consolidate state", v3:
  hugetlb: use page.private for hugetlb specific page flags
  hugetlb: convert page_huge_active() HPageMigratable flag
  hugetlb: convert PageHugeTemporary() to HPageTemporary flag
  hugetlb: convert PageHugeFreed to HPageFreed flag
  include/linux/hugetlb.h: add synchronization information for new hugetlb specific flags
  hugetlb: fix uninitialized subpool pointer

Dave Hansen <dave.hansen@linux.intel.com>:
  mm/vmscan: restore zone_reclaim_mode ABI

Subsystem: mm/z3fold

Miaohe Lin <linmiaohe@huawei.com>:
  z3fold: remove unused attribute for release_z3fold_page
  z3fold: simplify the zhdr initialization code in init_z3fold_page()

Subsystem: mm/compaction

Alex Shi <alex.shi@linux.alibaba.com>:
  mm/compaction: remove rcu_read_lock during page compaction

Miaohe Lin <linmiaohe@huawei.com>:
  mm/compaction: remove duplicated VM_BUG_ON_PAGE !PageLocked

Charan Teja Reddy <charante@codeaurora.org>:
  mm/compaction: correct deferral logic for proactive compaction

Wonhyuk Yang <vvghjk1234@gmail.com>:
  mm/compaction: fix misbehaviors of fast_find_migrateblock()

Vlastimil Babka <vbabka@suse.cz>:
  mm, compaction: make fast_isolate_freepages() stay within zone

Subsystem: mm/mempolicy

Huang Ying <ying.huang@intel.com>:
  numa balancing: migrate on fault among multiple bound nodes

Miaohe Lin
<linmiaohe@huawei.com>:
  mm/mempolicy: use helper range_in_vma() in queue_pages_test_walk()

Subsystem: mm/oom-kill

Tang Yizhou <tangyizhou@huawei.com>:
  mm, oom: fix a comment in dump_task()

Subsystem: mm/hugetlbfs

Mike Kravetz <mike.kravetz@oracle.com>:
  mm/hugetlb: change hugetlb_reserve_pages() to type bool
  hugetlbfs: remove special hugetlbfs_set_page_dirty()

Miaohe Lin <linmiaohe@huawei.com>:
  hugetlbfs: remove useless BUG_ON(!inode) in hugetlbfs_setattr()
  hugetlbfs: use helper macro default_hstate in init_hugetlbfs_fs
  hugetlbfs: correct obsolete function name in hugetlbfs_read_iter()
  hugetlbfs: remove meaningless variable avoid_reserve
  hugetlbfs: make hugepage size conversion more readable
  hugetlbfs: correct some obsolete comments about inode i_mutex
  hugetlbfs: fix some comment typos
  hugetlbfs: remove unneeded return value of hugetlb_vmtruncate()

Subsystem: mm/migration

Chengyang Fan <cy.fan@huawei.com>:
  mm/migrate: remove unneeded semicolons

 Documentation/admin-guide/cgroup-v2.rst | 4
 Documentation/admin-guide/kernel-parameters.txt | 8
 Documentation/admin-guide/sysctl/vm.rst | 10
 Documentation/core-api/mm-api.rst | 7
 Documentation/dev-tools/kasan.rst | 24
 Documentation/vm/arch_pgtable_helpers.rst | 8
 arch/arm64/include/asm/memory.h | 1
 arch/arm64/include/asm/mte-kasan.h | 12
 arch/arm64/kernel/mte.c | 12
 arch/arm64/kernel/sleep.S | 2
 arch/arm64/mm/fault.c | 20
 arch/hexagon/configs/comet_defconfig | 1
 arch/ia64/include/asm/pgtable.h | 6
 arch/ia64/mm/init.c | 18
 arch/mips/mm/pgtable-32.c | 1
 arch/mips/mm/pgtable-64.c | 1
 arch/x86/kernel/acpi/wakeup_64.S | 2
 drivers/base/node.c | 33
 drivers/video/fbdev/acornfb.c | 34
 fs/block_dev.c | 2
 fs/btrfs/file.c | 2
 fs/buffer.c | 7
 fs/dcache.c | 4
 fs/direct-io.c | 4
 fs/exec.c | 4
 fs/fhandle.c | 2
 fs/fuse/dev.c | 6
 fs/hugetlbfs/inode.c | 72 --
 fs/ntfs/inode.c | 6
 fs/ntfs/layout.h | 4
 fs/ocfs2/cluster/heartbeat.c | 8
 fs/ocfs2/dlm/dlmast.c | 10
 fs/ocfs2/dlm/dlmcommon.h | 4
 fs/ocfs2/refcounttree.c | 2
 fs/ocfs2/super.c | 2
 fs/pipe.c | 2
 fs/proc/meminfo.c | 10
 fs/proc/vmcore.c | 7
 fs/ramfs/inode.c | 13
 include/linux/fs.h | 4
 include/linux/gfp.h | 14
 include/linux/highmem-internal.h | 5
 include/linux/huge_mm.h | 15
 include/linux/hugetlb.h | 98 ++
 include/linux/kasan-checks.h | 6
 include/linux/kasan.h | 39 -
 include/linux/memcontrol.h | 43 -
 include/linux/migrate.h | 2
 include/linux/mm.h | 28
 include/linux/mm_inline.h | 123 +--
 include/linux/mmzone.h | 30
 include/linux/page-flags.h | 6
 include/linux/page_counter.h | 9
 include/linux/pagemap.h | 5
 include/linux/swap.h | 8
 include/trace/events/kmem.h | 24
 include/trace/events/pagemap.h | 11
 include/uapi/linux/mempolicy.h | 4
 init/Kconfig | 14
 lib/Kconfig.kasan | 14
 lib/Makefile | 2
 lib/test_kasan.c | 446 ++++++++----
 lib/test_kasan_module.c | 5
 mm/backing-dev.c | 6
 mm/compaction.c | 73 +-
 mm/debug.c | 10
 mm/debug_vm_pgtable.c | 86 ++
 mm/filemap.c | 859 +++----
 mm/gup.c | 5
 mm/huge_memory.c | 28
 mm/hugetlb.c | 376 ----
 mm/hugetlb_cgroup.c | 6
 mm/kasan/common.c | 60 -
 mm/kasan/generic.c | 40 -
 mm/kasan/hw_tags.c | 16
 mm/kasan/kasan.h | 87 +-
 mm/kasan/quarantine.c | 22
 mm/kasan/report.c | 15
 mm/kasan/report_generic.c | 10
 mm/kasan/report_hw_tags.c | 8
 mm/kasan/report_sw_tags.c | 8
 mm/kasan/shadow.c | 27
 mm/kasan/sw_tags.c | 22
 mm/khugepaged.c | 6
 mm/list_lru.c | 12
 mm/memcontrol.c | 309 ----
 mm/memory-failure.c | 34
 mm/memory.c | 24
 mm/memory_hotplug.c | 11
 mm/mempolicy.c | 18
 mm/mempool.c | 2
 mm/migrate.c | 10
 mm/mlock.c | 3
 mm/mmap.c | 4
 mm/mprotect.c | 7
 mm/mremap.c | 8
 mm/oom_kill.c | 5
 mm/page_alloc.c | 70 -
 mm/page_io.c | 12
 mm/page_owner.c | 4
 mm/page_reporting.c | 2
 mm/pgtable-generic.c | 9
 mm/rmap.c | 35
 mm/shmem.c | 2
 mm/slab.c | 21
 mm/slab.h | 20
 mm/slab_common.c | 40 -
 mm/slob.c | 2
 mm/slub.c | 169 ++--
 mm/swap.c | 54 -
 mm/swap_slots.c | 3
 mm/swap_state.c | 31
 mm/swapfile.c | 8
 mm/vmscan.c | 100 +-
 mm/vmstat.c | 14
 mm/workingset.c | 7
 mm/z3fold.c | 11
 scripts/Makefile.kasan | 10
 scripts/spelling.txt | 30
 tools/objtool/check.c | 2
 120 files changed, 2249 insertions(+), 1954 deletions(-)

^ permalink raw reply	[flat|nested] 349+ messages in thread
* Re: incoming
  2021-02-24 19:58 incoming Andrew Morton
@ 2021-02-24 21:30 ` Linus Torvalds
  2021-02-24 21:37   ` incoming Linus Torvalds
  0 siblings, 1 reply; 349+ messages in thread
From: Linus Torvalds @ 2021-02-24 21:30 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Linux-MM, mm-commits

On Wed, Feb 24, 2021 at 11:58 AM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> A few small subsystems and some of MM.

Hmm. I haven't bisected things yet, but I suspect it's something with
the KASAN patches. With this all applied, I get:

  lib/crypto/curve25519-hacl64.c: In function ‘ladder_cmult.constprop’:
  lib/crypto/curve25519-hacl64.c:601:1: warning: the frame size of 2288 bytes is larger than 2048 bytes [-Wframe-larger-than=]

and

  lib/bitfield_kunit.c: In function ‘test_bitfields_constants’:
  lib/bitfield_kunit.c:93:1: warning: the frame size of 11200 bytes is larger than 2048 bytes [-Wframe-larger-than=]

which is obviously not really acceptable. A 11kB stack frame _will_
cause issues.

             Linus

^ permalink raw reply	[flat|nested] 349+ messages in thread
* Re: incoming
  2021-02-24 21:30 ` incoming Linus Torvalds
@ 2021-02-24 21:37   ` Linus Torvalds
  2021-02-25  8:53     ` incoming Arnd Bergmann
  0 siblings, 1 reply; 349+ messages in thread
From: Linus Torvalds @ 2021-02-24 21:37 UTC (permalink / raw)
  To: Andrew Morton, Walter Wu, Dmitry Vyukov, Nathan Chancellor, Arnd Bergmann, Andrey Konovalov
  Cc: Linux-MM, mm-commits, Andrey Ryabinin, Alexander Potapenko

On Wed, Feb 24, 2021 at 1:30 PM Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
> Hmm. I haven't bisected things yet, but I suspect it's something with
> the KASAN patches. With this all applied, I get:
>
>   lib/crypto/curve25519-hacl64.c: In function ‘ladder_cmult.constprop’:
>   lib/crypto/curve25519-hacl64.c:601:1: warning: the frame size of 2288 bytes is larger than 2048 bytes [-Wframe-larger-than=]
>
> and
>
>   lib/bitfield_kunit.c: In function ‘test_bitfields_constants’:
>   lib/bitfield_kunit.c:93:1: warning: the frame size of 11200 bytes is larger than 2048 bytes [-Wframe-larger-than=]
>
> which is obviously not really acceptable. A 11kB stack frame _will_
> cause issues.

A quick bisect shows that this was introduced by "[patch 101/173]
kasan: remove redundant config option".

I didn't check what part of that patch screws up, but it's definitely
doing something bad.

I will drop that patch.

             Linus

^ permalink raw reply	[flat|nested] 349+ messages in thread
* Re: incoming
  2021-02-24 21:37 ` incoming Linus Torvalds
@ 2021-02-25  8:53   ` Arnd Bergmann
  2021-02-25  9:12     ` incoming Andrey Ryabinin
  0 siblings, 1 reply; 349+ messages in thread
From: Arnd Bergmann @ 2021-02-25 8:53 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Andrew Morton, Walter Wu, Dmitry Vyukov, Nathan Chancellor, Arnd Bergmann, Andrey Konovalov, Linux-MM, mm-commits, Andrey Ryabinin, Alexander Potapenko

On Wed, Feb 24, 2021 at 10:37 PM Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
> On Wed, Feb 24, 2021 at 1:30 PM Linus Torvalds <torvalds@linux-foundation.org> wrote:
> >
> > Hmm. I haven't bisected things yet, but I suspect it's something with
> > the KASAN patches. With this all applied, I get:
> >
> >   lib/crypto/curve25519-hacl64.c: In function ‘ladder_cmult.constprop’:
> >   lib/crypto/curve25519-hacl64.c:601:1: warning: the frame size of 2288 bytes is larger than 2048 bytes [-Wframe-larger-than=]
> >
> > and
> >
> >   lib/bitfield_kunit.c: In function ‘test_bitfields_constants’:
> >   lib/bitfield_kunit.c:93:1: warning: the frame size of 11200 bytes is larger than 2048 bytes [-Wframe-larger-than=]
> >
> > which is obviously not really acceptable. A 11kB stack frame _will_
> > cause issues.
>
> A quick bisect shows that this was introduced by "[patch 101/173]
> kasan: remove redundant config option".
>
> I didn't check what part of that patch screws up, but it's definitely
> doing something bad.

I'm not sure why that patch surfaced the bug, but it's worth pointing
out that the underlying problem is asan-stack in combination with the
structleak plugin. This will happen for every user of kunit.

I sent a series[1] out earlier this year to turn off the structleak
plugin as an alternative workaround, but need to follow up on the
remaining patches. Someone suggested adding a more generic way to turn
off the plugin for a file instead of open-coding the CFLAGS_REMOVE_*.o
Makefile bit, which would help.

I am also still hoping that someone can come up with a way to make
kunit work better with the structleak plugin, as there shouldn't be a
fundamental reason why it can't work, just that the code pattern
triggers a particularly bad case in the compiler.

        Arnd

[1] https://lore.kernel.org/lkml/20210125124533.101339-1-arnd@kernel.org/

^ permalink raw reply	[flat|nested] 349+ messages in thread
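[Editor's note] The open-coded per-file opt-out Arnd refers to follows kbuild's `CFLAGS_REMOVE_<object>.o` convention for stripping flags from a single object file. A rough sketch of that form (the object name here is illustrative, and removing `$(GCC_PLUGINS_CFLAGS)` disables all gcc plugins for that file, not only structleak):

```make
# Hedged sketch of the workaround: strip the gcc-plugin flags for one
# object so the structleak plugin stops inflating the stack frames that
# KASAN's asan-stack instrumentation has already enlarged.
CFLAGS_REMOVE_bitfield_kunit.o += $(GCC_PLUGINS_CFLAGS)
```

The drawback Arnd notes is that every affected test file needs its own such line, which is why a more generic per-file switch was being discussed.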
* Re: incoming
  2021-02-25  8:53 ` incoming Arnd Bergmann
@ 2021-02-25  9:12   ` Andrey Ryabinin
  2021-02-25 11:07     ` incoming Walter Wu
  0 siblings, 1 reply; 349+ messages in thread
From: Andrey Ryabinin @ 2021-02-25 9:12 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Linus Torvalds, Andrew Morton, Walter Wu, Dmitry Vyukov, Nathan Chancellor, Arnd Bergmann, Andrey Konovalov, Linux-MM, mm-commits, Andrey Ryabinin, Alexander Potapenko

On Thu, Feb 25, 2021 at 11:53 AM Arnd Bergmann <arnd@kernel.org> wrote:
>
> On Wed, Feb 24, 2021 at 10:37 PM Linus Torvalds <torvalds@linux-foundation.org> wrote:
> >
> > On Wed, Feb 24, 2021 at 1:30 PM Linus Torvalds <torvalds@linux-foundation.org> wrote:
> > >
> > > Hmm. I haven't bisected things yet, but I suspect it's something with
> > > the KASAN patches. With this all applied, I get:
> > >
> > >   lib/crypto/curve25519-hacl64.c: In function ‘ladder_cmult.constprop’:
> > >   lib/crypto/curve25519-hacl64.c:601:1: warning: the frame size of 2288 bytes is larger than 2048 bytes [-Wframe-larger-than=]
> > >
> > > and
> > >
> > >   lib/bitfield_kunit.c: In function ‘test_bitfields_constants’:
> > >   lib/bitfield_kunit.c:93:1: warning: the frame size of 11200 bytes is larger than 2048 bytes [-Wframe-larger-than=]
> > >
> > > which is obviously not really acceptable. A 11kB stack frame _will_
> > > cause issues.
> >
> > A quick bisect shows that this was introduced by "[patch 101/173]
> > kasan: remove redundant config option".
> >
> > I didn't check what part of that patch screws up, but it's definitely
> > doing something bad.
>
> I'm not sure why that patch surfaced the bug, but it's worth pointing
> out that the underlying problem is asan-stack in combination with the
> structleak plugin. This will happen for every user of kunit.

The patch didn't update the KASAN_STACK dependency in kconfig:

  config GCC_PLUGIN_STRUCTLEAK_BYREF
      ....
      depends on !(KASAN && KASAN_STACK=1)

This 'depends on' stopped working with the patch.

^ permalink raw reply	[flat|nested] 349+ messages in thread
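[Editor's note] One plausible reading of the breakage Andrey describes, assuming (as the thread suggests) that the patch changed how the `KASAN_STACK` symbol is defined: a Kconfig comparison like `KASAN_STACK=1` only matches while the symbol actually takes the value `1`, so if the symbol's type or value convention changes, the exclusion silently becomes dead and the structleak-plus-asan-stack combination is no longer prevented. A sketch of the shape of the problem, not the literal tree contents:

```kconfig
# Before: KASAN_STACK carries the value 1 when stack instrumentation is
# enabled, so this dependency excludes the bad combination.
config GCC_PLUGIN_STRUCTLEAK_BYREF
        ....
        depends on !(KASAN && KASAN_STACK=1)

# After the patch, if KASAN_STACK no longer evaluates to "1", the
# comparison above can never be true, the "depends on" stops firing, and
# both instrumentations get applied together -- hence the huge frames.
```

This is why Walter's follow-up (below) is to re-send the patch with the dependency updated to match the new symbol.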
* Re: incoming
  2021-02-25  9:12 ` incoming Andrey Ryabinin
@ 2021-02-25 11:07   ` Walter Wu
  0 siblings, 0 replies; 349+ messages in thread
From: Walter Wu @ 2021-02-25 11:07 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Arnd Bergmann, Linus Torvalds, Andrew Morton, Dmitry Vyukov, Nathan Chancellor, Arnd Bergmann, Andrey Konovalov, Linux-MM, mm-commits, Andrey Ryabinin, Alexander Potapenko

Hi Andrey,

On Thu, 2021-02-25 at 12:12 +0300, Andrey Ryabinin wrote:
> On Thu, Feb 25, 2021 at 11:53 AM Arnd Bergmann <arnd@kernel.org> wrote:
> >
> > On Wed, Feb 24, 2021 at 10:37 PM Linus Torvalds <torvalds@linux-foundation.org> wrote:
> > >
> > > On Wed, Feb 24, 2021 at 1:30 PM Linus Torvalds <torvalds@linux-foundation.org> wrote:
> > > >
> > > > Hmm. I haven't bisected things yet, but I suspect it's something with
> > > > the KASAN patches. With this all applied, I get:
> > > >
> > > >   lib/crypto/curve25519-hacl64.c: In function ‘ladder_cmult.constprop’:
> > > >   lib/crypto/curve25519-hacl64.c:601:1: warning: the frame size of 2288 bytes is larger than 2048 bytes [-Wframe-larger-than=]
> > > >
> > > > and
> > > >
> > > >   lib/bitfield_kunit.c: In function ‘test_bitfields_constants’:
> > > >   lib/bitfield_kunit.c:93:1: warning: the frame size of 11200 bytes is larger than 2048 bytes [-Wframe-larger-than=]
> > > >
> > > > which is obviously not really acceptable. A 11kB stack frame _will_
> > > > cause issues.
> > >
> > > A quick bisect shows that this was introduced by "[patch 101/173]
> > > kasan: remove redundant config option".
> > >
> > > I didn't check what part of that patch screws up, but it's definitely
> > > doing something bad.
> >
> > I'm not sure why that patch surfaced the bug, but it's worth pointing
> > out that the underlying problem is asan-stack in combination with the
> > structleak plugin. This will happen for every user of kunit.
>
> The patch didn't update the KASAN_STACK dependency in kconfig:
>
>   config GCC_PLUGIN_STRUCTLEAK_BYREF
>       ....
>       depends on !(KASAN && KASAN_STACK=1)
>
> This 'depends on' stopped working with the patch

Thanks for pointing out this problem. I will re-send that patch.

Walter

^ permalink raw reply	[flat|nested] 349+ messages in thread
* incoming
@ 2021-02-13  4:52 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2021-02-13 4:52 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: mm-commits, linux-mm

6 patches, based on dcc0b49040c70ad827a7f3d58a21b01fdb14e749.

Subsystems affected by this patch series: mm/pagemap scripts MAINTAINERS h8300

Subsystem: mm/pagemap

Mike Rapoport <rppt@linux.ibm.com>:
  m68k: make __pfn_to_phys() and __phys_to_pfn() available for !MMU

Subsystem: scripts

Rong Chen <rong.a.chen@intel.com>:
  scripts/recordmcount.pl: support big endian for ARCH sh

Subsystem: MAINTAINERS

Andrey Konovalov <andreyknvl@google.com>:
  MAINTAINERS: update KASAN file list
  MAINTAINERS: update Andrey Konovalov's email address
  MAINTAINERS: add Andrey Konovalov to KASAN reviewers

Subsystem: h8300

Randy Dunlap <rdunlap@infradead.org>:
  h8300: fix PREEMPTION build, TI_PRE_COUNT undefined

 MAINTAINERS | 8 +++++---
 arch/h8300/kernel/asm-offsets.c | 3 +++
 arch/m68k/include/asm/page.h | 2 +-
 scripts/recordmcount.pl | 6 +++++-
 4 files changed, 14 insertions(+), 5 deletions(-)

^ permalink raw reply	[flat|nested] 349+ messages in thread
* incoming
@ 2021-02-09 21:41 Andrew Morton
  2021-02-10 19:30 ` incoming Linus Torvalds
  0 siblings, 1 reply; 349+ messages in thread
From: Andrew Morton @ 2021-02-09 21:41 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-mm, mm-commits

14 patches, based on e0756cfc7d7cd08c98a53b6009c091a3f6a50be6.

Subsystems affected by this patch series: squashfs mm/kasan firmware
mm/mremap mm/tmpfs mm/selftests MAINTAINERS mm/memcg mm/slub nilfs2

Subsystem: squashfs

Phillip Lougher <phillip@squashfs.org.uk>:
Patch series "Squashfs: fix BIO migration regression and add sanity checks":
  squashfs: avoid out of bounds writes in decompressors
  squashfs: add more sanity checks in id lookup
  squashfs: add more sanity checks in inode lookup
  squashfs: add more sanity checks in xattr id lookup

Subsystem: mm/kasan

Andrey Konovalov <andreyknvl@google.com>:
  kasan: fix stack traces dependency for HW_TAGS

Subsystem: firmware

Fangrui Song <maskray@google.com>:
  firmware_loader: align .builtin_fw to 8

Subsystem: mm/mremap

Arnd Bergmann <arnd@arndb.de>:
  mm/mremap: fix BUILD_BUG_ON() error in get_extent

Subsystem: mm/tmpfs

Seth Forshee <seth.forshee@canonical.com>:
  tmpfs: disallow CONFIG_TMPFS_INODE64 on s390
  tmpfs: disallow CONFIG_TMPFS_INODE64 on alpha

Subsystem: mm/selftests

Rong Chen <rong.a.chen@intel.com>:
  selftests/vm: rename file run_vmtests to run_vmtests.sh

Subsystem: MAINTAINERS

Andrey Ryabinin <ryabinin.a.a@gmail.com>:
  MAINTAINERS: update Andrey Ryabinin's email address

Subsystem: mm/memcg

Johannes Weiner <hannes@cmpxchg.org>:
  Revert "mm: memcontrol: avoid workload stalls when lowering memory.high"

Subsystem: mm/slub

Vlastimil Babka <vbabka@suse.cz>:
  mm, slub: better heuristic for number of cpus when calculating slab order

Subsystem: nilfs2

Joachim Henke <joachim.henke@t-systems.com>:
  nilfs2: make splice write available again

 .mailmap | 1
 Documentation/dev-tools/kasan.rst | 3 -
 MAINTAINERS | 2 -
 fs/Kconfig | 4 +-
 fs/nilfs2/file.c | 1
 fs/squashfs/block.c | 8 ++++
 fs/squashfs/export.c | 41 +++++++++++++++++----
 fs/squashfs/id.c | 40 ++++++++++++++++-----
 fs/squashfs/squashfs_fs_sb.h | 1
 fs/squashfs/super.c | 6 +--
 fs/squashfs/xattr.h | 10 +++++
 fs/squashfs/xattr_id.c | 66 ++++++++++++++++++++++++++++++++------
 include/asm-generic/vmlinux.lds.h | 2 -
 mm/kasan/hw_tags.c | 8 +---
 mm/memcontrol.c | 5 +-
 mm/mremap.c | 5 +-
 mm/slub.c | 18 +++++++++-
 17 files changed, 172 insertions(+), 49 deletions(-)

^ permalink raw reply	[flat|nested] 349+ messages in thread
* Re: incoming 2021-02-09 21:41 incoming Andrew Morton @ 2021-02-10 19:30 ` Linus Torvalds 0 siblings, 0 replies; 349+ messages in thread From: Linus Torvalds @ 2021-02-10 19:30 UTC (permalink / raw) To: Andrew Morton; +Cc: Linux-MM, mm-commits Hah. This series shows a small deficiency in your scripting wrt the diffstat: On Tue, Feb 9, 2021 at 1:41 PM Andrew Morton <akpm@linux-foundation.org> wrote: > > .mailmap | 1 ... > mm/slub.c | 18 +++++++++- > 17 files changed, 172 insertions(+), 49 deletions(-) It actually has 18 files changed, but one of them is a pure rename (no change to the content), and apparently your diffstat tool can't handle that case. It *should* have ended with ... mm/slub.c | 18 +++++- .../selftests/vm/{run_vmtests => run_vmtests.sh} | 0 18 files changed, 172 insertions(+), 49 deletions(-) rename tools/testing/selftests/vm/{run_vmtests => run_vmtests.sh} (100%) if you'd done a proper "git diff -M --stat --summary" of the series. [ Ok, by default git would actually have said 18 files changed, 171 insertions(+), 48 deletions(-) but it looks like you use the patience diff option, which gives that extra insertion/deletion line because it generates the diff a bit differently ] Not a big deal, but it made me briefly wonder "why doesn't my diffstat match yours". Linus ^ permalink raw reply [flat|nested] 349+ messages in thread
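Linus's point is easy to reproduce locally: a pure rename only shows up in a diffstat when git's rename detection (`-M`) is enabled, and `--summary` is what prints the `rename ... (100%)` line he quotes. A minimal sketch in a hypothetical throwaway repo (the file names simply mirror the run_vmtests rename from this series):

```shell
# Reproduce the rename-aware diffstat Linus describes.
# Throwaway repo; names are illustrative, not the real kernel tree.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
printf '#!/bin/sh\n' > run_vmtests
git add run_vmtests
git -c user.name=t -c user.email=t@example.com commit -qm 'add run_vmtests'
git mv run_vmtests run_vmtests.sh
git -c user.name=t -c user.email=t@example.com commit -qm 'rename to .sh'
# With -M, the pure rename is shown as a single entry with 0 changed
# lines, and --summary adds the "rename ... (100%)" line:
git diff -M --stat --summary HEAD~1 HEAD
```

Without `-M`, git reports the same change as one file deleted plus one file created. The one-line count discrepancy Linus mentions in the bracketed aside comes from the patience algorithm (`git diff --diff-algorithm=patience`, or `diff.algorithm=patience` in config), which can split hunks differently from the default Myers diff.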
* incoming @ 2021-02-05 2:31 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2021-02-05 2:31 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm 18 patches, based on 5c279c4cf206e03995e04fd3404fa95ffd243a97. Subsystems affected by this patch series: mm/hugetlb mm/compaction mm/vmalloc gcov mm/shmem mm/memblock mailmap mm/pagecache mm/kasan ubsan mm/hugetlb MAINTAINERS Subsystem: mm/hugetlb Muchun Song <songmuchun@bytedance.com>: mm: hugetlbfs: fix cannot migrate the fallocated HugeTLB page mm: hugetlb: fix a race between freeing and dissolving the page mm: hugetlb: fix a race between isolating and freeing page mm: hugetlb: remove VM_BUG_ON_PAGE from page_huge_active mm: migrate: do not migrate HugeTLB page whose refcount is one Subsystem: mm/compaction Rokudo Yan <wu-yan@tcl.com>: mm, compaction: move high_pfn to the for loop scope Subsystem: mm/vmalloc Rick Edgecombe <rick.p.edgecombe@intel.com>: mm/vmalloc: separate put pages and flush VM flags Subsystem: gcov Johannes Berg <johannes.berg@intel.com>: init/gcov: allow CONFIG_CONSTRUCTORS on UML to fix module gcov Subsystem: mm/shmem Hugh Dickins <hughd@google.com>: mm: thp: fix MADV_REMOVE deadlock on shmem THP Subsystem: mm/memblock Roman Gushchin <guro@fb.com>: memblock: do not start bottom-up allocations with kernel_end Subsystem: mailmap Viresh Kumar <viresh.kumar@linaro.org>: mailmap: fix name/email for Viresh Kumar Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>: mailmap: add entries for Manivannan Sadhasivam Subsystem: mm/pagecache Waiman Long <longman@redhat.com>: mm/filemap: add missing mem_cgroup_uncharge() to __add_to_page_cache_locked() Subsystem: mm/kasan Vincenzo Frascino <vincenzo.frascino@arm.com>: Patch series "kasan: Fix metadata detection for KASAN_HW_TAGS", v5: kasan: add explicit preconditions to kasan_report() kasan: make addr_has_metadata() return true for valid addresses Subsystem: ubsan Nathan Chancellor <nathan@kernel.org>: 
ubsan: implement __ubsan_handle_alignment_assumption Subsystem: mm/hugetlb Muchun Song <songmuchun@bytedance.com>: mm: hugetlb: fix missing put_page in gather_surplus_pages() Subsystem: MAINTAINERS Nathan Chancellor <nathan@kernel.org>: MAINTAINERS/.mailmap: use my @kernel.org address .mailmap | 5 ++++ MAINTAINERS | 2 - fs/hugetlbfs/inode.c | 3 +- include/linux/hugetlb.h | 2 + include/linux/kasan.h | 7 ++++++ include/linux/vmalloc.h | 9 +------- init/Kconfig | 1 init/main.c | 8 ++++++- kernel/gcov/Kconfig | 2 - lib/ubsan.c | 31 ++++++++++++++++++++++++++++ lib/ubsan.h | 6 +++++ mm/compaction.c | 3 +- mm/filemap.c | 4 +++ mm/huge_memory.c | 37 ++++++++++++++++++++------------- mm/hugetlb.c | 53 ++++++++++++++++++++++++++++++++++++++++++------ mm/kasan/kasan.h | 2 - mm/memblock.c | 49 +++++--------------------------------------- mm/migrate.c | 6 +++++ 18 files changed, 153 insertions(+), 77 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2021-01-24 5:00 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2021-01-24 5:00 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits 19 patches, based on e1ae4b0be15891faf46d390e9f3dc9bd71a8cae1. Subsystems affected by this patch series: mm/pagealloc mm/memcg mm/kasan ubsan mm/memory-failure mm/highmem proc MAINTAINERS Subsystem: mm/pagealloc Mike Rapoport <rppt@linux.ibm.com>: Patch series "mm: fix initialization of struct page for holes in memory layout", v3: x86/setup: don't remove E820_TYPE_RAM for pfn 0 mm: fix initialization of struct page for holes in memory layout Subsystem: mm/memcg Roman Gushchin <guro@fb.com>: mm: memcg/slab: optimize objcg stock draining Shakeel Butt <shakeelb@google.com>: mm: memcg: fix memcg file_dirty numa stat mm: fix numa stats for thp migration Johannes Weiner <hannes@cmpxchg.org>: mm: memcontrol: prevent starvation when writing memory.high Subsystem: mm/kasan Lecopzer Chen <lecopzer@gmail.com>: kasan: fix unaligned address is unhandled in kasan_remove_zero_shadow kasan: fix incorrect arguments passing in kasan_add_zero_shadow Andrey Konovalov <andreyknvl@google.com>: kasan: fix HW_TAGS boot parameters kasan, mm: fix conflicts with init_on_alloc/free kasan, mm: fix resetting page_alloc tags for HW_TAGS Subsystem: ubsan Arnd Bergmann <arnd@arndb.de>: ubsan: disable unsigned-overflow check for i386 Subsystem: mm/memory-failure Dan Williams <dan.j.williams@intel.com>: mm: fix page reference leak in soft_offline_page() Subsystem: mm/highmem Thomas Gleixner <tglx@linutronix.de>: Patch series "mm/highmem: Fix fallout from generic kmap_local conversions": sparc/mm/highmem: flush cache and TLB mm/highmem: prepare for overriding set_pte_at() mips/mm/highmem: use set_pte() for kmap_local() powerpc/mm/highmem: use __set_pte_at() for kmap_local() Subsystem: proc Xiaoming Ni <nixiaoming@huawei.com>: proc_sysctl: fix oops caused by incorrect command parameters Subsystem: MAINTAINERS 
Nathan Chancellor <natechancellor@gmail.com>: MAINTAINERS: add a couple more files to the Clang/LLVM section Documentation/dev-tools/kasan.rst | 27 ++--------- MAINTAINERS | 2 arch/mips/include/asm/highmem.h | 1 arch/powerpc/include/asm/highmem.h | 2 arch/sparc/include/asm/highmem.h | 9 ++- arch/x86/kernel/setup.c | 20 +++----- fs/proc/proc_sysctl.c | 7 ++- lib/Kconfig.ubsan | 1 mm/highmem.c | 7 ++- mm/kasan/hw_tags.c | 77 +++++++++++++-------------------- mm/kasan/init.c | 23 +++++---- mm/memcontrol.c | 11 +--- mm/memory-failure.c | 20 ++++++-- mm/migrate.c | 27 ++++++----- mm/page_alloc.c | 86 ++++++++++++++++++++++--------------- mm/slub.c | 7 +-- 16 files changed, 173 insertions(+), 154 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2021-01-12 23:48 Andrew Morton 2021-01-15 23:32 ` incoming Linus Torvalds 0 siblings, 1 reply; 349+ messages in thread From: Andrew Morton @ 2021-01-12 23:48 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits 10 patches, based on e609571b5ffa3528bf85292de1ceaddac342bc1c. Subsystems affected by this patch series: mm/slub mm/pagealloc mm/memcg mm/kasan mm/vmalloc mm/migration mm/hugetlb MAINTAINERS mm/memory-failure mm/process_vm_access Subsystem: mm/slub Jann Horn <jannh@google.com>: mm, slub: consider rest of partial list if acquire_slab() fails Subsystem: mm/pagealloc Hailong liu <liu.hailong6@zte.com.cn>: mm/page_alloc: add a missing mm_page_alloc_zone_locked() tracepoint Subsystem: mm/memcg Hugh Dickins <hughd@google.com>: mm/memcontrol: fix warning in mem_cgroup_page_lruvec() Subsystem: mm/kasan Hailong Liu <liu.hailong6@zte.com.cn>: arm/kasan: fix the array size of kasan_early_shadow_pte[] Subsystem: mm/vmalloc Miaohe Lin <linmiaohe@huawei.com>: mm/vmalloc.c: fix potential memory leak Subsystem: mm/migration Jan Stancek <jstancek@redhat.com>: mm: migrate: initialize err in do_migrate_pages Subsystem: mm/hugetlb Miaohe Lin <linmiaohe@huawei.com>: mm/hugetlb: fix potential missing huge page size info Subsystem: MAINTAINERS Vlastimil Babka <vbabka@suse.cz>: MAINTAINERS: add Vlastimil as slab allocators maintainer Subsystem: mm/memory-failure Oscar Salvador <osalvador@suse.de>: mm,hwpoison: fix printing of page flags Subsystem: mm/process_vm_access Andrew Morton <akpm@linux-foundation.org>: mm/process_vm_access.c: include compat.h MAINTAINERS | 1 + include/linux/kasan.h | 6 +++++- include/linux/memcontrol.h | 2 +- mm/hugetlb.c | 2 +- mm/kasan/init.c | 3 ++- mm/memory-failure.c | 2 +- mm/mempolicy.c | 2 +- mm/page_alloc.c | 31 ++++++++++++++++--------------- mm/process_vm_access.c | 1 + mm/slub.c | 2 +- mm/vmalloc.c | 4 +++- 11 files changed, 33 insertions(+), 23 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming 2021-01-12 23:48 incoming Andrew Morton @ 2021-01-15 23:32 ` Linus Torvalds 0 siblings, 0 replies; 349+ messages in thread From: Linus Torvalds @ 2021-01-15 23:32 UTC (permalink / raw) To: Andrew Morton; +Cc: Linux-MM, mm-commits On Tue, Jan 12, 2021 at 3:48 PM Andrew Morton <akpm@linux-foundation.org> wrote: > > 10 patches, based on e609571b5ffa3528bf85292de1ceaddac342bc1c. Whee. I had completely dropped the ball on this - I had built my usual "akpm" branch with the patches, but then had completely forgotten about it after doing my basic build tests. I tend to leave it for a while to see if people send belated ACK/NAK's for the patches, but that "for a while" is typically "overnight", not several days. So if you ever notice that I haven't merged your patch submission, and you haven't seen me comment on them, feel free to ping me to remind me. Because it might just have gotten lost in the shuffle for some random reason. Admittedly it's rare - I think this is the first time I just randomly noticed three days later that I'd never done the actual merge of the patch-series. Linus ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2020-12-29 23:13 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-12-29 23:13 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits 16 patches, based on dea8dcf2a9fa8cc540136a6cd885c3beece16ec3. Subsystems affected by this patch series: mm/selftests mm/hugetlb kbuild checkpatch mm/pagecache mm/mremap mm/kasan misc lib mm/slub Subsystem: mm/selftests Harish <harish@linux.ibm.com>: selftests/vm: fix building protection keys test Subsystem: mm/hugetlb Mike Kravetz <mike.kravetz@oracle.com>: mm/hugetlb: fix deadlock in hugetlb_cow error path Subsystem: kbuild Masahiro Yamada <masahiroy@kernel.org>: Revert "kbuild: avoid static_assert for genksyms" Subsystem: checkpatch Joe Perches <joe@perches.com>: checkpatch: prefer strscpy to strlcpy Subsystem: mm/pagecache Souptick Joarder <jrdr.linux@gmail.com>: mm: add prototype for __add_to_page_cache_locked() Baoquan He <bhe@redhat.com>: mm: memmap defer init doesn't work as expected Subsystem: mm/mremap Kalesh Singh <kaleshsingh@google.com>: mm/mremap.c: fix extent calculation Nicholas Piggin <npiggin@gmail.com>: mm: generalise COW SMC TLB flushing race comment Subsystem: mm/kasan Walter Wu <walter-zh.wu@mediatek.com>: kasan: fix null pointer dereference in kasan_record_aux_stack Subsystem: misc Randy Dunlap <rdunlap@infradead.org>: local64.h: make <asm/local64.h> mandatory Huang Shijie <sjhuang@iluvatar.ai>: sizes.h: add SZ_8G/SZ_16G/SZ_32G macros Josh Poimboeuf <jpoimboe@redhat.com>: kdev_t: always inline major/minor helper functions Subsystem: lib Huang Shijie <sjhuang@iluvatar.ai>: lib/genalloc: fix the overflow when size is too big Ilya Leoshkevich <iii@linux.ibm.com>: lib/zlib: fix inflating zlib streams on s390 Randy Dunlap <rdunlap@infradead.org>: zlib: move EXPORT_SYMBOL() and MODULE_LICENSE() out of dfltcc_syms.c Subsystem: mm/slub Roman Gushchin <guro@fb.com>: mm: slub: call account_slab_page() after slab page initialization 
arch/alpha/include/asm/local64.h | 1 - arch/arc/include/asm/Kbuild | 1 - arch/arm/include/asm/Kbuild | 1 - arch/arm64/include/asm/Kbuild | 1 - arch/csky/include/asm/Kbuild | 1 - arch/h8300/include/asm/Kbuild | 1 - arch/hexagon/include/asm/Kbuild | 1 - arch/ia64/include/asm/local64.h | 1 - arch/ia64/mm/init.c | 4 ++-- arch/m68k/include/asm/Kbuild | 1 - arch/microblaze/include/asm/Kbuild | 1 - arch/mips/include/asm/Kbuild | 1 - arch/nds32/include/asm/Kbuild | 1 - arch/openrisc/include/asm/Kbuild | 1 - arch/parisc/include/asm/Kbuild | 1 - arch/powerpc/include/asm/Kbuild | 1 - arch/riscv/include/asm/Kbuild | 1 - arch/s390/include/asm/Kbuild | 1 - arch/sh/include/asm/Kbuild | 1 - arch/sparc/include/asm/Kbuild | 1 - arch/x86/include/asm/local64.h | 1 - arch/xtensa/include/asm/Kbuild | 1 - include/asm-generic/Kbuild | 1 + include/linux/build_bug.h | 5 ----- include/linux/kdev_t.h | 22 +++++++++++----------- include/linux/mm.h | 12 ++++++++++-- include/linux/sizes.h | 3 +++ lib/genalloc.c | 25 +++++++++++++------------ lib/zlib_dfltcc/Makefile | 2 +- lib/zlib_dfltcc/dfltcc.c | 6 +++++- lib/zlib_dfltcc/dfltcc_deflate.c | 3 +++ lib/zlib_dfltcc/dfltcc_inflate.c | 4 ++-- lib/zlib_dfltcc/dfltcc_syms.c | 17 ----------------- mm/hugetlb.c | 22 +++++++++++++++++++++- mm/kasan/generic.c | 2 ++ mm/memory.c | 8 +++++--- mm/memory_hotplug.c | 2 +- mm/mremap.c | 4 +++- mm/page_alloc.c | 8 +++++--- mm/slub.c | 5 ++--- scripts/checkpatch.pl | 6 ++++++ tools/testing/selftests/vm/Makefile | 10 +++++----- 42 files changed, 101 insertions(+), 91 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2020-12-22 19:58 Andrew Morton 2020-12-22 21:43 ` incoming Linus Torvalds 0 siblings, 1 reply; 349+ messages in thread From: Andrew Morton @ 2020-12-22 19:58 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits 60 patches, based on 8653b778e454a7708847aeafe689bce07aeeb94e. Subsystems affected by this patch series: mm/kasan Subsystem: mm/kasan Andrey Konovalov <andreyknvl@google.com>: Patch series "kasan: add hardware tag-based mode for arm64", v11: kasan: drop unnecessary GPL text from comment headers kasan: KASAN_VMALLOC depends on KASAN_GENERIC kasan: group vmalloc code kasan: shadow declarations only for software modes kasan: rename (un)poison_shadow to (un)poison_range kasan: rename KASAN_SHADOW_* to KASAN_GRANULE_* kasan: only build init.c for software modes kasan: split out shadow.c from common.c kasan: define KASAN_MEMORY_PER_SHADOW_PAGE kasan: rename report and tags files kasan: don't duplicate config dependencies kasan: hide invalid free check implementation kasan: decode stack frame only with KASAN_STACK_ENABLE kasan, arm64: only init shadow for software modes kasan, arm64: only use kasan_depth for software modes kasan, arm64: move initialization message kasan, arm64: rename kasan_init_tags and mark as __init kasan: rename addr_has_shadow to addr_has_metadata kasan: rename print_shadow_for_address to print_memory_metadata kasan: rename SHADOW layout macros to META kasan: separate metadata_fetch_row for each mode kasan: introduce CONFIG_KASAN_HW_TAGS Vincenzo Frascino <vincenzo.frascino@arm.com>: arm64: enable armv8.5-a asm-arch option arm64: mte: add in-kernel MTE helpers arm64: mte: reset the page tag in page->flags arm64: mte: add in-kernel tag fault handler arm64: kasan: allow enabling in-kernel MTE arm64: mte: convert gcr_user into an exclude mask arm64: mte: switch GCR_EL1 in kernel entry and exit kasan, mm: untag page address in free_reserved_area Andrey Konovalov <andreyknvl@google.com>: arm64: kasan: align allocations 
for HW_TAGS arm64: kasan: add arch layer for memory tagging helpers kasan: define KASAN_GRANULE_SIZE for HW_TAGS kasan, x86, s390: update undef CONFIG_KASAN kasan, arm64: expand CONFIG_KASAN checks kasan, arm64: implement HW_TAGS runtime kasan, arm64: print report from tag fault handler kasan, mm: reset tags when accessing metadata kasan, arm64: enable CONFIG_KASAN_HW_TAGS kasan: add documentation for hardware tag-based mode Vincenzo Frascino <vincenzo.frascino@arm.com>: kselftest/arm64: check GCR_EL1 after context switch Andrey Konovalov <andreyknvl@google.com>: Patch series "kasan: boot parameters for hardware tag-based mode", v4: kasan: simplify quarantine_put call site kasan: rename get_alloc/free_info kasan: introduce set_alloc_info kasan, arm64: unpoison stack only with CONFIG_KASAN_STACK kasan: allow VMAP_STACK for HW_TAGS mode kasan: remove __kasan_unpoison_stack kasan: inline kasan_reset_tag for tag-based modes kasan: inline random_tag for HW_TAGS kasan: open-code kasan_unpoison_slab kasan: inline (un)poison_range and check_invalid_free kasan: add and integrate kasan boot parameters kasan, mm: check kasan_enabled in annotations kasan, mm: rename kasan_poison_kfree kasan: don't round_up too much kasan: simplify assign_tag and set_tag calls kasan: clarify comment in __kasan_kfree_large kasan: sanitize objects when metadata doesn't fit kasan, mm: allow cache merging with no metadata kasan: update documentation Documentation/dev-tools/kasan.rst | 274 ++- arch/Kconfig | 8 arch/arm64/Kconfig | 9 arch/arm64/Makefile | 7 arch/arm64/include/asm/assembler.h | 2 arch/arm64/include/asm/cache.h | 3 arch/arm64/include/asm/esr.h | 1 arch/arm64/include/asm/kasan.h | 17 arch/arm64/include/asm/memory.h | 15 arch/arm64/include/asm/mte-def.h | 16 arch/arm64/include/asm/mte-kasan.h | 67 arch/arm64/include/asm/mte.h | 22 arch/arm64/include/asm/processor.h | 2 arch/arm64/include/asm/string.h | 5 arch/arm64/include/asm/uaccess.h | 23 arch/arm64/kernel/asm-offsets.c | 3 
arch/arm64/kernel/cpufeature.c | 3 arch/arm64/kernel/entry.S | 41 arch/arm64/kernel/head.S | 2 arch/arm64/kernel/hibernate.c | 5 arch/arm64/kernel/image-vars.h | 2 arch/arm64/kernel/kaslr.c | 3 arch/arm64/kernel/module.c | 6 arch/arm64/kernel/mte.c | 124 + arch/arm64/kernel/setup.c | 2 arch/arm64/kernel/sleep.S | 2 arch/arm64/kernel/smp.c | 2 arch/arm64/lib/mte.S | 16 arch/arm64/mm/copypage.c | 9 arch/arm64/mm/fault.c | 59 arch/arm64/mm/kasan_init.c | 41 arch/arm64/mm/mteswap.c | 9 arch/arm64/mm/proc.S | 23 arch/arm64/mm/ptdump.c | 6 arch/s390/boot/string.c | 1 arch/x86/boot/compressed/misc.h | 1 arch/x86/kernel/acpi/wakeup_64.S | 2 include/linux/kasan-checks.h | 2 include/linux/kasan.h | 423 ++++- include/linux/mm.h | 24 include/linux/moduleloader.h | 3 include/linux/page-flags-layout.h | 2 include/linux/sched.h | 2 include/linux/string.h | 2 init/init_task.c | 2 kernel/fork.c | 4 lib/Kconfig.kasan | 71 lib/test_kasan.c | 2 lib/test_kasan_module.c | 2 mm/kasan/Makefile | 33 mm/kasan/common.c | 1006 +++----------- mm/kasan/generic.c | 72 - mm/kasan/generic_report.c | 13 mm/kasan/hw_tags.c | 276 +++ mm/kasan/init.c | 25 mm/kasan/kasan.h | 195 ++ mm/kasan/quarantine.c | 35 mm/kasan/report.c | 363 +---- mm/kasan/report_generic.c | 169 ++ mm/kasan/report_hw_tags.c | 44 mm/kasan/report_sw_tags.c | 22 mm/kasan/shadow.c | 528 +++++++ mm/kasan/sw_tags.c | 34 mm/kasan/tags.c | 7 mm/kasan/tags_report.c | 7 mm/mempool.c | 4 mm/page_alloc.c | 9 mm/page_poison.c | 2 mm/ptdump.c | 13 mm/slab_common.c | 5 mm/slub.c | 29 scripts/Makefile.lib | 2 tools/testing/selftests/arm64/mte/Makefile | 2 tools/testing/selftests/arm64/mte/check_gcr_el1_cswitch.c | 155 ++ 74 files changed, 2869 insertions(+), 1553 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming 2020-12-22 19:58 incoming Andrew Morton @ 2020-12-22 21:43 ` Linus Torvalds 0 siblings, 0 replies; 349+ messages in thread From: Linus Torvalds @ 2020-12-22 21:43 UTC (permalink / raw) To: Andrew Morton; +Cc: Linux-MM, mm-commits On Tue, Dec 22, 2020 at 11:58 AM Andrew Morton <akpm@linux-foundation.org> wrote: > > 60 patches, based on 8653b778e454a7708847aeafe689bce07aeeb94e. I see that you enabled renaming in the patches. Lovely. Can you also enable it in the diffstat? > 74 files changed, 2869 insertions(+), 1553 deletions(-) With -M in the diffstat, you should have seen 72 files changed, 2775 insertions(+), 1460 deletions(-) and if you add "--summary", you'll also see the rename part of the file create/delete summary: rename mm/kasan/{tags_report.c => report_sw_tags.c} (78%) which is often nice to see in addition to the line stats. Linus ^ permalink raw reply [flat|nested] 349+ messages in thread
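The file-count difference Linus points out (74 vs 72) is exactly what rename detection changes: without it, a renamed file is counted as a full delete plus a full add, while `-M` collapses the pair into a single entry with only the genuinely changed lines. A minimal sketch in a hypothetical repo (file names borrowed from the kasan rename, content is a placeholder):

```shell
# Contrast a diffstat with and without rename detection.
# Throwaway repo; a pure rename of a 20-line placeholder file.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
seq 20 > tags_report.c
git add tags_report.c
git -c user.name=t -c user.email=t@example.com commit -qm 'add file'
git mv tags_report.c report_sw_tags.c
git -c user.name=t -c user.email=t@example.com commit -qm 'rename file'
echo '== --no-renames: counted as delete + add =='
git diff --no-renames --stat HEAD~1 HEAD
echo '== -M --summary: one rename, zero changed lines =='
git diff -M --stat --summary HEAD~1 HEAD
```

To make this the default rather than a per-invocation flag, `git config diff.renames true` enables rename detection for diff output generally (it has been git's default in recent versions).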
* incoming @ 2020-12-18 22:00 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-12-18 22:00 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm 78 patches, based on a409ed156a90093a03fe6a93721ddf4c591eac87. Subsystems affected by this patch series: mm/memcg epoll mm/kasan mm/cleanups epoll Subsystem: mm/memcg Alex Shi <alex.shi@linux.alibaba.com>: Patch series "bail out early for memcg disable": mm/memcg: bail early from swap accounting if memcg disabled mm/memcg: warning on !memcg after readahead page charged Wei Yang <richard.weiyang@gmail.com>: mm/memcg: remove unused definitions Shakeel Butt <shakeelb@google.com>: mm, kvm: account kvm_vcpu_mmap to kmemcg Hui Su <sh_def@163.com>: mm/memcontrol:rewrite mem_cgroup_page_lruvec() Subsystem: epoll Soheil Hassas Yeganeh <soheil@google.com>: Patch series "simplify ep_poll": epoll: check for events when removing a timed out thread from the wait queue epoll: simplify signal handling epoll: pull fatal signal checks into ep_send_events() epoll: move eavail next to the list_empty_careful check epoll: simplify and optimize busy loop logic epoll: pull all code between fetch_events and send_event into the loop epoll: replace gotos with a proper loop epoll: eliminate unnecessary lock for zero timeout Subsystem: mm/kasan Andrey Konovalov <andreyknvl@google.com>: Patch series "kasan: add hardware tag-based mode for arm64", v11: kasan: drop unnecessary GPL text from comment headers kasan: KASAN_VMALLOC depends on KASAN_GENERIC kasan: group vmalloc code kasan: shadow declarations only for software modes kasan: rename (un)poison_shadow to (un)poison_range kasan: rename KASAN_SHADOW_* to KASAN_GRANULE_* kasan: only build init.c for software modes kasan: split out shadow.c from common.c kasan: define KASAN_MEMORY_PER_SHADOW_PAGE kasan: rename report and tags files kasan: don't duplicate config dependencies kasan: hide invalid free check implementation kasan: decode stack frame only 
with KASAN_STACK_ENABLE kasan, arm64: only init shadow for software modes kasan, arm64: only use kasan_depth for software modes kasan, arm64: move initialization message kasan, arm64: rename kasan_init_tags and mark as __init kasan: rename addr_has_shadow to addr_has_metadata kasan: rename print_shadow_for_address to print_memory_metadata kasan: rename SHADOW layout macros to META kasan: separate metadata_fetch_row for each mode kasan: introduce CONFIG_KASAN_HW_TAGS Vincenzo Frascino <vincenzo.frascino@arm.com>: arm64: enable armv8.5-a asm-arch option arm64: mte: add in-kernel MTE helpers arm64: mte: reset the page tag in page->flags arm64: mte: add in-kernel tag fault handler arm64: kasan: allow enabling in-kernel MTE arm64: mte: convert gcr_user into an exclude mask arm64: mte: switch GCR_EL1 in kernel entry and exit kasan, mm: untag page address in free_reserved_area Andrey Konovalov <andreyknvl@google.com>: arm64: kasan: align allocations for HW_TAGS arm64: kasan: add arch layer for memory tagging helpers kasan: define KASAN_GRANULE_SIZE for HW_TAGS kasan, x86, s390: update undef CONFIG_KASAN kasan, arm64: expand CONFIG_KASAN checks kasan, arm64: implement HW_TAGS runtime kasan, arm64: print report from tag fault handler kasan, mm: reset tags when accessing metadata kasan, arm64: enable CONFIG_KASAN_HW_TAGS kasan: add documentation for hardware tag-based mode Vincenzo Frascino <vincenzo.frascino@arm.com>: kselftest/arm64: check GCR_EL1 after context switch Andrey Konovalov <andreyknvl@google.com>: Patch series "kasan: boot parameters for hardware tag-based mode", v4: kasan: simplify quarantine_put call site kasan: rename get_alloc/free_info kasan: introduce set_alloc_info kasan, arm64: unpoison stack only with CONFIG_KASAN_STACK kasan: allow VMAP_STACK for HW_TAGS mode kasan: remove __kasan_unpoison_stack kasan: inline kasan_reset_tag for tag-based modes kasan: inline random_tag for HW_TAGS kasan: open-code kasan_unpoison_slab kasan: inline (un)poison_range and 
check_invalid_free kasan: add and integrate kasan boot parameters kasan, mm: check kasan_enabled in annotations kasan, mm: rename kasan_poison_kfree kasan: don't round_up too much kasan: simplify assign_tag and set_tag calls kasan: clarify comment in __kasan_kfree_large kasan: sanitize objects when metadata doesn't fit kasan, mm: allow cache merging with no metadata kasan: update documentation Subsystem: mm/cleanups Colin Ian King <colin.king@canonical.com>: mm/Kconfig: fix spelling mistake "whats" -> "what's" Subsystem: epoll Willem de Bruijn <willemb@google.com>: Patch series "add epoll_pwait2 syscall", v4: epoll: convert internal api to timespec64 epoll: add syscall epoll_pwait2 epoll: wire up syscall epoll_pwait2 selftests/filesystems: expand epoll with epoll_pwait2 Documentation/dev-tools/kasan.rst | 274 +- arch/Kconfig | 8 arch/alpha/kernel/syscalls/syscall.tbl | 1 arch/arm/tools/syscall.tbl | 1 arch/arm64/Kconfig | 9 arch/arm64/Makefile | 7 arch/arm64/include/asm/assembler.h | 2 arch/arm64/include/asm/cache.h | 3 arch/arm64/include/asm/esr.h | 1 arch/arm64/include/asm/kasan.h | 17 arch/arm64/include/asm/memory.h | 15 arch/arm64/include/asm/mte-def.h | 16 arch/arm64/include/asm/mte-kasan.h | 67 arch/arm64/include/asm/mte.h | 22 arch/arm64/include/asm/processor.h | 2 arch/arm64/include/asm/string.h | 5 arch/arm64/include/asm/uaccess.h | 23 arch/arm64/include/asm/unistd.h | 2 arch/arm64/include/asm/unistd32.h | 2 arch/arm64/kernel/asm-offsets.c | 3 arch/arm64/kernel/cpufeature.c | 3 arch/arm64/kernel/entry.S | 41 arch/arm64/kernel/head.S | 2 arch/arm64/kernel/hibernate.c | 5 arch/arm64/kernel/image-vars.h | 2 arch/arm64/kernel/kaslr.c | 3 arch/arm64/kernel/module.c | 6 arch/arm64/kernel/mte.c | 124 + arch/arm64/kernel/setup.c | 2 arch/arm64/kernel/sleep.S | 2 arch/arm64/kernel/smp.c | 2 arch/arm64/lib/mte.S | 16 arch/arm64/mm/copypage.c | 9 arch/arm64/mm/fault.c | 59 arch/arm64/mm/kasan_init.c | 41 arch/arm64/mm/mteswap.c | 9 arch/arm64/mm/proc.S | 23 
arch/arm64/mm/ptdump.c | 6 arch/ia64/kernel/syscalls/syscall.tbl | 1 arch/m68k/kernel/syscalls/syscall.tbl | 1 arch/microblaze/kernel/syscalls/syscall.tbl | 1 arch/mips/kernel/syscalls/syscall_n32.tbl | 1 arch/mips/kernel/syscalls/syscall_n64.tbl | 1 arch/mips/kernel/syscalls/syscall_o32.tbl | 1 arch/parisc/kernel/syscalls/syscall.tbl | 1 arch/powerpc/kernel/syscalls/syscall.tbl | 1 arch/s390/boot/string.c | 1 arch/s390/kernel/syscalls/syscall.tbl | 1 arch/sh/kernel/syscalls/syscall.tbl | 1 arch/sparc/kernel/syscalls/syscall.tbl | 1 arch/x86/boot/compressed/misc.h | 1 arch/x86/entry/syscalls/syscall_32.tbl | 1 arch/x86/entry/syscalls/syscall_64.tbl | 1 arch/x86/kernel/acpi/wakeup_64.S | 2 arch/x86/kvm/x86.c | 2 arch/xtensa/kernel/syscalls/syscall.tbl | 1 fs/eventpoll.c | 359 ++- include/linux/compat.h | 6 include/linux/kasan-checks.h | 2 include/linux/kasan.h | 423 ++-- include/linux/memcontrol.h | 137 - include/linux/mm.h | 24 include/linux/mmdebug.h | 13 include/linux/moduleloader.h | 3 include/linux/page-flags-layout.h | 2 include/linux/sched.h | 2 include/linux/string.h | 2 include/linux/syscalls.h | 5 include/uapi/asm-generic/unistd.h | 4 init/init_task.c | 2 kernel/fork.c | 4 kernel/sys_ni.c | 2 lib/Kconfig.kasan | 71 lib/test_kasan.c | 2 lib/test_kasan_module.c | 2 mm/Kconfig | 2 mm/kasan/Makefile | 33 mm/kasan/common.c | 1006 ++-------- mm/kasan/generic.c | 72 mm/kasan/generic_report.c | 13 mm/kasan/hw_tags.c | 294 ++ mm/kasan/init.c | 25 mm/kasan/kasan.h | 204 +- mm/kasan/quarantine.c | 35 mm/kasan/report.c | 363 +-- mm/kasan/report_generic.c | 169 + mm/kasan/report_hw_tags.c | 44 mm/kasan/report_sw_tags.c | 22 mm/kasan/shadow.c | 541 +++++ mm/kasan/sw_tags.c | 34 mm/kasan/tags.c | 7 mm/kasan/tags_report.c | 7 mm/memcontrol.c | 53 mm/mempool.c | 4 mm/page_alloc.c | 9 mm/page_poison.c | 2 mm/ptdump.c | 13 mm/slab_common.c | 5 mm/slub.c | 29 scripts/Makefile.lib | 2 tools/testing/selftests/arm64/mte/Makefile | 2 
tools/testing/selftests/arm64/mte/check_gcr_el1_cswitch.c | 155 + tools/testing/selftests/filesystems/epoll/epoll_wakeup_test.c | 72 virt/kvm/coalesced_mmio.c | 2 virt/kvm/kvm_main.c | 2 105 files changed, 3268 insertions(+), 1873 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2020-12-16 4:41 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-12-16 4:41 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm - lots of little subsystems - a few post-linux-next MM material. Most of this awaits more merging of other trees. 95 patches, based on 489e9fea66f31086f85d9a18e61e4791d94a56a4. Subsystems affected by this patch series: mm/swap mm/memory-hotplug alpha procfs misc core-kernel bitmap lib lz4 bitops checkpatch nilfs kdump rapidio gcov bfs relay resource ubsan reboot fault-injection lzo apparmor mm/pagemap mm/cleanups mm/gup Subsystem: mm/swap Zhaoyang Huang <huangzhaoyang@gmail.com>: mm: fix a race on nr_swap_pages Subsystem: mm/memory-hotplug Laurent Dufour <ldufour@linux.ibm.com>: mm/memory_hotplug: quieting offline operation Subsystem: alpha Thomas Gleixner <tglx@linutronix.de>: alpha: replace bogus in_interrupt() Subsystem: procfs Randy Dunlap <rdunlap@infradead.org>: procfs: delete duplicated words + other fixes Anand K Mistry <amistry@google.com>: proc: provide details on indirect branch speculation Alexey Dobriyan <adobriyan@gmail.com>: proc: fix lookup in /proc/net subdirectories after setns(2) Hui Su <sh_def@163.com>: fs/proc: make pde_get() return nothing Subsystem: misc Christophe Leroy <christophe.leroy@csgroup.eu>: asm-generic: force inlining of get_order() to work around gcc10 poor decision Andy Shevchenko <andriy.shevchenko@linux.intel.com>: kernel.h: split out mathematical helpers Subsystem: core-kernel Hui Su <sh_def@163.com>: kernel/acct.c: use #elif instead of #end and #elif Subsystem: bitmap Andy Shevchenko <andriy.shevchenko@linux.intel.com>: include/linux/bitmap.h: convert bitmap_empty() / bitmap_full() to return boolean "Ma, Jianpeng" <jianpeng.ma@intel.com>: bitmap: remove unused function declaration Subsystem: lib Geert Uytterhoeven <geert@linux-m68k.org>: lib/test_free_pages.c: add basic progress indicators "Gustavo A. R. 
Silva" <gustavoars@kernel.org>: Patch series "] lib/stackdepot.c: Replace one-element array with flexible-array member": lib/stackdepot.c: replace one-element array with flexible-array member lib/stackdepot.c: use flex_array_size() helper in memcpy() lib/stackdepot.c: use array_size() helper in jhash2() Sebastian Andrzej Siewior <bigeasy@linutronix.de>: lib/test_lockup.c: minimum fix to get it compiled on PREEMPT_RT Andy Shevchenko <andriy.shevchenko@linux.intel.com>: lib/list_kunit: follow new file name convention for KUnit tests lib/linear_ranges_kunit: follow new file name convention for KUnit tests lib/bits_kunit: follow new file name convention for KUnit tests lib/cmdline: fix get_option() for strings starting with hyphen lib/cmdline: allow NULL to be an output for get_option() lib/cmdline_kunit: add a new test suite for cmdline API Jakub Jelinek <jakub@redhat.com>: ilog2: improve ilog2 for constant arguments Nick Desaulniers <ndesaulniers@google.com>: lib/string: remove unnecessary #undefs Daniel Axtens <dja@axtens.net>: Patch series "Fortify strscpy()", v7: lib: string.h: detect intra-object overflow in fortified string functions lkdtm: tests for FORTIFY_SOURCE Francis Laniel <laniel_francis@privacyrequired.com>: string.h: add FORTIFY coverage for strscpy() drivers/misc/lkdtm: add new file in LKDTM to test fortified strscpy drivers/misc/lkdtm/lkdtm.h: correct wrong filenames in comment Alexey Dobriyan <adobriyan@gmail.com>: lib: cleanup kstrto*() usage Subsystem: lz4 Gao Xiang <hsiangkao@redhat.com>: lib/lz4: explicitly support in-place decompression Subsystem: bitops Syed Nayyar Waris <syednwaris@gmail.com>: Patch series "Introduce the for_each_set_clump macro", v12: bitops: introduce the for_each_set_clump macro lib/test_bitmap.c: add for_each_set_clump test cases gpio: thunderx: utilize for_each_set_clump macro gpio: xilinx: utilize generic bitmap_get_value and _set_value Subsystem: checkpatch Dwaipayan Ray <dwaipayanray1@gmail.com>: checkpatch: add new 
exception to repeated word check Aditya Srivastava <yashsri421@gmail.com>: checkpatch: fix false positives in REPEATED_WORD warning Łukasz Stelmach <l.stelmach@samsung.com>: checkpatch: ignore generated CamelCase defines and enum values Joe Perches <joe@perches.com>: checkpatch: prefer static const declarations checkpatch: allow --fix removal of unnecessary break statements Dwaipayan Ray <dwaipayanray1@gmail.com>: checkpatch: extend attributes check to handle more patterns Tom Rix <trix@redhat.com>: checkpatch: add a fixer for missing newline at eof Joe Perches <joe@perches.com>: checkpatch: update __attribute__((section("name"))) quote removal Aditya Srivastava <yashsri421@gmail.com>: checkpatch: add fix option for GERRIT_CHANGE_ID Joe Perches <joe@perches.com>: checkpatch: add __alias and __weak to suggested __attribute__ conversions Dwaipayan Ray <dwaipayanray1@gmail.com>: checkpatch: improve email parsing checkpatch: fix spelling errors and remove repeated word Aditya Srivastava <yashsri421@gmail.com>: checkpatch: avoid COMMIT_LOG_LONG_LINE warning for signature tags Dwaipayan Ray <dwaipayanray1@gmail.com>: checkpatch: fix unescaped left brace Aditya Srivastava <yashsri421@gmail.com>: checkpatch: add fix option for ASSIGNMENT_CONTINUATIONS checkpatch: add fix option for LOGICAL_CONTINUATIONS checkpatch: add fix and improve warning msg for non-standard signature Dwaipayan Ray <dwaipayanray1@gmail.com>: checkpatch: add warning for unnecessary use of %h[xudi] and %hh[xudi] checkpatch: add warning for lines starting with a '#' in commit log checkpatch: fix TYPO_SPELLING check for words with apostrophe Joe Perches <joe@perches.com>: checkpatch: add printk_once and printk_ratelimit to prefer pr_<level> warning Subsystem: nilfs Alex Shi <alex.shi@linux.alibaba.com>: fs/nilfs2: remove some unused macros to tame gcc Subsystem: kdump Alexander Egorenkov <egorenar@linux.ibm.com>: kdump: append uts_namespace.name offset to VMCOREINFO Subsystem: rapidio Sebastian Andrzej 
Siewior <bigeasy@linutronix.de>: rapidio: remove unused rio_get_asm() and rio_get_device() Subsystem: gcov Nick Desaulniers <ndesaulniers@google.com>: gcov: remove support for GCC < 4.9 Alex Shi <alex.shi@linux.alibaba.com>: gcov: fix kernel-doc markup issue Subsystem: bfs Randy Dunlap <rdunlap@infradead.org>: bfs: don't use WARNING: string when it's just info. Subsystem: relay Jani Nikula <jani.nikula@intel.com>: Patch series "relay: cleanup and const callbacks", v2: relay: remove unused buf_mapped and buf_unmapped callbacks relay: require non-NULL callbacks in relay_open() relay: make create_buf_file and remove_buf_file callbacks mandatory relay: allow the use of const callback structs drm/i915: make relay callbacks const ath10k: make relay callbacks const ath11k: make relay callbacks const ath9k: make relay callbacks const blktrace: make relay callbacks const Subsystem: resource Mauro Carvalho Chehab <mchehab+huawei@kernel.org>: kernel/resource.c: fix kernel-doc markups Subsystem: ubsan Kees Cook <keescook@chromium.org>: Patch series "Clean up UBSAN Makefile", v2: ubsan: remove redundant -Wno-maybe-uninitialized ubsan: move cc-option tests into Kconfig ubsan: disable object-size sanitizer under GCC ubsan: disable UBSAN_TRAP for all*config ubsan: enable for all*config builds ubsan: remove UBSAN_MISC in favor of individual options ubsan: expand tests and reporting Dmitry Vyukov <dvyukov@google.com>: kcov: don't instrument with UBSAN Zou Wei <zou_wei@huawei.com>: lib/ubsan.c: mark type_check_kinds with static keyword Subsystem: reboot Matteo Croce <mcroce@microsoft.com>: reboot: refactor and comment the cpu selection code reboot: allow to specify reboot mode via sysfs reboot: remove cf9_safe from allowed types and rename cf9_force Patch series "reboot: sysfs improvements": reboot: allow to override reboot type if quirks are found reboot: hide from sysfs not applicable settings Subsystem: fault-injection Barnabás Pőcze <pobrn@protonmail.com>: fault-injection: handle 
EI_ETYPE_TRUE Subsystem: lzo Jason Yan <yanaijie@huawei.com>: lib/lzo/lzo1x_compress.c: make lzogeneric1x_1_compress() static Subsystem: apparmor Andy Shevchenko <andriy.shevchenko@linux.intel.com>: apparmor: remove duplicate macro list_entry_is_head() Subsystem: mm/pagemap Christoph Hellwig <hch@lst.de>: Patch series "simplify follow_pte a bit": mm: unexport follow_pte_pmd mm: simplify follow_pte{,pmd} Subsystem: mm/cleanups Haitao Shi <shihaitao1@huawei.com>: mm: fix some spelling mistakes in comments Subsystem: mm/gup Jann Horn <jannh@google.com>: mmap locking API: don't check locking if the mm isn't live yet mm/gup: assert that the mmap lock is held in __get_user_pages() Documentation/ABI/testing/sysfs-kernel-reboot | 32 Documentation/admin-guide/kdump/vmcoreinfo.rst | 6 Documentation/dev-tools/ubsan.rst | 1 Documentation/filesystems/proc.rst | 2 MAINTAINERS | 5 arch/alpha/kernel/process.c | 2 arch/powerpc/kernel/vmlinux.lds.S | 4 arch/s390/pci/pci_mmio.c | 4 drivers/gpio/gpio-thunderx.c | 11 drivers/gpio/gpio-xilinx.c | 61 - drivers/gpu/drm/i915/gt/uc/intel_guc_log.c | 2 drivers/misc/lkdtm/Makefile | 1 drivers/misc/lkdtm/bugs.c | 50 + drivers/misc/lkdtm/core.c | 3 drivers/misc/lkdtm/fortify.c | 82 ++ drivers/misc/lkdtm/lkdtm.h | 19 drivers/net/wireless/ath/ath10k/spectral.c | 2 drivers/net/wireless/ath/ath11k/spectral.c | 2 drivers/net/wireless/ath/ath9k/common-spectral.c | 2 drivers/rapidio/rio.c | 81 -- fs/bfs/inode.c | 2 fs/dax.c | 9 fs/exec.c | 8 fs/nfs/callback_proc.c | 5 fs/nilfs2/segment.c | 5 fs/proc/array.c | 28 fs/proc/base.c | 2 fs/proc/generic.c | 24 fs/proc/internal.h | 10 fs/proc/proc_net.c | 20 include/asm-generic/bitops/find.h | 19 include/asm-generic/getorder.h | 2 include/linux/bitmap.h | 67 +- include/linux/bitops.h | 24 include/linux/dcache.h | 1 include/linux/iommu-helper.h | 4 include/linux/kernel.h | 173 ----- include/linux/log2.h | 3 include/linux/math.h | 177 +++++ include/linux/mm.h | 6 include/linux/mm_types.h | 10 
include/linux/mmap_lock.h | 16 include/linux/proc_fs.h | 8 include/linux/rcu_node_tree.h | 2 include/linux/relay.h | 29 include/linux/rio_drv.h | 3 include/linux/string.h | 75 +- include/linux/units.h | 2 kernel/Makefile | 3 kernel/acct.c | 7 kernel/crash_core.c | 1 kernel/fail_function.c | 6 kernel/gcov/gcc_4_7.c | 10 kernel/reboot.c | 308 ++++++++- kernel/relay.c | 111 --- kernel/resource.c | 24 kernel/trace/blktrace.c | 2 lib/Kconfig.debug | 11 lib/Kconfig.ubsan | 154 +++- lib/Makefile | 7 lib/bits_kunit.c | 75 ++ lib/cmdline.c | 20 lib/cmdline_kunit.c | 100 +++ lib/errname.c | 1 lib/error-inject.c | 2 lib/errseq.c | 1 lib/find_bit.c | 17 lib/linear_ranges_kunit.c | 228 +++++++ lib/list-test.c | 748 ----------------------- lib/list_kunit.c | 748 +++++++++++++++++++++++ lib/lz4/lz4_decompress.c | 6 lib/lz4/lz4defs.h | 1 lib/lzo/lzo1x_compress.c | 2 lib/math/div64.c | 4 lib/math/int_pow.c | 2 lib/math/int_sqrt.c | 3 lib/math/reciprocal_div.c | 9 lib/stackdepot.c | 11 lib/string.c | 4 lib/test_bitmap.c | 143 ++++ lib/test_bits.c | 75 -- lib/test_firmware.c | 9 lib/test_free_pages.c | 5 lib/test_kmod.c | 26 lib/test_linear_ranges.c | 228 ------- lib/test_lockup.c | 16 lib/test_ubsan.c | 74 ++ lib/ubsan.c | 2 mm/filemap.c | 2 mm/gup.c | 2 mm/huge_memory.c | 2 mm/khugepaged.c | 2 mm/memblock.c | 2 mm/memory.c | 36 - mm/memory_hotplug.c | 2 mm/migrate.c | 2 mm/page_ext.c | 2 mm/swapfile.c | 11 scripts/Makefile.ubsan | 49 - scripts/checkpatch.pl | 495 +++++++++++---- security/apparmor/apparmorfs.c | 3 tools/testing/selftests/lkdtm/tests.txt | 1 102 files changed, 3022 insertions(+), 1899 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2020-12-15 20:32 Andrew Morton 2020-12-15 21:00 ` incoming Linus Torvalds 2020-12-15 22:48 ` incoming Linus Torvalds 0 siblings, 2 replies; 349+ messages in thread From: Andrew Morton @ 2020-12-15 20:32 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits - more MM work: a memcg scalability improvement 19 patches, based on 148842c98a24e508aecb929718818fbf4c2a6ff3. Subsystems affected by this patch series: Alex Shi <alex.shi@linux.alibaba.com>: Patch series "per memcg lru lock", v21: mm/thp: move lru_add_page_tail() to huge_memory.c mm/thp: use head for head page in lru_add_page_tail() mm/thp: simplify lru_add_page_tail() mm/thp: narrow lru locking mm/vmscan: remove unnecessary lruvec adding mm/rmap: stop store reordering issue on page->mapping Hugh Dickins <hughd@google.com>: mm: page_idle_get_page() does not need lru_lock Alex Shi <alex.shi@linux.alibaba.com>: mm/memcg: add debug checking in lock_page_memcg mm/swap.c: fold vm event PGROTATED into pagevec_move_tail_fn mm/lru: move lock into lru_note_cost mm/vmscan: remove lruvec reget in move_pages_to_lru mm/mlock: remove lru_lock on TestClearPageMlocked mm/mlock: remove __munlock_isolate_lru_page() mm/lru: introduce TestClearPageLRU() mm/compaction: do page isolation first in compaction mm/swap.c: serialize memcg changes in pagevec_lru_move_fn mm/lru: replace pgdat lru_lock with lruvec lock Alexander Duyck <alexander.h.duyck@linux.intel.com>: mm/lru: introduce relock_page_lruvec() Hugh Dickins <hughd@google.com>: mm/lru: revise the comments of lru_lock Documentation/admin-guide/cgroup-v1/memcg_test.rst | 15 - Documentation/admin-guide/cgroup-v1/memory.rst | 23 - Documentation/trace/events-kmem.rst | 2 Documentation/vm/unevictable-lru.rst | 22 - include/linux/memcontrol.h | 110 +++++++ include/linux/mm_types.h | 2 include/linux/mmzone.h | 6 include/linux/page-flags.h | 1 include/linux/swap.h | 4 mm/compaction.c | 98 ++++--- mm/filemap.c | 4 mm/huge_memory.c | 109 ++++--- mm/memcontrol.c
| 84 +++++- mm/mlock.c | 93 ++---- mm/mmzone.c | 1 mm/page_alloc.c | 1 mm/page_idle.c | 4 mm/rmap.c | 12 mm/swap.c | 292 ++++++++------------- mm/vmscan.c | 239 ++++++++--------- mm/workingset.c | 2 21 files changed, 644 insertions(+), 480 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming 2020-12-15 20:32 incoming Andrew Morton @ 2020-12-15 21:00 ` Linus Torvalds 2020-12-15 22:48 ` incoming Linus Torvalds 1 sibling, 0 replies; 349+ messages in thread From: Linus Torvalds @ 2020-12-15 21:00 UTC (permalink / raw) To: Andrew Morton; +Cc: Linux-MM, mm-commits On Tue, Dec 15, 2020 at 12:32 PM Andrew Morton <akpm@linux-foundation.org> wrote: > > - more MM work: a memcg scalability improvement > > 19 patches, based on 148842c98a24e508aecb929718818fbf4c2a6ff3. I'm not seeing patch 10/19 at all. And patch 19/19 is corrupted and has an attachment with a '^P' character in it. I could fix it up, but with the missing patch in the middle I'm not going to even try. 'b4' is also very unhappy about that patch 19/19. I don't know what went wrong, but I'll ignore this send - please re-send the series at your leisure, ok? Linus ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming 2020-12-15 20:32 incoming Andrew Morton 2020-12-15 21:00 ` incoming Linus Torvalds @ 2020-12-15 22:48 ` Linus Torvalds 2020-12-15 22:49 ` incoming Linus Torvalds 1 sibling, 1 reply; 349+ messages in thread From: Linus Torvalds @ 2020-12-15 22:48 UTC (permalink / raw) To: Andrew Morton; +Cc: Linux-MM, mm-commits On Tue, Dec 15, 2020 at 12:32 PM Andrew Morton <akpm@linux-foundation.org> wrote: > > - more MM work: a memcg scalability improvement > > 19 patches, based on 148842c98a24e508aecb929718818fbf4c2a6ff3. With your re-send, I get all patches, but they don't actually apply cleanly. Is that base correct? I get error: patch failed: mm/huge_memory.c:2750 error: mm/huge_memory.c: patch does not apply Patch failed at 0004 mm/thp: narrow lru locking for that patch "[patch 04/19] mm/thp: narrow lru locking", and that's definitely true: the patch fragment has @@ -2750,7 +2751,7 @@ int split_huge_page_to_list(struct page __dec_lruvec_page_state(head, NR_FILE_THPS); } - __split_huge_page(page, list, end, flags); + __split_huge_page(page, list, end); ret = 0; } else { if (IS_ENABLED(CONFIG_DEBUG_VM) && mapcount) { but that __dec_lruvec_page_state() conversion was done by your previous commit series. So I have the feeling that what you actually mean by "base" isn't actually really the base for that series at all.. I will try to apply it on top of my merge of your previous series instead. Linus ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming 2020-12-15 22:48 ` incoming Linus Torvalds @ 2020-12-15 22:49 ` Linus Torvalds 2020-12-15 22:55 ` incoming Andrew Morton 0 siblings, 1 reply; 349+ messages in thread From: Linus Torvalds @ 2020-12-15 22:49 UTC (permalink / raw) To: Andrew Morton; +Cc: Linux-MM, mm-commits On Tue, Dec 15, 2020 at 2:48 PM Linus Torvalds <torvalds@linux-foundation.org> wrote: > > I will try to apply it on top of my merge of your previous series instead. Yes, then it applies cleanly. So apparently we just have different concepts of what really constitutes a "base" for applying your series. Linus ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming 2020-12-15 22:49 ` incoming Linus Torvalds @ 2020-12-15 22:55 ` Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-12-15 22:55 UTC (permalink / raw) To: Linus Torvalds; +Cc: Linux-MM, mm-commits On Tue, 15 Dec 2020 14:49:24 -0800 Linus Torvalds <torvalds@linux-foundation.org> wrote: > On Tue, Dec 15, 2020 at 2:48 PM Linus Torvalds > <torvalds@linux-foundation.org> wrote: > > > > I will try to apply it on top of my merge of your previous series instead. > > Yes, then it applies cleanly. So apparently we just have different > concepts of what really constitutes a "base" for applying your series. > oop, sorry, yes, the "based on" thing was wrong because I had two series in flight simultaneously. I've never tried that before.. ^ permalink raw reply [flat|nested] 349+ messages in thread
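The base-commit mix-up in the exchange above can be reproduced in a throwaway repository: a patch generated on top of one in-flight series applies only to that series, not to the older commit it is nominally "based on". The sketch below is illustrative only — the repository layout, file names, and commit messages are all made up:

```shell
# Sketch: a patch built on top of an in-flight series fails to apply
# on the advertised "base" commit, but applies on the series itself.
set -e
work=$(mktemp -d)
cd "$work"
git init -q repo && cd repo
git config user.email mm@example.com
git config user.name "mm-commits"

echo "old hunk context" > mm.c
git add mm.c && git commit -qm "base"
base=$(git rev-parse HEAD)              # what "based on" claims

echo "context changed by series 1" > mm.c
git commit -qam "series 1: prior in-flight series"
series1=$(git rev-parse HEAD)           # the real base

echo "final state" > mm.c
git commit -qam "series 2: built on series 1"
git format-patch -q -1 -o "$work/patches"

git checkout -q "$base"                 # claimed base: context mismatch
git apply --check "$work"/patches/*.patch 2>/dev/null \
    && echo "applies on base" || echo "fails on base"

git checkout -q "$series1"              # actual base: clean apply
git apply --check "$work"/patches/*.patch \
    && echo "applies on series 1" || echo "fails on series 1"
```

With `git format-patch --base=<commit>`, the true base is recorded in the patch itself as a `base-commit:` trailer, which lets tooling such as b4 flag exactly this situation.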
* incoming @ 2020-12-15 3:02 Andrew Morton 2020-12-15 3:25 ` incoming Linus Torvalds 0 siblings, 1 reply; 349+ messages in thread From: Andrew Morton @ 2020-12-15 3:02 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm - a few random little subsystems - almost all of the MM patches which are staged ahead of linux-next material. I'll trickle the post-linux-next work in as the dependents get merged up. 200 patches, based on 2c85ebc57b3e1817b6ce1a6b703928e113a90442. Subsystems affected by this patch series: kthread kbuild ide ntfs ocfs2 arch mm/slab-generic mm/slab mm/slub mm/dax mm/debug mm/pagecache mm/gup mm/swap mm/shmem mm/memcg mm/pagemap mm/mremap mm/hmm mm/vmalloc mm/documentation mm/kasan mm/pagealloc mm/memory-failure mm/hugetlb mm/vmscan mm/z3fold mm/compaction mm/oom-kill mm/migration mm/cma mm/page-poison mm/userfaultfd mm/zswap mm/zsmalloc mm/uaccess mm/zram mm/cleanups Subsystem: kthread Rob Clark <robdclark@chromium.org>: kthread: add kthread_work tracepoints Petr Mladek <pmladek@suse.com>: kthread_worker: document CPU hotplug handling Subsystem: kbuild Petr Vorel <petr.vorel@gmail.com>: uapi: move constants from <linux/kernel.h> to <linux/const.h> Subsystem: ide Sebastian Andrzej Siewior <bigeasy@linutronix.de>: ide/falcon: remove in_interrupt() usage ide: remove BUG_ON(in_interrupt() || irqs_disabled()) from ide_unregister() Subsystem: ntfs Alex Shi <alex.shi@linux.alibaba.com>: fs/ntfs: remove unused varibles fs/ntfs: remove unused variable attr_len Subsystem: ocfs2 Tom Rix <trix@redhat.com>: fs/ocfs2/cluster/tcp.c: remove unneeded break Mauricio Faria de Oliveira <mfo@canonical.com>: ocfs2: ratelimit the 'max lookup times reached' notice Subsystem: arch Colin Ian King <colin.king@canonical.com>: arch/Kconfig: fix spelling mistakes Subsystem: mm/slab-generic Hui Su <sh_def@163.com>: mm/slab_common.c: use list_for_each_entry in dump_unreclaimable_slab() Bartosz Golaszewski <bgolaszewski@baylibre.com>: Patch series "slab: provide and
use krealloc_array()", v3: mm: slab: clarify krealloc()'s behavior with __GFP_ZERO mm: slab: provide krealloc_array() ALSA: pcm: use krealloc_array() vhost: vringh: use krealloc_array() pinctrl: use krealloc_array() edac: ghes: use krealloc_array() drm: atomic: use krealloc_array() hwtracing: intel: use krealloc_array() dma-buf: use krealloc_array() Vlastimil Babka <vbabka@suse.cz>: mm, slab, slub: clear the slab_cache field when freeing page Subsystem: mm/slab Alexander Popov <alex.popov@linux.com>: mm/slab: rerform init_on_free earlier Subsystem: mm/slub Vlastimil Babka <vbabka@suse.cz>: mm, slub: use kmem_cache_debug_flags() in deactivate_slab() Bharata B Rao <bharata@linux.ibm.com>: mm/slub: let number of online CPUs determine the slub page order Subsystem: mm/dax Dan Williams <dan.j.williams@intel.com>: device-dax/kmem: use struct_size() Subsystem: mm/debug Zhenhua Huang <zhenhuah@codeaurora.org>: mm: fix page_owner initializing issue for arm32 Liam Mark <lmark@codeaurora.org>: mm/page_owner: record timestamp and pid Subsystem: mm/pagecache Kent Overstreet <kent.overstreet@gmail.com>: Patch series "generic_file_buffered_read() improvements", v2: mm/filemap/c: break generic_file_buffered_read up into multiple functions mm/filemap.c: generic_file_buffered_read() now uses find_get_pages_contig Alex Shi <alex.shi@linux.alibaba.com>: mm/truncate: add parameter explanation for invalidate_mapping_pagevec Hailong Liu <carver4lio@163.com>: mm/filemap.c: remove else after a return Subsystem: mm/gup John Hubbard <jhubbard@nvidia.com>: Patch series "selftests/vm: gup_test, hmm-tests, assorted improvements", v3: mm/gup_benchmark: rename to mm/gup_test selftests/vm: use a common gup_test.h selftests/vm: rename run_vmtests --> run_vmtests.sh selftests/vm: minor cleanup: Makefile and gup_test.c selftests/vm: only some gup_test items are really benchmarks selftests/vm: gup_test: introduce the dump_pages() sub-test selftests/vm: run_vmtests.sh: update and clean up gup_test 
invocation selftests/vm: hmm-tests: remove the libhugetlbfs dependency selftests/vm: 2x speedup for run_vmtests.sh Barry Song <song.bao.hua@hisilicon.com>: mm/gup_test.c: mark gup_test_init as __init function mm/gup_test: GUP_TEST depends on DEBUG_FS Jason Gunthorpe <jgg@nvidia.com>: Patch series "Add a seqcount between gup_fast and copy_page_range()", v4: mm/gup: reorganize internal_get_user_pages_fast() mm/gup: prevent gup_fast from racing with COW during fork mm/gup: remove the vma allocation from gup_longterm_locked() mm/gup: combine put_compound_head() and unpin_user_page() Subsystem: mm/swap Ralph Campbell <rcampbell@nvidia.com>: mm: handle zone device pages in release_pages() Miaohe Lin <linmiaohe@huawei.com>: mm/swapfile.c: use helper function swap_count() in add_swap_count_continuation() mm/swap_state: skip meaningless swap cache readahead when ra_info.win == 0 mm/swapfile.c: remove unnecessary out label in __swap_duplicate() mm/swapfile.c: use memset to fill the swap_map with SWAP_HAS_CACHE Jeff Layton <jlayton@kernel.org>: mm: remove pagevec_lookup_range_nr_tag() Subsystem: mm/shmem Hui Su <sh_def@163.com>: mm/shmem.c: make shmem_mapping() inline Randy Dunlap <rdunlap@infradead.org>: tmpfs: fix Documentation nits Subsystem: mm/memcg Johannes Weiner <hannes@cmpxchg.org>: mm: memcontrol: add file_thp, shmem_thp to memory.stat Muchun Song <songmuchun@bytedance.com>: mm: memcontrol: remove unused mod_memcg_obj_state() Miaohe Lin <linmiaohe@huawei.com>: mm: memcontrol: eliminate redundant check in __mem_cgroup_insert_exceeded() Muchun Song <songmuchun@bytedance.com>: mm: memcg/slab: fix return of child memcg objcg for root memcg mm: memcg/slab: fix use after free in obj_cgroup_charge Shakeel Butt <shakeelb@google.com>: mm/rmap: always do TTU_IGNORE_ACCESS Alex Shi <alex.shi@linux.alibaba.com>: mm/memcg: update page struct member in comments Roman Gushchin <guro@fb.com>: mm: memcg: fix obsolete code comments Patch series "mm: memcg: deprecate cgroup v1 
non-hierarchical mode", v1: mm: memcg: deprecate the non-hierarchical mode docs: cgroup-v1: reflect the deprecation of the non-hierarchical mode cgroup: remove obsoleted broken_hierarchy and warned_broken_hierarchy Hui Su <sh_def@163.com>: mm/page_counter: use page_counter_read in page_counter_set_max Lukas Bulwahn <lukas.bulwahn@gmail.com>: mm: memcg: remove obsolete memcg_has_children() Muchun Song <songmuchun@bytedance.com>: mm: memcg/slab: rename *_lruvec_slab_state to *_lruvec_kmem_state Kaixu Xia <kaixuxia@tencent.com>: mm: memcontrol: sssign boolean values to a bool variable Alex Shi <alex.shi@linux.alibaba.com>: mm/memcg: remove incorrect comment Shakeel Butt <shakeelb@google.com>: Patch series "memcg: add pagetable comsumption to memory.stat", v2: mm: move lruvec stats update functions to vmstat.h mm: memcontrol: account pagetables per node Subsystem: mm/pagemap Dan Williams <dan.j.williams@intel.com>: xen/unpopulated-alloc: consolidate pgmap manipulation Kalesh Singh <kaleshsingh@google.com>: Patch series "Speed up mremap on large regions", v4: kselftests: vm: add mremap tests mm: speedup mremap on 1GB or larger regions arm64: mremap speedup - enable HAVE_MOVE_PUD x86: mremap speedup - Enable HAVE_MOVE_PUD John Hubbard <jhubbard@nvidia.com>: mm: cleanup: remove unused tsk arg from __access_remote_vm Alex Shi <alex.shi@linux.alibaba.com>: mm/mapping_dirty_helpers: enhance the kernel-doc markups mm/page_vma_mapped.c: add colon to fix kernel-doc markups error for check_pte Axel Rasmussen <axelrasmussen@google.com>: mm: mmap_lock: add tracepoints around lock acquisition "Matthew Wilcox (Oracle)" <willy@infradead.org>: sparc: fix handling of page table constructor failure mm: move free_unref_page to mm/internal.h Subsystem: mm/mremap Dmitry Safonov <dima@arista.com>: Patch series "mremap: move_vma() fixes": mm/mremap: account memory on do_munmap() failure mm/mremap: for MREMAP_DONTUNMAP check security_vm_enough_memory_mm() mremap: don't allow MREMAP_DONTUNMAP 
on special_mappings and aio vm_ops: rename .split() callback to .may_split() mremap: check if it's possible to split original vma mm: forbid splitting special mappings Subsystem: mm/hmm Daniel Vetter <daniel.vetter@ffwll.ch>: mm: track mmu notifiers in fs_reclaim_acquire/release mm: extract might_alloc() debug check locking/selftests: add testcases for fs_reclaim Subsystem: mm/vmalloc Andrew Morton <akpm@linux-foundation.org>: mm/vmalloc.c:__vmalloc_area_node(): avoid 32-bit overflow "Uladzislau Rezki (Sony)" <urezki@gmail.com>: mm/vmalloc: use free_vm_area() if an allocation fails mm/vmalloc: rework the drain logic Alex Shi <alex.shi@linux.alibaba.com>: mm/vmalloc: add 'align' parameter explanation for pvm_determine_end_from_reverse Baolin Wang <baolin.wang@linux.alibaba.com>: mm/vmalloc.c: remove unnecessary return statement Waiman Long <longman@redhat.com>: mm/vmalloc: Fix unlock order in s_stop() Subsystem: mm/documentation Alex Shi <alex.shi@linux.alibaba.com>: docs/vm: remove unused 3 items explanation for /proc/vmstat Subsystem: mm/kasan Vincenzo Frascino <vincenzo.frascino@arm.com>: mm/vmalloc.c: fix kasan shadow poisoning size Walter Wu <walter-zh.wu@mediatek.com>: Patch series "kasan: add workqueue stack for generic KASAN", v5: workqueue: kasan: record workqueue stack kasan: print workqueue stack lib/test_kasan.c: add workqueue test case kasan: update documentation for generic kasan Marco Elver <elver@google.com>: lkdtm: disable KASAN for rodata.o Subsystem: mm/pagealloc Mike Rapoport <rppt@linux.ibm.com>: Patch series "arch, mm: deprecate DISCONTIGMEM", v2: alpha: switch from DISCONTIGMEM to SPARSEMEM ia64: remove custom __early_pfn_to_nid() ia64: remove 'ifdef CONFIG_ZONE_DMA32' statements ia64: discontig: paging_init(): remove local max_pfn calculation ia64: split virtual map initialization out of paging_init() ia64: forbid using VIRTUAL_MEM_MAP with FLATMEM ia64: make SPARSEMEM default and disable DISCONTIGMEM arm: remove 
CONFIG_ARCH_HAS_HOLES_MEMORYMODEL arm, arm64: move free_unused_memmap() to generic mm arc: use FLATMEM with freeing of unused memory map instead of DISCONTIGMEM m68k/mm: make node data and node setup depend on CONFIG_DISCONTIGMEM m68k/mm: enable use of generic memory_model.h for !DISCONTIGMEM m68k: deprecate DISCONTIGMEM Patch series "arch, mm: improve robustness of direct map manipulation", v7: mm: introduce debug_pagealloc_{map,unmap}_pages() helpers PM: hibernate: make direct map manipulations more explicit arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC arch, mm: make kernel_page_present() always available Vlastimil Babka <vbabka@suse.cz>: Patch series "disable pcplists during memory offline", v3: mm, page_alloc: clean up pageset high and batch update mm, page_alloc: calculate pageset high and batch once per zone mm, page_alloc: remove setup_pageset() mm, page_alloc: simplify pageset_update() mm, page_alloc: cache pageset high and batch in struct zone mm, page_alloc: move draining pcplists to page isolation users mm, page_alloc: disable pcplists during memory offline Miaohe Lin <linmiaohe@huawei.com>: include/linux/page-flags.h: remove unused __[Set|Clear]PagePrivate "Matthew Wilcox (Oracle)" <willy@infradead.org>: mm/page-flags: fix comment mm/page_alloc: add __free_pages() documentation Zou Wei <zou_wei@huawei.com>: mm/page_alloc: mark some symbols with static keyword David Hildenbrand <david@redhat.com>: mm/page_alloc: clear all pages in post_alloc_hook() with init_on_alloc=1 Lin Feng <linf@wangsu.com>: init/main: fix broken buffer_init when DEFERRED_STRUCT_PAGE_INIT set Lorenzo Stoakes <lstoakes@gmail.com>: mm: page_alloc: refactor setup_per_zone_lowmem_reserve() Muchun Song <songmuchun@bytedance.com>: mm/page_alloc: speed up the iteration of max_order Subsystem: mm/memory-failure Oscar Salvador <osalvador@suse.de>: Patch series "HWpoison: further fixes and cleanups", v5: mm,hwpoison: drain pcplists before bailing out for non-buddy 
zero-refcount page mm,hwpoison: take free pages off the buddy freelists mm,hwpoison: drop unneeded pcplist draining Patch series "HWPoison: Refactor get page interface", v2: mm,hwpoison: refactor get_any_page mm,hwpoison: disable pcplists before grabbing a refcount mm,hwpoison: remove drain_all_pages from shake_page mm,memory_failure: always pin the page in madvise_inject_error mm,hwpoison: return -EBUSY when migration fails Subsystem: mm/hugetlb Hui Su <sh_def@163.com>: mm/hugetlb.c: just use put_page_testzero() instead of page_count() Ralph Campbell <rcampbell@nvidia.com>: include/linux/huge_mm.h: remove extern keyword Alex Shi <alex.shi@linux.alibaba.com>: khugepaged: add parameter explanations for kernel-doc markup Liu Xiang <liu.xiang@zlingsmart.com>: mm: hugetlb: fix type of delta parameter and related local variables in gather_surplus_pages() Oscar Salvador <osalvador@suse.de>: mm,hugetlb: remove unneeded initialization Dan Carpenter <dan.carpenter@oracle.com>: hugetlb: fix an error code in hugetlb_reserve_pages() Subsystem: mm/vmscan Johannes Weiner <hannes@cmpxchg.org>: mm: don't wake kswapd prematurely when watermark boosting is disabled Lukas Bulwahn <lukas.bulwahn@gmail.com>: mm/vmscan: drop unneeded assignment in kswapd() "logic.yu" <hymmsx.yu@gmail.com>: mm/vmscan.c: remove the filename in the top of file comment Muchun Song <songmuchun@bytedance.com>: mm/page_isolation: do not isolate the max order page Subsystem: mm/z3fold Vitaly Wool <vitaly.wool@konsulko.com>: Patch series "z3fold: stability / rt fixes": z3fold: simplify freeing slots z3fold: stricter locking and more careful reclaim z3fold: remove preempt disabled sections for RT Subsystem: mm/compaction Yanfei Xu <yanfei.xu@windriver.com>: mm/compaction: rename 'start_pfn' to 'iteration_start_pfn' in compact_zone() Hui Su <sh_def@163.com>: mm/compaction: move compaction_suitable's comment to right place mm/compaction: make defer_compaction and compaction_deferred static Subsystem: mm/oom-kill 
Hui Su <sh_def@163.com>: mm/oom_kill: change comment and rename is_dump_unreclaim_slabs() Subsystem: mm/migration Long Li <lonuxli.64@gmail.com>: mm/migrate.c: fix comment spelling Ralph Campbell <rcampbell@nvidia.com>: mm/migrate.c: optimize migrate_vma_pages() mmu notifier "Matthew Wilcox (Oracle)" <willy@infradead.org>: mm: support THPs in zero_user_segments Yang Shi <shy828301@gmail.com>: Patch series "mm: misc migrate cleanup and improvement", v3: mm: truncate_complete_page() does not exist any more mm: migrate: simplify the logic for handling permanent failure mm: migrate: skip shared exec THP for NUMA balancing mm: migrate: clean up migrate_prep{_local} mm: migrate: return -ENOSYS if THP migration is unsupported Stephen Zhang <starzhangzsd@gmail.com>: mm: migrate: remove unused parameter in migrate_vma_insert_page() Subsystem: mm/cma Lecopzer Chen <lecopzer.chen@mediatek.com>: mm/cma.c: remove redundant cma_mutex lock Charan Teja Reddy <charante@codeaurora.org>: mm: cma: improve pr_debug log in cma_release() Subsystem: mm/page-poison Vlastimil Babka <vbabka@suse.cz>: Patch series "cleanup page poisoning", v3: mm, page_alloc: do not rely on the order of page_poison and init_on_alloc/free parameters mm, page_poison: use static key more efficiently kernel/power: allow hibernation with page_poison sanity checking mm, page_poison: remove CONFIG_PAGE_POISONING_NO_SANITY mm, page_poison: remove CONFIG_PAGE_POISONING_ZERO Subsystem: mm/userfaultfd Lokesh Gidra <lokeshgidra@google.com>: Patch series "Control over userfaultfd kernel-fault handling", v6: userfaultfd: add UFFD_USER_MODE_ONLY userfaultfd: add user-mode only option to unprivileged_userfaultfd sysctl knob Axel Rasmussen <axelrasmussen@google.com>: userfaultfd: selftests: make __{s,u}64 format specifiers portable Peter Xu <peterx@redhat.com>: Patch series "userfaultfd: selftests: Small fixes": userfaultfd/selftests: always dump something in modes userfaultfd/selftests: fix retval check for 
userfaultfd_open()
 userfaultfd/selftests: hint the test runner on required privilege

Subsystem: mm/zswap

Joe Perches <joe@perches.com>:
 mm/zswap: make struct kernel_param_ops definitions const

YueHaibing <yuehaibing@huawei.com>:
 mm/zswap: fix passing zero to 'PTR_ERR' warning

Barry Song <song.bao.hua@hisilicon.com>:
 mm/zswap: move to use crypto_acomp API for hardware acceleration

Subsystem: mm/zsmalloc

Miaohe Lin <linmiaohe@huawei.com>:
 mm/zsmalloc.c: rework the list_add code in insert_zspage()

Subsystem: mm/uaccess

Colin Ian King <colin.king@canonical.com>:
 mm/process_vm_access: remove redundant initialization of iov_r

Subsystem: mm/zram

Minchan Kim <minchan@kernel.org>:
 zram: support page writeback
 zram: add stat to gather incompressible pages since zram set up

Rui Salvaterra <rsalvaterra@gmail.com>:
 zram: break the strict dependency from lzo

Subsystem: mm/cleanups

Mauro Carvalho Chehab <mchehab+huawei@kernel.org>:
 mm: fix kernel-doc markups

Joe Perches <joe@perches.com>:
 Patch series "mm: Convert sysfs sprintf family to sysfs_emit", v2:
  mm: use sysfs_emit for struct kobject * uses
  mm: huge_memory: convert remaining use of sprintf to sysfs_emit and neatening
  mm:backing-dev: use sysfs_emit in macro defining functions
  mm: shmem: convert shmem_enabled_show to use sysfs_emit_at
  mm: slub: convert sysfs sprintf family to sysfs_emit/sysfs_emit_at

"Gustavo A. R. Silva" <gustavoars@kernel.org>:
 mm: fix fall-through warnings for Clang

Alexey Dobriyan <adobriyan@gmail.com>:
 mm: cleanup kstrto*() usage

 /mmap_lock.h | 107 ++
 a/Documentation/admin-guide/blockdev/zram.rst | 6
 a/Documentation/admin-guide/cgroup-v1/memcg_test.rst | 8
 a/Documentation/admin-guide/cgroup-v1/memory.rst | 42
 a/Documentation/admin-guide/cgroup-v2.rst | 11
 a/Documentation/admin-guide/mm/transhuge.rst | 15
 a/Documentation/admin-guide/sysctl/vm.rst | 15
 a/Documentation/core-api/memory-allocation.rst | 4
 a/Documentation/core-api/pin_user_pages.rst | 8
 a/Documentation/dev-tools/kasan.rst | 5
 a/Documentation/filesystems/tmpfs.rst | 8
 a/Documentation/vm/memory-model.rst | 3
 a/Documentation/vm/page_owner.rst | 12
 a/arch/Kconfig | 21
 a/arch/alpha/Kconfig | 8
 a/arch/alpha/include/asm/mmzone.h | 14
 a/arch/alpha/include/asm/page.h | 7
 a/arch/alpha/include/asm/pgtable.h | 12
 a/arch/alpha/include/asm/sparsemem.h | 18
 a/arch/alpha/kernel/setup.c | 1
 a/arch/arc/Kconfig | 3
 a/arch/arc/include/asm/page.h | 20
 a/arch/arc/mm/init.c | 29
 a/arch/arm/Kconfig | 12
 a/arch/arm/kernel/vdso.c | 9
 a/arch/arm/mach-bcm/Kconfig | 1
 a/arch/arm/mach-davinci/Kconfig | 1
 a/arch/arm/mach-exynos/Kconfig | 1
 a/arch/arm/mach-highbank/Kconfig | 1
 a/arch/arm/mach-omap2/Kconfig | 1
 a/arch/arm/mach-s5pv210/Kconfig | 1
 a/arch/arm/mach-tango/Kconfig | 1
 a/arch/arm/mm/init.c | 78 -
 a/arch/arm64/Kconfig | 9
 a/arch/arm64/include/asm/cacheflush.h | 1
 a/arch/arm64/include/asm/pgtable.h | 1
 a/arch/arm64/kernel/vdso.c | 41
 a/arch/arm64/mm/init.c | 68 -
 a/arch/arm64/mm/pageattr.c | 12
 a/arch/ia64/Kconfig | 11
 a/arch/ia64/include/asm/meminit.h | 2
 a/arch/ia64/mm/contig.c | 88 --
 a/arch/ia64/mm/discontig.c | 44 -
 a/arch/ia64/mm/init.c | 14
 a/arch/ia64/mm/numa.c | 30
 a/arch/m68k/Kconfig.cpu | 31
 a/arch/m68k/include/asm/page.h | 2
 a/arch/m68k/include/asm/page_mm.h | 7
 a/arch/m68k/include/asm/virtconvert.h | 7
 a/arch/m68k/mm/init.c | 10
 a/arch/mips/vdso/genvdso.c | 4
 a/arch/nds32/mm/mm-nds32.c | 6
 a/arch/powerpc/Kconfig | 5
 a/arch/riscv/Kconfig | 4
 a/arch/riscv/include/asm/pgtable.h | 2
 a/arch/riscv/include/asm/set_memory.h | 1
 a/arch/riscv/mm/pageattr.c | 31
 a/arch/s390/Kconfig | 4
 a/arch/s390/configs/debug_defconfig | 2
 a/arch/s390/configs/defconfig | 2
 a/arch/s390/kernel/vdso.c | 11
 a/arch/sparc/Kconfig | 4
 a/arch/sparc/mm/init_64.c | 2
 a/arch/x86/Kconfig | 5
 a/arch/x86/entry/vdso/vma.c | 17
 a/arch/x86/include/asm/set_memory.h | 1
 a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c | 2
 a/arch/x86/kernel/tboot.c | 1
 a/arch/x86/mm/pat/set_memory.c | 6
 a/drivers/base/node.c | 2
 a/drivers/block/zram/Kconfig | 42
 a/drivers/block/zram/zcomp.c | 2
 a/drivers/block/zram/zram_drv.c | 29
 a/drivers/block/zram/zram_drv.h | 1
 a/drivers/dax/device.c | 4
 a/drivers/dax/kmem.c | 2
 a/drivers/dma-buf/sync_file.c | 3
 a/drivers/edac/ghes_edac.c | 4
 a/drivers/firmware/efi/efi.c | 1
 a/drivers/gpu/drm/drm_atomic.c | 3
 a/drivers/hwtracing/intel_th/msu.c | 2
 a/drivers/ide/falconide.c | 2
 a/drivers/ide/ide-probe.c | 3
 a/drivers/misc/lkdtm/Makefile | 1
 a/drivers/pinctrl/pinctrl-utils.c | 2
 a/drivers/vhost/vringh.c | 3
 a/drivers/virtio/virtio_balloon.c | 6
 a/drivers/xen/unpopulated-alloc.c | 14
 a/fs/aio.c | 5
 a/fs/ntfs/file.c | 5
 a/fs/ntfs/inode.c | 2
 a/fs/ntfs/logfile.c | 3
 a/fs/ocfs2/cluster/tcp.c | 1
 a/fs/ocfs2/namei.c | 4
 a/fs/proc/kcore.c | 2
 a/fs/proc/meminfo.c | 2
 a/fs/userfaultfd.c | 20
 a/include/linux/cgroup-defs.h | 15
 a/include/linux/compaction.h | 12
 a/include/linux/fs.h | 2
 a/include/linux/gfp.h | 2
 a/include/linux/highmem.h | 19
 a/include/linux/huge_mm.h | 93 --
 a/include/linux/memcontrol.h | 148 ---
 a/include/linux/migrate.h | 4
 a/include/linux/mm.h | 118 +-
 a/include/linux/mm_types.h | 8
 a/include/linux/mmap_lock.h | 94 ++
 a/include/linux/mmzone.h | 50 -
 a/include/linux/page-flags.h | 6
 a/include/linux/page_ext.h | 8
 a/include/linux/pagevec.h | 3
 a/include/linux/poison.h | 4
 a/include/linux/rmap.h | 1
 a/include/linux/sched/mm.h | 16
 a/include/linux/set_memory.h | 5
 a/include/linux/shmem_fs.h | 6
 a/include/linux/slab.h | 18
 a/include/linux/vmalloc.h | 8
 a/include/linux/vmstat.h | 104 ++
 a/include/trace/events/sched.h | 84 +
 a/include/uapi/linux/const.h | 5
 a/include/uapi/linux/ethtool.h | 2
 a/include/uapi/linux/kernel.h | 9
 a/include/uapi/linux/lightnvm.h | 2
 a/include/uapi/linux/mroute6.h | 2
 a/include/uapi/linux/netfilter/x_tables.h | 2
 a/include/uapi/linux/netlink.h | 2
 a/include/uapi/linux/sysctl.h | 2
 a/include/uapi/linux/userfaultfd.h | 9
 a/init/main.c | 6
 a/ipc/shm.c | 8
 a/kernel/cgroup/cgroup.c | 12
 a/kernel/fork.c | 3
 a/kernel/kthread.c | 29
 a/kernel/power/hibernate.c | 2
 a/kernel/power/power.h | 2
 a/kernel/power/snapshot.c | 52 +
 a/kernel/ptrace.c | 2
 a/kernel/workqueue.c | 3
 a/lib/locking-selftest.c | 47 +
 a/lib/test_kasan_module.c | 29
 a/mm/Kconfig | 25
 a/mm/Kconfig.debug | 28
 a/mm/Makefile | 4
 a/mm/backing-dev.c | 8
 a/mm/cma.c | 6
 a/mm/compaction.c | 29
 a/mm/filemap.c | 823 ++++++++++---------
 a/mm/gup.c | 329 ++-----
 a/mm/gup_benchmark.c | 210 ----
 a/mm/gup_test.c | 299 ++++++
 a/mm/gup_test.h | 40
 a/mm/highmem.c | 52 +
 a/mm/huge_memory.c | 86 +
 a/mm/hugetlb.c | 28
 a/mm/init-mm.c | 1
 a/mm/internal.h | 5
 a/mm/kasan/generic.c | 3
 a/mm/kasan/report.c | 4
 a/mm/khugepaged.c | 58 -
 a/mm/ksm.c | 50 -
 a/mm/madvise.c | 14
 a/mm/mapping_dirty_helpers.c | 6
 a/mm/memblock.c | 80 +
 a/mm/memcontrol.c | 170 +--
 a/mm/memory-failure.c | 322 +++---
 a/mm/memory.c | 24
 a/mm/memory_hotplug.c | 44 -
 a/mm/mempolicy.c | 8
 a/mm/migrate.c | 183 ++--
 a/mm/mm_init.c | 1
 a/mm/mmap.c | 22
 a/mm/mmap_lock.c | 230 +++++
 a/mm/mmu_notifier.c | 7
 a/mm/mmzone.c | 14
 a/mm/mremap.c | 282 ++++--
 a/mm/nommu.c | 8
 a/mm/oom_kill.c | 14
 a/mm/page_alloc.c | 517 ++++++-----
 a/mm/page_counter.c | 4
 a/mm/page_ext.c | 10
 a/mm/page_isolation.c | 18
 a/mm/page_owner.c | 17
 a/mm/page_poison.c | 56 -
 a/mm/page_vma_mapped.c | 9
 a/mm/process_vm_access.c | 2
 a/mm/rmap.c | 9
 a/mm/shmem.c | 39
 a/mm/slab.c | 10
 a/mm/slab.h | 9
 a/mm/slab_common.c | 10
 a/mm/slob.c | 6
 a/mm/slub.c | 156 +--
 a/mm/swap.c | 12
 a/mm/swap_state.c | 7
 a/mm/swapfile.c | 14
 a/mm/truncate.c | 18
 a/mm/vmalloc.c | 105 +-
 a/mm/vmscan.c | 21
 a/mm/vmstat.c | 6
 a/mm/workingset.c | 8
 a/mm/z3fold.c | 215 ++--
 a/mm/zsmalloc.c | 11
 a/mm/zswap.c | 193 +++-
 a/sound/core/pcm_lib.c | 4
 a/tools/include/linux/poison.h | 6
 a/tools/testing/selftests/vm/.gitignore | 4
 a/tools/testing/selftests/vm/Makefile | 41
 a/tools/testing/selftests/vm/check_config.sh | 31
 a/tools/testing/selftests/vm/config | 2
 a/tools/testing/selftests/vm/gup_benchmark.c | 143 ---
 a/tools/testing/selftests/vm/gup_test.c | 258 +++++
 a/tools/testing/selftests/vm/hmm-tests.c | 10
 a/tools/testing/selftests/vm/mremap_test.c | 344 +++++++
 a/tools/testing/selftests/vm/run_vmtests | 51 -
 a/tools/testing/selftests/vm/userfaultfd.c | 94 --

 217 files changed, 4817 insertions(+), 3369 deletions(-)

^ permalink raw reply	[flat|nested] 349+ messages in thread
* Re: incoming
  2020-12-15 3:02 incoming Andrew Morton
@ 2020-12-15 3:25 ` Linus Torvalds
  2020-12-15 3:30   ` incoming Linus Torvalds
  0 siblings, 1 reply; 349+ messages in thread

From: Linus Torvalds @ 2020-12-15 3:25 UTC (permalink / raw)
To: Andrew Morton, Konstantin Ryabitsev; +Cc: mm-commits, Linux-MM

On Mon, Dec 14, 2020 at 7:02 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> 200 patches, based on 2c85ebc57b3e1817b6ce1a6b703928e113a90442.

I haven't actually processed the patches yet, but I have a question
for Konstantin wrt b4.

All the patches except for _one_ get a nice little green check-mark
next to them when I use 'git am' on this series.

The one that did not was [patch 192/200].

I have no idea why - and it doesn't matter a lot to me, it just stood
out as being different. I'm assuming Andrew has started doing patch
attestation, and that patch failed. But if so, maybe Konstantin wants
to know what went wrong.

Konstantin?

            Linus

^ permalink raw reply	[flat|nested] 349+ messages in thread
* Re: incoming
  2020-12-15 3:25 ` incoming Linus Torvalds
@ 2020-12-15 3:30 ` Linus Torvalds
  2020-12-15 14:04   ` incoming Konstantin Ryabitsev
  0 siblings, 1 reply; 349+ messages in thread

From: Linus Torvalds @ 2020-12-15 3:30 UTC (permalink / raw)
To: Andrew Morton, Konstantin Ryabitsev; +Cc: mm-commits, Linux-MM

On Mon, Dec 14, 2020 at 7:25 PM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> All the patches except for _one_ get a nice little green check-mark
> next to them when I use 'git am' on this series.
>
> The one that did not was [patch 192/200].
>
> I have no idea why

Hmm. It looks like that patch is the only one in the series with the
">From" marker in the commit message, from the silly "clarify that
this isn't the first line in a new message in mbox format".

And "b4 am" has turned the single ">" into two, making the stupid
marker worse, and actually corrupting the end result.

Coincidence? Or cause?

            Linus

^ permalink raw reply	[flat|nested] 349+ messages in thread
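The ">From" marker Linus refers to exists because classic mbox delimits messages with lines beginning "From ", so a body line that itself starts with "From " must be quoted or the file is misparsed. A minimal Python sketch of the failure mode (the sender address and message text are made up for illustration):

```python
import mailbox
import tempfile

# mbox delimits messages with "From " lines, so a body line that itself
# begins with "From " looks like the start of a new message.
raw = (
    "From sender@example.org Mon Dec 14 19:02:00 2020\n"
    "Subject: demo\n"
    "\n"
    "A normal body line.\n"
    "From this point on, the parser believes a second message begins.\n"
)

with tempfile.NamedTemporaryFile("w", suffix=".mbox", delete=False) as f:
    f.write(raw)
    path = f.name

mbox = mailbox.mbox(path)
# The single message above is misread as two separate messages -- which
# is why mbox writers quote such body lines as ">From ".
print(len(mbox))  # -> 2
```

This is the corruption the ">From " quoting convention was invented to avoid; the disagreement between tools is only about how that quoting is applied and undone.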
* Re: incoming
  2020-12-15 3:30 ` incoming Linus Torvalds
@ 2020-12-15 14:04 ` Konstantin Ryabitsev
  0 siblings, 0 replies; 349+ messages in thread

From: Konstantin Ryabitsev @ 2020-12-15 14:04 UTC (permalink / raw)
To: Linus Torvalds; +Cc: Andrew Morton, mm-commits, Linux-MM

On Mon, Dec 14, 2020 at 07:30:54PM -0800, Linus Torvalds wrote:
> > All the patches except for _one_ get a nice little green check-mark
> > next to them when I use 'git am' on this series.
> >
> > The one that did not was [patch 192/200].
> >
> > I have no idea why
>
> Hmm. It looks like that patch is the only one in the series with the
> ">From" marker in the commit message, from the silly "clarify that
> this isn't the first line in a new message in mbox format".
>
> And "b4 am" has turned the single ">" into two, making the stupid
> marker worse, and actually corrupting the end result.

It's a bug in b4 that I overlooked. Public-inbox emits mboxrd-formatted
.mbox files, while Python's mailbox.mbox consumes mboxo only. The main
distinction between the two is precisely that mboxrd will convert
">From " into ">>From " in an attempt to avoid corruption during
escape/unescape (it didn't end up fixing the problem 100% and mostly
introduced incompatibilities like this one).

I have a fix in master/stable-0.6.y and I'll release a 0.6.2 before the
end of the week.

Thanks for the report.

-K

^ permalink raw reply	[flat|nested] 349+ messages in thread
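The mboxrd/mboxo mismatch Konstantin describes can be sketched in a few lines of Python. The helper names below are invented for this illustration, and the mboxo unescape rule shown is one common variant; the point is only that an mboxrd-quoted ">>From " line passes through an mboxo-only reader untouched, matching the doubled ">" Linus saw:

```python
import re

# mboxrd: on write, quote every line already matching /^>*From /, so a
# matching unescape can safely strip exactly one ">" from each such line.
def mboxrd_escape(line: str) -> str:
    return ">" + line if re.match(r">*From ", line) else line

def mboxrd_unescape(line: str) -> str:
    return line[1:] if re.match(r">+From ", line) else line

# mboxo: only the single-level ">From " form is ever unquoted.
def mboxo_unescape(line: str) -> str:
    return line[1:] if line.startswith(">From ") else line

body = ">From this commit on, ..."        # the marker line in the patch
stored = mboxrd_escape(body)              # an mboxrd writer emits ">>From ..."
print(stored)                             # -> >>From this commit on, ...
print(mboxrd_unescape(stored) == body)    # -> True (mboxrd reader recovers it)
print(mboxo_unescape(stored))             # ">>From ..." survives: the corruption
```

Feeding mboxrd output through an mboxo-only unescape, as b4 effectively did, leaves the extra ">" in the commit message.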
* incoming
@ 2020-12-11 21:35 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2020-12-11 21:35 UTC (permalink / raw)
To: Linus Torvalds; +Cc: mm-commits, linux-mm

8 patches, based on 33dc9614dc208291d0c4bcdeb5d30d481dcd2c4c.

Subsystems affected by this patch series: mm/pagecache proc selftests
kbuild mm/kasan mm/hugetlb

Subsystem: mm/pagecache

Andrew Morton <akpm@linux-foundation.org>:
 revert "mm/filemap: add static for function __add_to_page_cache_locked"

Subsystem: proc

Miles Chen <miles.chen@mediatek.com>:
 proc: use untagged_addr() for pagemap_read addresses

Subsystem: selftests

Arnd Bergmann <arnd@arndb.de>:
 selftest/fpu: avoid clang warning

Subsystem: kbuild

Arnd Bergmann <arnd@arndb.de>:
 kbuild: avoid static_assert for genksyms
 initramfs: fix clang build failure
 elfcore: fix building with clang

Subsystem: mm/kasan

Kuan-Ying Lee <Kuan-Ying.Lee@mediatek.com>:
 kasan: fix object remaining in offline per-cpu quarantine

Subsystem: mm/hugetlb

Gerald Schaefer <gerald.schaefer@linux.ibm.com>:
 mm/hugetlb: clear compound_nr before freeing gigantic pages

 fs/proc/task_mmu.c | 8 ++++++--
 include/linux/build_bug.h | 5 +++++
 include/linux/elfcore.h | 22 ++++++++++++++++++++++
 init/initramfs.c | 2 +-
 kernel/Makefile | 1 -
 kernel/elfcore.c | 26 --------------------------
 lib/Makefile | 3 ++-
 mm/filemap.c | 2 +-
 mm/hugetlb.c | 1 +
 mm/kasan/quarantine.c | 39 +++++++++++++++++++++++++++++++++++++++

 10 files changed, 77 insertions(+), 32 deletions(-)

^ permalink raw reply	[flat|nested] 349+ messages in thread
* incoming
@ 2020-12-06 6:14 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2020-12-06 6:14 UTC (permalink / raw)
To: Linus Torvalds; +Cc: mm-commits, linux-mm

12 patches, based on 33256ce194110874d4bc90078b577c59f9076c59.

Subsystems affected by this patch series: lib coredump mm/memcg
mm/zsmalloc mm/swap mailmap mm/selftests mm/pagecache mm/hugetlb
mm/pagemap

Subsystem: lib

Randy Dunlap <rdunlap@infradead.org>:
 zlib: export S390 symbols for zlib modules

Subsystem: coredump

Menglong Dong <dong.menglong@zte.com.cn>:
 coredump: fix core_pattern parse error

Subsystem: mm/memcg

Roman Gushchin <guro@fb.com>:
 mm: memcg/slab: fix obj_cgroup_charge() return value handling

Yang Shi <shy828301@gmail.com>:
 mm: list_lru: set shrinker map bit when child nr_items is not zero

Subsystem: mm/zsmalloc

Minchan Kim <minchan@kernel.org>:
 mm/zsmalloc.c: drop ZSMALLOC_PGTABLE_MAPPING

Subsystem: mm/swap

Qian Cai <qcai@redhat.com>:
 mm/swapfile: do not sleep with a spin lock held

Subsystem: mailmap

Uwe Kleine-König <u.kleine-koenig@pengutronix.de>:
 mailmap: add two more addresses of Uwe Kleine-König

Subsystem: mm/selftests

Xingxing Su <suxingxing@loongson.cn>:
 tools/testing/selftests/vm: fix build error

Axel Rasmussen <axelrasmussen@google.com>:
 userfaultfd: selftests: fix SIGSEGV if huge mmap fails

Subsystem: mm/pagecache

Alex Shi <alex.shi@linux.alibaba.com>:
 mm/filemap: add static for function __add_to_page_cache_locked

Subsystem: mm/hugetlb

Mike Kravetz <mike.kravetz@oracle.com>:
 hugetlb_cgroup: fix offline of hugetlb cgroup with reservations

Subsystem: mm/pagemap

Liu Zixian <liuzixian4@huawei.com>:
 mm/mmap.c: fix mmap return value when vma is merged after call_mmap()

 .mailmap | 2 +
 arch/arm/configs/omap2plus_defconfig | 1
 fs/coredump.c | 3 +
 include/linux/zsmalloc.h | 1
 lib/zlib_dfltcc/dfltcc_inflate.c | 3 +
 mm/Kconfig | 13 -------
 mm/filemap.c | 2 -
 mm/hugetlb_cgroup.c | 8 +---
 mm/list_lru.c | 10 ++---
 mm/mmap.c | 26 ++++++--------
 mm/slab.h | 40 +++++++++++++---------
 mm/swapfile.c | 4 +-
 mm/zsmalloc.c | 54 -------------------------------
 tools/testing/selftests/vm/Makefile | 4 ++
 tools/testing/selftests/vm/userfaultfd.c | 25 +++++++++-----

 15 files changed, 75 insertions(+), 121 deletions(-)

^ permalink raw reply	[flat|nested] 349+ messages in thread
* incoming
@ 2020-11-22 6:16 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2020-11-22 6:16 UTC (permalink / raw)
To: Linus Torvalds; +Cc: mm-commits, linux-mm

8 patches, based on a349e4c659609fd20e4beea89e5c4a4038e33a95.

Subsystems affected by this patch series: mm/madvise kbuild mm/pagemap
mm/readahead mm/memcg mm/userfaultfd vfs-akpm mm/madvise

Subsystem: mm/madvise

Eric Dumazet <edumazet@google.com>:
 mm/madvise: fix memory leak from process_madvise

Subsystem: kbuild

Nick Desaulniers <ndesaulniers@google.com>:
 compiler-clang: remove version check for BPF Tracing

Subsystem: mm/pagemap

Dan Williams <dan.j.williams@intel.com>:
 mm: fix phys_to_target_node() and memory_add_physaddr_to_nid() exports

Subsystem: mm/readahead

"Matthew Wilcox (Oracle)" <willy@infradead.org>:
 mm: fix readahead_page_batch for retry entries

Subsystem: mm/memcg

Muchun Song <songmuchun@bytedance.com>:
 mm: memcg/slab: fix root memcg vmstats

Subsystem: mm/userfaultfd

Gerald Schaefer <gerald.schaefer@linux.ibm.com>:
 mm/userfaultfd: do not access vma->vm_mm after calling handle_userfault()

Subsystem: vfs-akpm

Yicong Yang <yangyicong@hisilicon.com>:
 libfs: fix error cast of negative value in simple_attr_write()

Subsystem: mm/madvise

"Matthew Wilcox (Oracle)" <willy@infradead.org>:
 mm: fix madvise WILLNEED performance problem

 arch/ia64/include/asm/sparsemem.h | 6 ++++++
 arch/powerpc/include/asm/mmzone.h | 5 +++++
 arch/powerpc/include/asm/sparsemem.h | 5 ++---
 arch/powerpc/mm/mem.c | 1 +
 arch/x86/include/asm/sparsemem.h | 10 ++++++++++
 arch/x86/mm/numa.c | 2 ++
 drivers/dax/Kconfig | 1 -
 fs/libfs.c | 6 ++++--
 include/linux/compiler-clang.h | 2 ++
 include/linux/memory_hotplug.h | 14 --------------
 include/linux/numa.h | 30 +++++++++++++++++++++++++++++-
 include/linux/pagemap.h | 2 ++
 mm/huge_memory.c | 9 ++++-----
 mm/madvise.c | 4 +---
 mm/memcontrol.c | 9 +++++++--
 mm/memory_hotplug.c | 18 ------------------

 16 files changed, 75 insertions(+), 49 deletions(-)

^ permalink raw reply	[flat|nested] 349+ messages in thread
* incoming
@ 2020-11-14 6:51 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2020-11-14 6:51 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits

14 patches, based on 9e6a39eae450b81c8b2c8cbbfbdf8218e9b40c81.

Subsystems affected by this patch series: mm/migration mm/vmscan mailmap
mm/slub mm/gup kbuild reboot kernel/watchdog mm/memcg mm/hugetlbfs panic
ocfs2

Subsystem: mm/migration

Zi Yan <ziy@nvidia.com>:
 mm/compaction: count pages and stop correctly during page isolation
 mm/compaction: stop isolation if too many pages are isolated and we have pages to migrate

Subsystem: mm/vmscan

Nicholas Piggin <npiggin@gmail.com>:
 mm/vmscan: fix NR_ISOLATED_FILE corruption on 64-bit

Subsystem: mailmap

Dmitry Baryshkov <dbaryshkov@gmail.com>:
 mailmap: fix entry for Dmitry Baryshkov/Eremin-Solenikov

Subsystem: mm/slub

Laurent Dufour <ldufour@linux.ibm.com>:
 mm/slub: fix panic in slab_alloc_node()

Subsystem: mm/gup

Jason Gunthorpe <jgg@nvidia.com>:
 mm/gup: use unpin_user_pages() in __gup_longterm_locked()

Subsystem: kbuild

Arvind Sankar <nivedita@alum.mit.edu>:
 compiler.h: fix barrier_data() on clang

Subsystem: reboot

Matteo Croce <mcroce@microsoft.com>:
 Patch series "fix parsing of reboot= cmdline", v3:
  Revert "kernel/reboot.c: convert simple_strtoul to kstrtoint"
  reboot: fix overflow parsing reboot cpu number

Subsystem: kernel/watchdog

Santosh Sivaraj <santosh@fossix.org>:
 kernel/watchdog: fix watchdog_allowed_mask not used warning

Subsystem: mm/memcg

Muchun Song <songmuchun@bytedance.com>:
 mm: memcontrol: fix missing wakeup polling thread

Subsystem: mm/hugetlbfs

Mike Kravetz <mike.kravetz@oracle.com>:
 hugetlbfs: fix anon huge page migration race

Subsystem: panic

Christophe Leroy <christophe.leroy@csgroup.eu>:
 panic: don't dump stack twice on warn

Subsystem: ocfs2

Wengang Wang <wen.gang.wang@oracle.com>:
 ocfs2: initialize ip_next_orphan

 .mailmap | 5 +-
 fs/ocfs2/super.c | 1
 include/asm-generic/barrier.h | 1
 include/linux/compiler-clang.h | 6 --
 include/linux/compiler-gcc.h | 19 --------
 include/linux/compiler.h | 18 +++++++-
 include/linux/memcontrol.h | 11 ++++-
 kernel/panic.c | 3 -
 kernel/reboot.c | 28 ++++++------
 kernel/watchdog.c | 4 -
 mm/compaction.c | 12 +++--
 mm/gup.c | 14 ++++--
 mm/hugetlb.c | 90 ++---------------------------------
 mm/memory-failure.c | 36 +++++++---------
 mm/migrate.c | 46 +++++++++++---------
 mm/rmap.c | 5 --
 mm/slub.c | 2
 mm/vmscan.c | 5 +-

 18 files changed, 119 insertions(+), 187 deletions(-)

^ permalink raw reply	[flat|nested] 349+ messages in thread
* incoming
@ 2020-11-02 1:06 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2020-11-02 1:06 UTC (permalink / raw)
To: Linus Torvalds; +Cc: mm-commits, linux-mm

15 patches, based on 3cea11cd5e3b00d91caf0b4730194039b45c5891.

Subsystems affected by this patch series: mm/memremap mm/memcg
mm/slab-generic mm/kasan mm/mempolicy signals lib mm/pagecache kthread
mm/oom-kill mm/pagemap epoll core-kernel

Subsystem: mm/memremap

Ralph Campbell <rcampbell@nvidia.com>:
 mm/mremap_pages: fix static key devmap_managed_key updates

Subsystem: mm/memcg

Mike Kravetz <mike.kravetz@oracle.com>:
 hugetlb_cgroup: fix reservation accounting

zhongjiang-ali <zhongjiang-ali@linux.alibaba.com>:
 mm: memcontrol: correct the NR_ANON_THPS counter of hierarchical memcg

Roman Gushchin <guro@fb.com>:
 mm: memcg: link page counters to root if use_hierarchy is false

Subsystem: mm/slab-generic

Subsystem: mm/kasan

Andrey Konovalov <andreyknvl@google.com>:
 kasan: adopt KUNIT tests to SW_TAGS mode

Subsystem: mm/mempolicy

Shijie Luo <luoshijie1@huawei.com>:
 mm: mempolicy: fix potential pte_unmap_unlock pte error

Subsystem: signals

Oleg Nesterov <oleg@redhat.com>:
 ptrace: fix task_join_group_stop() for the case when current is traced

Subsystem: lib

Vasily Gorbik <gor@linux.ibm.com>:
 lib/crc32test: remove extra local_irq_disable/enable

Subsystem: mm/pagecache

Jason Yan <yanaijie@huawei.com>:
 mm/truncate.c: make __invalidate_mapping_pages() static

Subsystem: kthread

Zqiang <qiang.zhang@windriver.com>:
 kthread_worker: prevent queuing delayed work from timer_fn when it is being canceled

Subsystem: mm/oom-kill

Charles Haithcock <chaithco@redhat.com>:
 mm, oom: keep oom_adj under or at upper limit when printing

Subsystem: mm/pagemap

Jason Gunthorpe <jgg@nvidia.com>:
 mm: always have io_remap_pfn_range() set pgprot_decrypted()

Subsystem: epoll

Soheil Hassas Yeganeh <soheil@google.com>:
 epoll: check ep_events_available() upon timeout
 epoll: add a selftest for epoll timeout race

Subsystem: core-kernel

Lukas Bulwahn <lukas.bulwahn@gmail.com>:
 kernel/hung_task.c: make type annotations consistent

 fs/eventpoll.c | 16 +
 fs/proc/base.c | 2
 include/linux/mm.h | 9
 include/linux/pgtable.h | 4
 kernel/hung_task.c | 3
 kernel/kthread.c | 3
 kernel/signal.c | 19 -
 lib/crc32test.c | 4
 lib/test_kasan.c | 149 +++++++---
 mm/hugetlb.c | 20 -
 mm/memcontrol.c | 25 +
 mm/mempolicy.c | 6
 mm/memremap.c | 39 +-
 mm/truncate.c | 2
 tools/testing/selftests/filesystems/epoll/epoll_wakeup_test.c | 95 ++++++

 15 files changed, 290 insertions(+), 106 deletions(-)

^ permalink raw reply	[flat|nested] 349+ messages in thread
* incoming
@ 2020-10-17 23:13 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2020-10-17 23:13 UTC (permalink / raw)
To: Linus Torvalds; +Cc: mm-commits, linux-mm

40 patches, based on 9d9af1007bc08971953ae915d88dc9bb21344b53.

Subsystems affected by this patch series: ia64 mm/memcg mm/migration
mm/pagemap mm/gup mm/madvise mm/vmalloc misc

Subsystem: ia64

Krzysztof Kozlowski <krzk@kernel.org>:
 ia64: fix build error with !COREDUMP

Subsystem: mm/memcg

Roman Gushchin <guro@fb.com>:
 mm, memcg: rework remote charging API to support nesting
 Patch series "mm: kmem: kernel memory accounting in an interrupt context":
  mm: kmem: move memcg_kmem_bypass() calls to get_mem/obj_cgroup_from_current()
  mm: kmem: remove redundant checks from get_obj_cgroup_from_current()
  mm: kmem: prepare remote memcg charging infra for interrupt contexts
  mm: kmem: enable kernel memcg accounting from interrupt contexts

Subsystem: mm/migration

Joonsoo Kim <iamjoonsoo.kim@lge.com>:
 mm/memory-failure: remove a wrapper for alloc_migration_target()
 mm/memory_hotplug: remove a wrapper for alloc_migration_target()

Miaohe Lin <linmiaohe@huawei.com>:
 mm/migrate: avoid possible unnecessary process right check in kernel_move_pages()

Subsystem: mm/pagemap

"Liam R. Howlett" <Liam.Howlett@Oracle.com>:
 mm/mmap: add inline vma_next() for readability of mmap code
 mm/mmap: add inline munmap_vma_range() for code readability

Subsystem: mm/gup

Jann Horn <jannh@google.com>:
 mm/gup_benchmark: take the mmap lock around GUP
 binfmt_elf: take the mmap lock around find_extend_vma()
 mm/gup: assert that the mmap lock is held in __get_user_pages()

John Hubbard <jhubbard@nvidia.com>:
 Patch series "selftests/vm: gup_test, hmm-tests, assorted improvements", v2:
  mm/gup_benchmark: rename to mm/gup_test
  selftests/vm: use a common gup_test.h
  selftests/vm: rename run_vmtests --> run_vmtests.sh
  selftests/vm: minor cleanup: Makefile and gup_test.c
  selftests/vm: only some gup_test items are really benchmarks
  selftests/vm: gup_test: introduce the dump_pages() sub-test
  selftests/vm: run_vmtests.sh: update and clean up gup_test invocation
  selftests/vm: hmm-tests: remove the libhugetlbfs dependency
  selftests/vm: 10x speedup for hmm-tests

Subsystem: mm/madvise

Minchan Kim <minchan@kernel.org>:
 Patch series "introduce memory hinting API for external process", v9:
  mm/madvise: pass mm to do_madvise
  pid: move pidfd_get_pid() to pid.c
  mm/madvise: introduce process_madvise() syscall: an external memory hinting API

Subsystem: mm/vmalloc

"Matthew Wilcox (Oracle)" <willy@infradead.org>:
 Patch series "remove alloc_vm_area", v4:
  mm: update the documentation for vfree

Christoph Hellwig <hch@lst.de>:
 mm: add a VM_MAP_PUT_PAGES flag for vmap
 mm: add a vmap_pfn function
 mm: allow a NULL fn callback in apply_to_page_range
 zsmalloc: switch from alloc_vm_area to get_vm_area
 drm/i915: use vmap in shmem_pin_map
 drm/i915: stop using kmap in i915_gem_object_map
 drm/i915: use vmap in i915_gem_object_map
 xen/xenbus: use apply_to_page_range directly in xenbus_map_ring_pv
 x86/xen: open code alloc_vm_area in arch_gnttab_valloc
 mm: remove alloc_vm_area
 Patch series "two small vmalloc cleanups":
  mm: cleanup the gfp_mask handling in __vmalloc_area_node
  mm: remove the filename in the top of file comment in vmalloc.c

Subsystem: misc

Tian Tao <tiantao6@hisilicon.com>:
 mm: remove duplicate include statement in mmu.c

 Documentation/core-api/pin_user_pages.rst | 8
 arch/alpha/kernel/syscalls/syscall.tbl | 1
 arch/arm/mm/mmu.c | 1
 arch/arm/tools/syscall.tbl | 1
 arch/arm64/include/asm/unistd.h | 2
 arch/arm64/include/asm/unistd32.h | 2
 arch/ia64/kernel/Makefile | 2
 arch/ia64/kernel/syscalls/syscall.tbl | 1
 arch/m68k/kernel/syscalls/syscall.tbl | 1
 arch/microblaze/kernel/syscalls/syscall.tbl | 1
 arch/mips/kernel/syscalls/syscall_n32.tbl | 1
 arch/mips/kernel/syscalls/syscall_n64.tbl | 1
 arch/mips/kernel/syscalls/syscall_o32.tbl | 1
 arch/parisc/kernel/syscalls/syscall.tbl | 1
 arch/powerpc/kernel/syscalls/syscall.tbl | 1
 arch/s390/configs/debug_defconfig | 2
 arch/s390/configs/defconfig | 2
 arch/s390/kernel/syscalls/syscall.tbl | 1
 arch/sh/kernel/syscalls/syscall.tbl | 1
 arch/sparc/kernel/syscalls/syscall.tbl | 1
 arch/x86/entry/syscalls/syscall_32.tbl | 1
 arch/x86/entry/syscalls/syscall_64.tbl | 1
 arch/x86/xen/grant-table.c | 27 +-
 arch/xtensa/kernel/syscalls/syscall.tbl | 1
 drivers/gpu/drm/i915/Kconfig | 1
 drivers/gpu/drm/i915/gem/i915_gem_pages.c | 136 ++++------
 drivers/gpu/drm/i915/gt/shmem_utils.c | 78 +----
 drivers/xen/xenbus/xenbus_client.c | 30 +-
 fs/binfmt_elf.c | 3
 fs/buffer.c | 6
 fs/io_uring.c | 2
 fs/notify/fanotify/fanotify.c | 5
 fs/notify/inotify/inotify_fsnotify.c | 5
 include/linux/memcontrol.h | 12
 include/linux/mm.h | 2
 include/linux/pid.h | 1
 include/linux/sched/mm.h | 43 +--
 include/linux/syscalls.h | 2
 include/linux/vmalloc.h | 7
 include/uapi/asm-generic/unistd.h | 4
 kernel/exit.c | 19 -
 kernel/pid.c | 19 +
 kernel/sys_ni.c | 1
 mm/Kconfig | 24 +
 mm/Makefile | 2
 mm/gup.c | 2
 mm/gup_benchmark.c | 225 ------------------
 mm/gup_test.c | 295 +++++++++++++++++++++--
 mm/gup_test.h | 40 ++-
 mm/madvise.c | 125 ++++++++--
 mm/memcontrol.c | 83 ++++--
 mm/memory-failure.c | 18 -
 mm/memory.c | 16 -
 mm/memory_hotplug.c | 46 +--
 mm/migrate.c | 71 +++--
 mm/mmap.c | 74 ++++-
 mm/nommu.c | 7
 mm/percpu.c | 3
 mm/slab.h | 3
 mm/vmalloc.c | 147 +++++------
 mm/zsmalloc.c | 10
 tools/testing/selftests/vm/.gitignore | 3
 tools/testing/selftests/vm/Makefile | 40 ++-
 tools/testing/selftests/vm/check_config.sh | 31 ++
 tools/testing/selftests/vm/config | 2
 tools/testing/selftests/vm/gup_benchmark.c | 143 ----------
 tools/testing/selftests/vm/gup_test.c | 260 ++++++++++++++++++--
 tools/testing/selftests/vm/hmm-tests.c | 12
 tools/testing/selftests/vm/run_vmtests | 334 --------------------------
 tools/testing/selftests/vm/run_vmtests.sh | 350 +++++++++++++++++++++++++++-

 70 files changed, 1580 insertions(+), 1224 deletions(-)

^ permalink raw reply	[flat|nested] 349+ messages in thread
* incoming
@ 2020-10-16 2:40 Andrew Morton
  2020-10-16 3:03 ` incoming Andrew Morton
  0 siblings, 1 reply; 349+ messages in thread

From: Andrew Morton @ 2020-10-16 2:40 UTC (permalink / raw)
To: Linus Torvalds; +Cc: mm-commits, linux-mm

- most of the rest of mm/

- various other subsystems

156 patches, based on 578a7155c5a1894a789d4ece181abf9d25dc6b0d.

Subsystems affected by this patch series: mm/dax mm/debug mm/thp
mm/readahead mm/page-poison mm/util mm/memory-hotplug mm/zram mm/cleanups
misc core-kernel get_maintainer MAINTAINERS lib bitops checkpatch binfmt
ramfs autofs nilfs rapidio panic relay kgdb ubsan romfs fault-injection

Subsystem: mm/dax

Dan Williams <dan.j.williams@intel.com>:
 device-dax/kmem: fix resource release

Subsystem: mm/debug

"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>:
 Patch series "mm/debug_vm_pgtable fixes", v4:
  powerpc/mm: add DEBUG_VM WARN for pmd_clear
  powerpc/mm: move setting pte specific flags to pfn_pte
  mm/debug_vm_pgtable/ppc64: avoid setting top bits in radom value
  mm/debug_vm_pgtables/hugevmap: use the arch helper to identify huge vmap support.
  mm/debug_vm_pgtable/savedwrite: enable savedwrite test with CONFIG_NUMA_BALANCING
  mm/debug_vm_pgtable/THP: mark the pte entry huge before using set_pmd/pud_at
  mm/debug_vm_pgtable/set_pte/pmd/pud: don't use set_*_at to update an existing pte entry
  mm/debug_vm_pgtable/locks: move non page table modifying test together
  mm/debug_vm_pgtable/locks: take correct page table lock
  mm/debug_vm_pgtable/thp: use page table depost/withdraw with THP
  mm/debug_vm_pgtable/pmd_clear: don't use pmd/pud_clear on pte entries
  mm/debug_vm_pgtable/hugetlb: disable hugetlb test on ppc64
  mm/debug_vm_pgtable: avoid none pte in pte_clear_test
  mm/debug_vm_pgtable: avoid doing memory allocation with pgtable_t mapped.

Subsystem: mm/thp

"Matthew Wilcox (Oracle)" <willy@infradead.org>:
 Patch series "Fix read-only THP for non-tmpfs filesystems":
  XArray: add xa_get_order
  XArray: add xas_split
  mm/filemap: fix storing to a THP shadow entry
 Patch series "Remove assumptions of THP size":
  mm/filemap: fix page cache removal for arbitrary sized THPs
  mm/memory: remove page fault assumption of compound page size
  mm/page_owner: change split_page_owner to take a count

"Kirill A. Shutemov" <kirill@shutemov.name>:
  mm/huge_memory: fix total_mapcount assumption of page size
  mm/huge_memory: fix split assumption of page size

"Matthew Wilcox (Oracle)" <willy@infradead.org>:
  mm/huge_memory: fix page_trans_huge_mapcount assumption of THP size
  mm/huge_memory: fix can_split_huge_page assumption of THP size
  mm/rmap: fix assumptions of THP size
  mm/truncate: fix truncation for pages of arbitrary size
  mm/page-writeback: support tail pages in wait_for_stable_page
  mm/vmscan: allow arbitrary sized pages to be paged out
  fs: add a filesystem flag for THPs
  fs: do not update nr_thps for mappings which support THPs

Huang Ying <ying.huang@intel.com>:
 mm: fix a race during THP splitting

Subsystem: mm/readahead

"Matthew Wilcox (Oracle)" <willy@infradead.org>:
 Patch series "Readahead patches for 5.9/5.10":
  mm/readahead: add DEFINE_READAHEAD
  mm/readahead: make page_cache_ra_unbounded take a readahead_control
  mm/readahead: make do_page_cache_ra take a readahead_control

David Howells <dhowells@redhat.com>:
  mm/readahead: make ondemand_readahead take a readahead_control
  mm/readahead: pass readahead_control to force_page_cache_ra

"Matthew Wilcox (Oracle)" <willy@infradead.org>:
  mm/readahead: add page_cache_sync_ra and page_cache_async_ra

David Howells <dhowells@redhat.com>:
  mm/filemap: fold ra_submit into do_sync_mmap_readahead
  mm/readahead: pass a file_ra_state into force_page_cache_ra

Subsystem: mm/page-poison

Naoya Horiguchi <naoya.horiguchi@nec.com>:
 Patch series "HWPOISON: soft offline rework", v7:
  mm,hwpoison: cleanup unused PageHuge() check
  mm, hwpoison: remove recalculating hpage
  mm,hwpoison-inject: don't pin for hwpoison_filter

Oscar Salvador <osalvador@suse.de>:
  mm,hwpoison: unexport get_hwpoison_page and make it static
  mm,hwpoison: refactor madvise_inject_error
  mm,hwpoison: kill put_hwpoison_page
  mm,hwpoison: unify THP handling for hard and soft offline
  mm,hwpoison: rework soft offline for free pages
  mm,hwpoison: rework soft offline for in-use pages
  mm,hwpoison: refactor soft_offline_huge_page and __soft_offline_page
  mm,hwpoison: return 0 if the page is already poisoned in soft-offline

Naoya Horiguchi <naoya.horiguchi@nec.com>:
  mm,hwpoison: introduce MF_MSG_UNSPLIT_THP
  mm,hwpoison: double-check page count in __get_any_page()

Oscar Salvador <osalvador@suse.de>:
  mm,hwpoison: try to narrow window race for free pages

Mateusz Nosek <mateusznosek0@gmail.com>:
 mm/page_poison.c: replace bool variable with static key

Miaohe Lin <linmiaohe@huawei.com>:
 mm/vmstat.c: use helper macro abs()

Subsystem: mm/util

Bartosz Golaszewski <bgolaszewski@baylibre.com>:
 mm/util.c: update the kerneldoc for kstrdup_const()

Jann Horn <jannh@google.com>:
 mm/mmu_notifier: fix mmget() assert in __mmu_interval_notifier_insert

Subsystem: mm/memory-hotplug

David Hildenbrand <david@redhat.com>:
 Patch series "mm/memory_hotplug: online_pages()/offline_pages() cleanups", v2:
  mm/memory_hotplug: inline __offline_pages() into offline_pages()
  mm/memory_hotplug: enforce section granularity when onlining/offlining
  mm/memory_hotplug: simplify page offlining
  mm/page_alloc: simplify __offline_isolated_pages()
  mm/memory_hotplug: drop nr_isolate_pageblock in offline_pages()
  mm/page_isolation: simplify return value of start_isolate_page_range()
  mm/memory_hotplug: simplify page onlining
  mm/page_alloc: drop stale pageblock comment in memmap_init_zone*()
  mm: pass migratetype into memmap_init_zone() and move_pfn_range_to_zone()
  mm/memory_hotplug: mark pageblocks MIGRATE_ISOLATE while onlining memory
 Patch series "selective merging of system ram resources", v4:
  kernel/resource: make release_mem_region_adjustable() never fail
  kernel/resource: move and rename IORESOURCE_MEM_DRIVER_MANAGED
  mm/memory_hotplug: guard more declarations by CONFIG_MEMORY_HOTPLUG
  mm/memory_hotplug: prepare passing flags to add_memory() and friends
  mm/memory_hotplug: MEMHP_MERGE_RESOURCE to specify merging of System RAM resources
  virtio-mem: try to merge system ram resources
  xen/balloon: try to merge system ram resources
  hv_balloon: try to merge system ram resources
  kernel/resource: make iomem_resource implicit in release_mem_region_adjustable()

Laurent Dufour <ldufour@linux.ibm.com>:
 mm: don't panic when links can't be created in sysfs

David Hildenbrand <david@redhat.com>:
 Patch series "mm: place pages to the freelist tail when onlining and undoing isolation", v2:
  mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag
  mm/page_alloc: place pages to tail in __putback_isolated_page()
  mm/page_alloc: move pages to tail in move_to_free_list()
  mm/page_alloc: place pages to tail in __free_pages_core()
  mm/memory_hotplug: update comment regarding zone shuffling

Subsystem: mm/zram

Douglas Anderson <dianders@chromium.org>:
 zram: failing to decompress is WARN_ON worthy

Subsystem: mm/cleanups

YueHaibing <yuehaibing@huawei.com>:
 mm/slab.h: remove duplicate include

Wei Yang <richard.weiyang@linux.alibaba.com>:
 mm/page_reporting.c: drop stale list head check in page_reporting_cycle

Ira Weiny <ira.weiny@intel.com>:
 mm/highmem.c: clean up endif comments

Yu Zhao <yuzhao@google.com>:
 mm: use self-explanatory macros rather than "2"

Miaohe Lin <linmiaohe@huawei.com>:
 mm: fix some broken comments

Chen Tao <chentao3@hotmail.com>:
 mm: fix some comments formatting

Xiaofei Tan <tanxiaofei@huawei.com>:
 mm/workingset.c: fix some doc warnings

Miaohe Lin <linmiaohe@huawei.com>:
 mm: use helper function put_write_access()

Mike Rapoport <rppt@linux.ibm.com>:
 include/linux/mmzone.h: remove unused early_pfn_valid()

"Matthew Wilcox (Oracle)" <willy@infradead.org>:
 mm: rename page_order() to buddy_order()

Subsystem: misc

Randy Dunlap <rdunlap@infradead.org>:
 fs: configfs: delete repeated words in comments

Andy Shevchenko <andriy.shevchenko@linux.intel.com>:
 kernel.h: split out min()/max() et al. helpers

Subsystem: core-kernel

Liao Pingfang <liao.pingfang@zte.com.cn>:
 kernel/sys.c: replace do_brk with do_brk_flags in comment of prctl_set_mm_map()

Randy Dunlap <rdunlap@infradead.org>:
 kernel/: fix repeated words in comments
 kernel: acct.c: fix some kernel-doc nits

Subsystem: get_maintainer

Joe Perches <joe@perches.com>:
 get_maintainer: add test for file in VCS

Subsystem: MAINTAINERS

Joe Perches <joe@perches.com>:
 get_maintainer: exclude MAINTAINERS file(s) from --git-fallback

Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>:
 MAINTAINERS: jarkko.sakkinen@linux.intel.com -> jarkko@kernel.org

Subsystem: lib

Randy Dunlap <rdunlap@infradead.org>:
 lib: bitmap: delete duplicated words
 lib: libcrc32c: delete duplicated words
 lib: decompress_bunzip2: delete duplicated words
 lib: dynamic_queue_limits: delete duplicated words + fix typo
 lib: earlycpio: delete duplicated words
 lib: radix-tree: delete duplicated words
 lib: syscall: delete duplicated words
 lib: test_sysctl: delete duplicated words
 lib/mpi/mpi-bit.c: fix spello of "functions"

Stephen Boyd <swboyd@chromium.org>:
 lib/idr.c: document calling context for IDA APIs mustn't use locks
 lib/idr.c: document that ida_simple_{get,remove}() are deprecated

Christophe JAILLET <christophe.jaillet@wanadoo.fr>:
 lib/scatterlist.c: avoid a double memset

Miaohe Lin <linmiaohe@huawei.com>:
 lib/percpu_counter.c: use helper macro abs()

Andy Shevchenko <andriy.shevchenko@linux.intel.com>:
 include/linux/list.h: add a macro to test if entry is pointing to the head

Dan Carpenter <dan.carpenter@oracle.com>:
 lib/test_hmm.c: fix an error code in dmirror_allocate_chunk()

Tobias Jordan <kernel@cdqe.de>:
 lib/crc32.c: fix trivial typo in preprocessor condition

Subsystem:
bitops Wei Yang <richard.weiyang@linux.alibaba.com>: bitops: simplify get_count_order_long() bitops: use the same mechanism for get_count_order[_long] Subsystem: checkpatch Jerome Forissier <jerome@forissier.org>: checkpatch: add --kconfig-prefix Joe Perches <joe@perches.com>: checkpatch: move repeated word test checkpatch: add test for comma use that should be semicolon Rikard Falkeborn <rikard.falkeborn@gmail.com>: const_structs.checkpatch: add phy_ops Nicolas Boichat <drinkcat@chromium.org>: checkpatch: warn if trace_printk and friends are called Rikard Falkeborn <rikard.falkeborn@gmail.com>: const_structs.checkpatch: add pinctrl_ops and pinmux_ops Joe Perches <joe@perches.com>: checkpatch: warn on self-assignments checkpatch: allow not using -f with files that are in git Dwaipayan Ray <dwaipayanray1@gmail.com>: checkpatch: extend author Signed-off-by check for split From: header Joe Perches <joe@perches.com>: checkpatch: emit a warning on embedded filenames Dwaipayan Ray <dwaipayanray1@gmail.com>: checkpatch: fix multi-statement macro checks for while blocks. Łukasz Stelmach <l.stelmach@samsung.com>: checkpatch: fix false positive on empty block comment lines Dwaipayan Ray <dwaipayanray1@gmail.com>: checkpatch: add new warnings to author signoff checks. 
Subsystem: binfmt

Chris Kennelly <ckennelly@google.com>:
  Patch series "Selecting Load Addresses According to p_align", v3:
    fs/binfmt_elf: use PT_LOAD p_align values for suitable start address
    tools/testing/selftests: add self-test for verifying load alignment

Jann Horn <jannh@google.com>:
  Patch series "Fix ELF / FDPIC ELF core dumping, and use mmap_lock properly in there", v5:
    binfmt_elf_fdpic: stop using dump_emit() on user pointers on !MMU
    coredump: let dump_emit() bail out on short writes
    coredump: refactor page range dumping into common helper
    coredump: rework elf/elf_fdpic vma_dump_size() into common helper
    binfmt_elf, binfmt_elf_fdpic: use a VMA list snapshot
    mm/gup: take mmap_lock in get_dump_page()
    mm: remove the now-unnecessary mmget_still_valid() hack

Subsystem: ramfs

Matthew Wilcox (Oracle) <willy@infradead.org>:
  ramfs: fix nommu mmap with gaps in the page cache

Subsystem: autofs

Matthew Wilcox <willy@infradead.org>:
  autofs: harden ioctl table

Subsystem: nilfs

Wang Hai <wanghai38@huawei.com>:
  nilfs2: fix some kernel-doc warnings for nilfs2

Subsystem: rapidio

Souptick Joarder <jrdr.linux@gmail.com>:
  rapidio: fix error handling path

Jing Xiangfeng <jingxiangfeng@huawei.com>:
  rapidio: fix the missed put_device() for rio_mport_add_riodev

Subsystem: panic

Alexey Kardashevskiy <aik@ozlabs.ru>:
  panic: dump registers on panic_on_warn

Subsystem: relay

Sudip Mukherjee <sudipm.mukherjee@gmail.com>:
  kernel/relay.c: drop unneeded initialization

Subsystem: kgdb

Ritesh Harjani <riteshh@linux.ibm.com>:
  scripts/gdb/proc: add struct mount & struct super_block addr in lx-mounts command
  scripts/gdb/tasks: add headers and improve spacing format

Subsystem: ubsan

Elena Petrova <lenaptr@google.com>:
  sched.h: drop in_ubsan field when UBSAN is in trap mode

George Popescu <georgepope@android.com>:
  ubsan: introduce CONFIG_UBSAN_LOCAL_BOUNDS for Clang

Subsystem: romfs

Libing Zhou <libing.zhou@nokia-sbell.com>:
  ROMFS: support inode blocks calculation

Subsystem: fault-injection

Albert
van der Linde <alinde@google.com>: Patch series "add fault injection to user memory access", v3: lib, include/linux: add usercopy failure capability lib, uaccess: add failure injection to usercopy functions .mailmap | 1 Documentation/admin-guide/kernel-parameters.txt | 1 Documentation/core-api/xarray.rst | 14 Documentation/fault-injection/fault-injection.rst | 7 MAINTAINERS | 6 arch/ia64/mm/init.c | 4 arch/powerpc/include/asm/book3s/64/pgtable.h | 29 + arch/powerpc/include/asm/nohash/pgtable.h | 5 arch/powerpc/mm/pgtable.c | 5 arch/powerpc/platforms/powernv/memtrace.c | 2 arch/powerpc/platforms/pseries/hotplug-memory.c | 2 drivers/acpi/acpi_memhotplug.c | 3 drivers/base/memory.c | 3 drivers/base/node.c | 33 +- drivers/block/zram/zram_drv.c | 2 drivers/dax/kmem.c | 50 ++- drivers/hv/hv_balloon.c | 4 drivers/infiniband/core/uverbs_main.c | 3 drivers/rapidio/devices/rio_mport_cdev.c | 18 - drivers/s390/char/sclp_cmd.c | 2 drivers/vfio/pci/vfio_pci.c | 38 +- drivers/virtio/virtio_mem.c | 5 drivers/xen/balloon.c | 4 fs/autofs/dev-ioctl.c | 8 fs/binfmt_elf.c | 267 +++------------- fs/binfmt_elf_fdpic.c | 176 ++-------- fs/configfs/dir.c | 2 fs/configfs/file.c | 2 fs/coredump.c | 238 +++++++++++++- fs/ext4/verity.c | 4 fs/f2fs/verity.c | 4 fs/inode.c | 2 fs/nilfs2/bmap.c | 2 fs/nilfs2/cpfile.c | 6 fs/nilfs2/page.c | 1 fs/nilfs2/sufile.c | 4 fs/proc/task_mmu.c | 18 - fs/ramfs/file-nommu.c | 2 fs/romfs/super.c | 1 fs/userfaultfd.c | 28 - include/linux/bitops.h | 13 include/linux/blkdev.h | 1 include/linux/bvec.h | 6 include/linux/coredump.h | 13 include/linux/fault-inject-usercopy.h | 22 + include/linux/fs.h | 28 - include/linux/idr.h | 13 include/linux/ioport.h | 15 include/linux/jiffies.h | 3 include/linux/kernel.h | 150 --------- include/linux/list.h | 29 + include/linux/memory_hotplug.h | 42 +- include/linux/minmax.h | 153 +++++++++ include/linux/mm.h | 5 include/linux/mmzone.h | 17 - include/linux/node.h | 16 include/linux/nodemask.h | 2 include/linux/page-flags.h | 6 
include/linux/page_owner.h | 6 include/linux/pagemap.h | 111 ++++++ include/linux/sched.h | 2 include/linux/sched/mm.h | 25 - include/linux/uaccess.h | 12 include/linux/vmstat.h | 2 include/linux/xarray.h | 22 + include/ras/ras_event.h | 3 kernel/acct.c | 10 kernel/cgroup/cpuset.c | 2 kernel/dma/direct.c | 2 kernel/fork.c | 4 kernel/futex.c | 2 kernel/irq/timings.c | 2 kernel/jump_label.c | 2 kernel/kcsan/encoding.h | 2 kernel/kexec_core.c | 2 kernel/kexec_file.c | 2 kernel/kthread.c | 2 kernel/livepatch/state.c | 2 kernel/panic.c | 12 kernel/pid_namespace.c | 2 kernel/power/snapshot.c | 2 kernel/range.c | 3 kernel/relay.c | 2 kernel/resource.c | 114 +++++-- kernel/smp.c | 2 kernel/sys.c | 2 kernel/user_namespace.c | 2 lib/Kconfig.debug | 7 lib/Kconfig.ubsan | 14 lib/Makefile | 1 lib/bitmap.c | 2 lib/crc32.c | 2 lib/decompress_bunzip2.c | 2 lib/dynamic_queue_limits.c | 4 lib/earlycpio.c | 2 lib/fault-inject-usercopy.c | 39 ++ lib/find_bit.c | 1 lib/hexdump.c | 1 lib/idr.c | 9 lib/iov_iter.c | 5 lib/libcrc32c.c | 2 lib/math/rational.c | 2 lib/math/reciprocal_div.c | 1 lib/mpi/mpi-bit.c | 2 lib/percpu_counter.c | 2 lib/radix-tree.c | 2 lib/scatterlist.c | 2 lib/strncpy_from_user.c | 3 lib/syscall.c | 2 lib/test_hmm.c | 2 lib/test_sysctl.c | 2 lib/test_xarray.c | 65 ++++ lib/usercopy.c | 5 lib/xarray.c | 208 ++++++++++++ mm/Kconfig | 2 mm/compaction.c | 6 mm/debug_vm_pgtable.c | 267 ++++++++-------- mm/filemap.c | 58 ++- mm/gup.c | 73 ++-- mm/highmem.c | 4 mm/huge_memory.c | 47 +- mm/hwpoison-inject.c | 18 - mm/internal.h | 47 +- mm/khugepaged.c | 2 mm/madvise.c | 52 --- mm/memory-failure.c | 357 ++++++++++------------ mm/memory.c | 7 mm/memory_hotplug.c | 223 +++++-------- mm/memremap.c | 3 mm/migrate.c | 11 mm/mmap.c | 7 mm/mmu_notifier.c | 2 mm/page-writeback.c | 1 mm/page_alloc.c | 289 +++++++++++------ mm/page_isolation.c | 16 mm/page_owner.c | 10 mm/page_poison.c | 20 - mm/page_reporting.c | 4 mm/readahead.c | 174 ++++------ mm/rmap.c | 10 mm/shmem.c | 2 
mm/shuffle.c | 2 mm/slab.c | 2 mm/slab.h | 1 mm/slub.c | 2 mm/sparse.c | 2 mm/swap_state.c | 2 mm/truncate.c | 6 mm/util.c | 3 mm/vmscan.c | 5 mm/vmstat.c | 8 mm/workingset.c | 2 scripts/Makefile.ubsan | 10 scripts/checkpatch.pl | 238 ++++++++++---- scripts/const_structs.checkpatch | 3 scripts/gdb/linux/proc.py | 15 scripts/gdb/linux/tasks.py | 9 scripts/get_maintainer.pl | 9 tools/testing/selftests/exec/.gitignore | 1 tools/testing/selftests/exec/Makefile | 9 tools/testing/selftests/exec/load_address.c | 68 ++++ 161 files changed, 2532 insertions(+), 1864 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming
  2020-10-16  2:40 incoming Andrew Morton
@ 2020-10-16  3:03 ` Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2020-10-16 3:03 UTC (permalink / raw)
To: Linus Torvalds, mm-commits, linux-mm

And... I forgot to set in-reply-to :(

Shall resend, omitting linux-mm.
* incoming
@ 2020-10-13 23:46 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2020-10-13 23:46 UTC (permalink / raw)
To: Linus Torvalds; +Cc: mm-commits, linux-mm

181 patches, based on 029f56db6ac248769f2c260bfaf3c3c0e23e904c.

Subsystems affected by this patch series:

  kbuild scripts ntfs ocfs2 vfs mm/slab mm/slub mm/kmemleak mm/dax
  mm/debug mm/pagecache mm/fadvise mm/gup mm/swap mm/memremap mm/memcg
  mm/selftests mm/pagemap mm/mincore mm/hmm mm/dma mm/memory-failure
  mm/vmalloc mm/documentation mm/kasan mm/pagealloc mm/hugetlb
  mm/vmscan mm/z3fold mm/zbud mm/compaction mm/mempolicy mm/mempool
  mm/memblock mm/oom-kill mm/migration

Subsystem: kbuild

Nick Desaulniers <ndesaulniers@google.com>:
  Patch series "set clang minimum version to 10.0.1", v3:
    compiler-clang: add build check for clang 10.0.1
    Revert "kbuild: disable clang's default use of -fmerge-all-constants"
    Revert "arm64: bti: Require clang >= 10.0.1 for in-kernel BTI support"
    Revert "arm64: vdso: Fix compilation with clang older than 8"
    Partially revert "ARM: 8905/1: Emit __gnu_mcount_nc when using Clang 10.0.0 or newer"

Marco Elver <elver@google.com>:
  kasan: remove mentions of unsupported Clang versions

Nick Desaulniers <ndesaulniers@google.com>:
  compiler-gcc: improve version error
  compiler.h: avoid escaped section names
  export.h: fix section name for CONFIG_TRIM_UNUSED_KSYMS for Clang

Lukas Bulwahn <lukas.bulwahn@gmail.com>:
  kbuild: doc: describe proper script invocation

Subsystem: scripts

Wang Qing <wangqing@vivo.com>:
  scripts/spelling.txt: increase error-prone spell checking

Naoki Hayama <naoki.hayama@lineo.co.jp>:
  scripts/spelling.txt: add "arbitrary" typo

Borislav Petkov <bp@suse.de>:
  scripts/decodecode: add the capability to supply the program counter

Subsystem: ntfs

Rustam Kovhaev <rkovhaev@gmail.com>:
  ntfs: add check for mft record size in superblock

Subsystem: ocfs2

Randy Dunlap <rdunlap@infradead.org>:
  ocfs2: delete repeated words in comments

Gang He <ghe@suse.com>:
ocfs2: fix potential soft lockup during fstrim Subsystem: vfs Randy Dunlap <rdunlap@infradead.org>: fs/xattr.c: fix kernel-doc warnings for setxattr & removexattr Luo Jiaxing <luojiaxing@huawei.com>: fs_parse: mark fs_param_bad_value() as static Subsystem: mm/slab Mateusz Nosek <mateusznosek0@gmail.com>: mm/slab.c: clean code by removing redundant if condition tangjianqiang <wyqt1985@gmail.com>: include/linux/slab.h: fix a typo error in comment Subsystem: mm/slub Abel Wu <wuyun.wu@huawei.com>: mm/slub.c: branch optimization in free slowpath mm/slub: fix missing ALLOC_SLOWPATH stat when bulk alloc mm/slub: make add_full() condition more explicit Subsystem: mm/kmemleak Davidlohr Bueso <dave@stgolabs.net>: mm/kmemleak: rely on rcu for task stack scanning Hui Su <sh_def@163.com>: mm,kmemleak-test.c: move kmemleak-test.c to samples dir Subsystem: mm/dax Dan Williams <dan.j.williams@intel.com>: Patch series "device-dax: Support sub-dividing soft-reserved ranges", v5: x86/numa: cleanup configuration dependent command-line options x86/numa: add 'nohmat' option efi/fake_mem: arrange for a resource entry per efi_fake_mem instance ACPI: HMAT: refactor hmat_register_target_device to hmem_register_device resource: report parent to walk_iomem_res_desc() callback mm/memory_hotplug: introduce default phys_to_target_node() implementation ACPI: HMAT: attach a device for each soft-reserved range device-dax: drop the dax_region.pfn_flags attribute device-dax: move instance creation parameters to 'struct dev_dax_data' device-dax: make pgmap optional for instance creation device-dax/kmem: introduce dax_kmem_range() device-dax/kmem: move resource name tracking to drvdata device-dax/kmem: replace release_resource() with release_mem_region() device-dax: add an allocation interface for device-dax instances device-dax: introduce 'struct dev_dax' typed-driver operations device-dax: introduce 'seed' devices drivers/base: make device_find_child_by_name() compatible with sysfs inputs device-dax: 
add resize support mm/memremap_pages: convert to 'struct range' mm/memremap_pages: support multiple ranges per invocation device-dax: add dis-contiguous resource support device-dax: introduce 'mapping' devices Joao Martins <joao.m.martins@oracle.com>: device-dax: make align a per-device property Dan Williams <dan.j.williams@intel.com>: device-dax: add an 'align' attribute Joao Martins <joao.m.martins@oracle.com>: dax/hmem: introduce dax_hmem.region_idle parameter device-dax: add a range mapping allocation attribute Subsystem: mm/debug "Matthew Wilcox (Oracle)" <willy@infradead.org>: mm/debug.c: do not dereference i_ino blindly John Hubbard <jhubbard@nvidia.com>: mm, dump_page: rename head_mapcount() --> head_compound_mapcount() Subsystem: mm/pagecache "Matthew Wilcox (Oracle)" <willy@infradead.org>: Patch series "Return head pages from find_*_entry", v2: mm: factor find_get_incore_page out of mincore_page mm: use find_get_incore_page in memcontrol mm: optimise madvise WILLNEED proc: optimise smaps for shmem entries i915: use find_lock_page instead of find_lock_entry mm: convert find_get_entry to return the head page mm/shmem: return head page from find_lock_entry mm: add find_lock_head mm/filemap: fix filemap_map_pages for THP Subsystem: mm/fadvise Yafang Shao <laoar.shao@gmail.com>: mm, fadvise: improve the expensive remote LRU cache draining after FADV_DONTNEED Subsystem: mm/gup Barry Song <song.bao.hua@hisilicon.com>: mm/gup_benchmark: update the documentation in Kconfig mm/gup_benchmark: use pin_user_pages for FOLL_LONGTERM flag mm/gup: don't permit users to call get_user_pages with FOLL_LONGTERM John Hubbard <jhubbard@nvidia.com>: mm/gup: protect unpin_user_pages() against npages==-ERRNO Subsystem: mm/swap Gao Xiang <hsiangkao@redhat.com>: swap: rename SWP_FS to SWAP_FS_OPS to avoid ambiguity Yu Zhao <yuzhao@google.com>: mm: remove activate_page() from unuse_pte() mm: remove superfluous __ClearPageActive() Miaohe Lin <linmiaohe@huawei.com>: mm/swap.c: fix 
confusing comment in release_pages() mm/swap_slots.c: remove always zero and unused return value of enable_swap_slots_cache() mm/page_io.c: remove useless out label in __swap_writepage() mm/swap.c: fix incomplete comment in lru_cache_add_inactive_or_unevictable() mm/swapfile.c: remove unnecessary goto out in _swap_info_get() mm/swapfile.c: fix potential memory leak in sys_swapon Subsystem: mm/memremap Ira Weiny <ira.weiny@intel.com>: mm/memremap.c: convert devmap static branch to {inc,dec} Subsystem: mm/memcg "Gustavo A. R. Silva" <gustavoars@kernel.org>: mm: memcontrol: use flex_array_size() helper in memcpy() mm: memcontrol: use the preferred form for passing the size of a structure type Roman Gushchin <guro@fb.com>: mm: memcg/slab: fix racy access to page->mem_cgroup in mem_cgroup_from_obj() Miaohe Lin <linmiaohe@huawei.com>: mm: memcontrol: correct the comment of mem_cgroup_iter() Waiman Long <longman@redhat.com>: Patch series "mm/memcg: Miscellaneous cleanups and streamlining", v2: mm/memcg: clean up obsolete enum charge_type mm/memcg: simplify mem_cgroup_get_max() mm/memcg: unify swap and memsw page counters Muchun Song <songmuchun@bytedance.com>: mm: memcontrol: add the missing numa_stat interface for cgroup v2 Miaohe Lin <linmiaohe@huawei.com>: mm/page_counter: correct the obsolete func name in the comment of page_counter_try_charge() mm: memcontrol: reword obsolete comment of mem_cgroup_unmark_under_oom() Bharata B Rao <bharata@linux.ibm.com>: mm: memcg/slab: uncharge during kmem_cache_free_bulk() Ralph Campbell <rcampbell@nvidia.com>: mm/memcg: fix device private memcg accounting Subsystem: mm/selftests John Hubbard <jhubbard@nvidia.com>: Patch series "selftests/vm: fix some minor aggravating factors in the Makefile": selftests/vm: fix false build success on the second and later attempts selftests/vm: fix incorrect gcc invocation in some cases Subsystem: mm/pagemap Matthew Wilcox <willy@infradead.org>: mm: account PMD tables like PTE tables Yanfei Xu 
<yanfei.xu@windriver.com>: mm/memory.c: fix typo in __do_fault() comment mm/memory.c: replace vmf->vma with variable vma Wei Yang <richard.weiyang@linux.alibaba.com>: mm/mmap: rename __vma_unlink_common() to __vma_unlink() mm/mmap: leverage vma_rb_erase_ignore() to implement vma_rb_erase() Chinwen Chang <chinwen.chang@mediatek.com>: Patch series "Try to release mmap_lock temporarily in smaps_rollup", v4: mmap locking API: add mmap_lock_is_contended() mm: smaps*: extend smap_gather_stats to support specified beginning mm: proc: smaps_rollup: do not stall write attempts on mmap_lock "Matthew Wilcox (Oracle)" <willy@infradead.org>: Patch series "Fix PageDoubleMap": mm: move PageDoubleMap bit mm: simplify PageDoubleMap with PF_SECOND policy Wei Yang <richard.weiyang@linux.alibaba.com>: mm/mmap: leave adjust_next as virtual address instead of page frame number Randy Dunlap <rdunlap@infradead.org>: mm/memory.c: fix spello of "function" Wei Yang <richard.weiyang@linux.alibaba.com>: mm/mmap: not necessary to check mapping separately mm/mmap: check on file instead of the rb_root_cached of its address_space Miaohe Lin <linmiaohe@huawei.com>: mm: use helper function mapping_allow_writable() mm/mmap.c: use helper function allow_write_access() in __remove_shared_vm_struct() Liao Pingfang <liao.pingfang@zte.com.cn>: mm/mmap.c: replace do_brk with do_brk_flags in comment of insert_vm_struct() Peter Xu <peterx@redhat.com>: mm: remove src/dst mm parameter in copy_page_range() Subsystem: mm/mincore yuleixzhang <yulei.kernel@gmail.com>: include/linux/huge_mm.h: remove mincore_huge_pmd declaration Subsystem: mm/hmm Ralph Campbell <rcampbell@nvidia.com>: tools/testing/selftests/vm/hmm-tests.c: use the new SKIP() macro lib/test_hmm.c: remove unused dmirror_zero_page Subsystem: mm/dma Andy Shevchenko <andriy.shevchenko@linux.intel.com>: mm/dmapool.c: replace open-coded list_for_each_entry_safe() mm/dmapool.c: replace hard coded function name with __func__ Subsystem: mm/memory-failure 
Xianting Tian <tian.xianting@h3c.com>: mm/memory-failure: do pgoff calculation before for_each_process() Alex Shi <alex.shi@linux.alibaba.com>: mm/memory-failure.c: remove unused macro `writeback' Subsystem: mm/vmalloc Hui Su <sh_def@163.com>: mm/vmalloc.c: update the comment in __vmalloc_area_node() mm/vmalloc.c: fix the comment of find_vm_area Subsystem: mm/documentation Alexander Gordeev <agordeev@linux.ibm.com>: docs/vm: fix 'mm_count' vs 'mm_users' counter confusion Subsystem: mm/kasan Patricia Alfonso <trishalfonso@google.com>: Patch series "KASAN-KUnit Integration", v14: kasan/kunit: add KUnit Struct to Current Task KUnit: KASAN Integration KASAN: port KASAN Tests to KUnit KASAN: Testing Documentation David Gow <davidgow@google.com>: mm: kasan: do not panic if both panic_on_warn and kasan_multishot set Subsystem: mm/pagealloc David Hildenbrand <david@redhat.com>: Patch series "mm / virtio-mem: support ZONE_MOVABLE", v5: mm/page_alloc: tweak comments in has_unmovable_pages() mm/page_isolation: exit early when pageblock is isolated in set_migratetype_isolate() mm/page_isolation: drop WARN_ON_ONCE() in set_migratetype_isolate() mm/page_isolation: cleanup set_migratetype_isolate() virtio-mem: don't special-case ZONE_MOVABLE mm: document semantics of ZONE_MOVABLE Li Xinhai <lixinhai.lxh@gmail.com>: mm, isolation: avoid checking unmovable pages across pageblock boundary Mateusz Nosek <mateusznosek0@gmail.com>: mm/page_alloc.c: clean code by removing unnecessary initialization mm/page_alloc.c: micro-optimization remove unnecessary branch mm/page_alloc.c: fix early params garbage value accesses mm/page_alloc.c: clean code by merging two functions Yanfei Xu <yanfei.xu@windriver.com>: mm/page_alloc.c: __perform_reclaim should return 'unsigned long' Mateusz Nosek <mateusznosek0@gmail.com>: mmzone: clean code by removing unused macro parameter Ralph Campbell <rcampbell@nvidia.com>: mm: move call to compound_head() in release_pages() "Matthew Wilcox (Oracle)" 
<willy@infradead.org>: mm/page_alloc.c: fix freeing non-compound pages Michal Hocko <mhocko@suse.com>: include/linux/gfp.h: clarify usage of GFP_ATOMIC in !preemptible contexts Subsystem: mm/hugetlb Baoquan He <bhe@redhat.com>: Patch series "mm/hugetlb: Small cleanup and improvement", v2: mm/hugetlb.c: make is_hugetlb_entry_hwpoisoned return bool mm/hugetlb.c: remove the unnecessary non_swap_entry() doc/vm: fix typo in the hugetlb admin documentation Wei Yang <richard.weiyang@linux.alibaba.com>: Patch series "mm/hugetlb: code refine and simplification", v4: mm/hugetlb: not necessary to coalesce regions recursively mm/hugetlb: remove VM_BUG_ON(!nrg) in get_file_region_entry_from_cache() mm/hugetlb: use list_splice to merge two list at once mm/hugetlb: count file_region to be added when regions_needed != NULL mm/hugetlb: a page from buddy is not on any list mm/hugetlb: narrow the hugetlb_lock protection area during preparing huge page mm/hugetlb: take the free hpage during the iteration directly Mike Kravetz <mike.kravetz@oracle.com>: hugetlb: add lockdep check for i_mmap_rwsem held in huge_pmd_share Subsystem: mm/vmscan Chunxin Zang <zangchunxin@bytedance.com>: mm/vmscan: fix infinite loop in drop_slab_node Hui Su <sh_def@163.com>: mm/vmscan: fix comments for isolate_lru_page() Subsystem: mm/z3fold Hui Su <sh_def@163.com>: mm/z3fold.c: use xx_zalloc instead xx_alloc and memset Subsystem: mm/zbud Xiang Chen <chenxiang66@hisilicon.com>: mm/zbud: remove redundant initialization Subsystem: mm/compaction Mateusz Nosek <mateusznosek0@gmail.com>: mm/compaction.c: micro-optimization remove unnecessary branch include/linux/compaction.h: clean code by removing unused enum value John Hubbard <jhubbard@nvidia.com>: selftests/vm: 8x compaction_test speedup Subsystem: mm/mempolicy Wei Yang <richard.weiyang@linux.alibaba.com>: mm/mempolicy: remove or narrow the lock on current mm: remove unused alloc_page_vma_node() Subsystem: mm/mempool Miaohe Lin <linmiaohe@huawei.com>: 
mm/mempool: add 'else' to split mutually exclusive case Subsystem: mm/memblock Mike Rapoport <rppt@linux.ibm.com>: Patch series "memblock: seasonal cleaning^w cleanup", v3: KVM: PPC: Book3S HV: simplify kvm_cma_reserve() dma-contiguous: simplify cma_early_percent_memory() arm, xtensa: simplify initialization of high memory pages arm64: numa: simplify dummy_numa_init() h8300, nds32, openrisc: simplify detection of memory extents riscv: drop unneeded node initialization mircoblaze: drop unneeded NUMA and sparsemem initializations memblock: make for_each_memblock_type() iterator private memblock: make memblock_debug and related functionality private memblock: reduce number of parameters in for_each_mem_range() arch, mm: replace for_each_memblock() with for_each_mem_pfn_range() arch, drivers: replace for_each_membock() with for_each_mem_range() x86/setup: simplify initrd relocation and reservation x86/setup: simplify reserve_crashkernel() memblock: remove unused memblock_mem_size() memblock: implement for_each_reserved_mem_region() using __next_mem_region() memblock: use separate iterators for memory and reserved regions Subsystem: mm/oom-kill Suren Baghdasaryan <surenb@google.com>: mm, oom_adj: don't loop through tasks in __set_oom_adj when not necessary Subsystem: mm/migration Ralph Campbell <rcampbell@nvidia.com>: mm/migrate: remove cpages-- in migrate_vma_finalize() mm/migrate: remove obsolete comment about device public .clang-format | 7 Documentation/admin-guide/cgroup-v2.rst | 69 + Documentation/admin-guide/mm/hugetlbpage.rst | 2 Documentation/dev-tools/kasan.rst | 74 + Documentation/dev-tools/kmemleak.rst | 2 Documentation/kbuild/makefiles.rst | 20 Documentation/vm/active_mm.rst | 2 Documentation/x86/x86_64/boot-options.rst | 4 MAINTAINERS | 2 Makefile | 9 arch/arm/Kconfig | 2 arch/arm/include/asm/tlb.h | 1 arch/arm/kernel/setup.c | 18 arch/arm/mm/init.c | 59 - arch/arm/mm/mmu.c | 39 arch/arm/mm/pmsa-v7.c | 23 arch/arm/mm/pmsa-v8.c | 17 arch/arm/xen/mm.c | 7 
arch/arm64/Kconfig | 2 arch/arm64/kernel/machine_kexec_file.c | 6 arch/arm64/kernel/setup.c | 4 arch/arm64/kernel/vdso/Makefile | 7 arch/arm64/mm/init.c | 11 arch/arm64/mm/kasan_init.c | 10 arch/arm64/mm/mmu.c | 11 arch/arm64/mm/numa.c | 15 arch/c6x/kernel/setup.c | 9 arch/h8300/kernel/setup.c | 8 arch/microblaze/mm/init.c | 23 arch/mips/cavium-octeon/dma-octeon.c | 14 arch/mips/kernel/setup.c | 31 arch/mips/netlogic/xlp/setup.c | 2 arch/nds32/kernel/setup.c | 8 arch/openrisc/kernel/setup.c | 9 arch/openrisc/mm/init.c | 8 arch/powerpc/kernel/fadump.c | 61 - arch/powerpc/kexec/file_load_64.c | 16 arch/powerpc/kvm/book3s_hv_builtin.c | 12 arch/powerpc/kvm/book3s_hv_uvmem.c | 14 arch/powerpc/mm/book3s64/hash_utils.c | 16 arch/powerpc/mm/book3s64/radix_pgtable.c | 10 arch/powerpc/mm/kasan/kasan_init_32.c | 8 arch/powerpc/mm/mem.c | 31 arch/powerpc/mm/numa.c | 7 arch/powerpc/mm/pgtable_32.c | 8 arch/riscv/mm/init.c | 36 arch/riscv/mm/kasan_init.c | 10 arch/s390/kernel/setup.c | 27 arch/s390/mm/page-states.c | 6 arch/s390/mm/vmem.c | 7 arch/sh/mm/init.c | 9 arch/sparc/mm/init_64.c | 12 arch/x86/include/asm/numa.h | 8 arch/x86/kernel/e820.c | 16 arch/x86/kernel/setup.c | 56 - arch/x86/mm/numa.c | 13 arch/x86/mm/numa_emulation.c | 3 arch/x86/xen/enlighten_pv.c | 2 arch/xtensa/mm/init.c | 55 - drivers/acpi/numa/hmat.c | 76 - drivers/acpi/numa/srat.c | 9 drivers/base/core.c | 2 drivers/bus/mvebu-mbus.c | 12 drivers/dax/Kconfig | 6 drivers/dax/Makefile | 3 drivers/dax/bus.c | 1237 +++++++++++++++++++++++---- drivers/dax/bus.h | 34 drivers/dax/dax-private.h | 74 + drivers/dax/device.c | 164 +-- drivers/dax/hmem.c | 56 - drivers/dax/hmem/Makefile | 8 drivers/dax/hmem/device.c | 100 ++ drivers/dax/hmem/hmem.c | 93 +- drivers/dax/kmem.c | 236 ++--- drivers/dax/pmem/compat.c | 2 drivers/dax/pmem/core.c | 36 drivers/firmware/efi/x86_fake_mem.c | 12 drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 4 drivers/gpu/drm/nouveau/nouveau_dmem.c | 15 drivers/irqchip/irq-gic-v3-its.c | 2 
drivers/nvdimm/badrange.c | 26 drivers/nvdimm/claim.c | 13 drivers/nvdimm/nd.h | 3 drivers/nvdimm/pfn_devs.c | 13 drivers/nvdimm/pmem.c | 27 drivers/nvdimm/region.c | 21 drivers/pci/p2pdma.c | 12 drivers/virtio/virtio_mem.c | 47 - drivers/xen/unpopulated-alloc.c | 45 fs/fs_parser.c | 2 fs/ntfs/inode.c | 6 fs/ocfs2/alloc.c | 6 fs/ocfs2/localalloc.c | 2 fs/proc/base.c | 3 fs/proc/task_mmu.c | 104 +- fs/xattr.c | 22 include/acpi/acpi_numa.h | 14 include/kunit/test.h | 5 include/linux/acpi.h | 2 include/linux/compaction.h | 3 include/linux/compiler-clang.h | 8 include/linux/compiler-gcc.h | 2 include/linux/compiler.h | 2 include/linux/dax.h | 8 include/linux/export.h | 2 include/linux/fs.h | 4 include/linux/gfp.h | 6 include/linux/huge_mm.h | 3 include/linux/kasan.h | 6 include/linux/memblock.h | 90 + include/linux/memcontrol.h | 13 include/linux/memory_hotplug.h | 23 include/linux/memremap.h | 15 include/linux/mm.h | 36 include/linux/mmap_lock.h | 5 include/linux/mmzone.h | 37 include/linux/numa.h | 11 include/linux/oom.h | 1 include/linux/page-flags.h | 42 include/linux/pagemap.h | 43 include/linux/range.h | 6 include/linux/sched.h | 4 include/linux/sched/coredump.h | 1 include/linux/slab.h | 2 include/linux/swap.h | 10 include/linux/swap_slots.h | 2 kernel/dma/contiguous.c | 11 kernel/fork.c | 25 kernel/resource.c | 11 lib/Kconfig.debug | 9 lib/Kconfig.kasan | 31 lib/Makefile | 5 lib/kunit/test.c | 13 lib/test_free_pages.c | 42 lib/test_hmm.c | 65 - lib/test_kasan.c | 732 ++++++--------- lib/test_kasan_module.c | 111 ++ mm/Kconfig | 4 mm/Makefile | 1 mm/compaction.c | 5 mm/debug.c | 18 mm/dmapool.c | 46 - mm/fadvise.c | 9 mm/filemap.c | 78 - mm/gup.c | 44 mm/gup_benchmark.c | 23 mm/huge_memory.c | 4 mm/hugetlb.c | 100 +- mm/internal.h | 3 mm/kasan/report.c | 34 mm/kmemleak-test.c | 99 -- mm/kmemleak.c | 8 mm/madvise.c | 21 mm/memblock.c | 102 -- mm/memcontrol.c | 262 +++-- mm/memory-failure.c | 5 mm/memory.c | 147 +-- mm/memory_hotplug.c | 10 mm/mempolicy.c | 8 
mm/mempool.c | 18 mm/memremap.c | 344 ++++--- mm/migrate.c | 3 mm/mincore.c | 28 mm/mmap.c | 45 mm/oom_kill.c | 2 mm/page_alloc.c | 82 - mm/page_counter.c | 2 mm/page_io.c | 14 mm/page_isolation.c | 41 mm/shmem.c | 19 mm/slab.c | 4 mm/slab.h | 50 - mm/slub.c | 33 mm/sparse.c | 10 mm/swap.c | 14 mm/swap_slots.c | 3 mm/swap_state.c | 38 mm/swapfile.c | 12 mm/truncate.c | 58 - mm/vmalloc.c | 6 mm/vmscan.c | 5 mm/z3fold.c | 3 mm/zbud.c | 1 samples/Makefile | 1 samples/kmemleak/Makefile | 3 samples/kmemleak/kmemleak-test.c | 99 ++ scripts/decodecode | 29 scripts/spelling.txt | 4 tools/testing/nvdimm/dax-dev.c | 28 tools/testing/nvdimm/test/iomap.c | 2 tools/testing/selftests/vm/Makefile | 17 tools/testing/selftests/vm/compaction_test.c | 11 tools/testing/selftests/vm/gup_benchmark.c | 14 tools/testing/selftests/vm/hmm-tests.c | 4 194 files changed, 4273 insertions(+), 2777 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming
@ 2020-10-11  6:15 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2020-10-11 6:15 UTC (permalink / raw)
To: Linus Torvalds; +Cc: mm-commits, linux-mm

5 patches, based on da690031a5d6d50a361e3f19f3eeabd086a6f20d.

Subsystems affected by this patch series:

  MAINTAINERS mm/pagemap mm/swap mm/hugetlb

Subsystem: MAINTAINERS

Kees Cook <keescook@chromium.org>:
  MAINTAINERS: change hardening mailing list

Antoine Tenart <atenart@kernel.org>:
  MAINTAINERS: Antoine Tenart's email address

Subsystem: mm/pagemap

Miaohe Lin <linmiaohe@huawei.com>:
  mm: mmap: Fix general protection fault in unlink_file_vma()

Subsystem: mm/swap

Minchan Kim <minchan@kernel.org>:
  mm: validate inode in mapping_set_error()

Subsystem: mm/hugetlb

Vijay Balakrishna <vijayb@linux.microsoft.com>:
  mm: khugepaged: recalculate min_free_kbytes after memory hotplug as expected by khugepaged

 .mailmap                   |  4 +++-
 MAINTAINERS                |  8 ++++----
 include/linux/khugepaged.h |  5 +++++
 include/linux/pagemap.h    |  3 ++-
 mm/khugepaged.c            | 13 +++++++++++--
 mm/mmap.c                  |  6 +++++-
 mm/page_alloc.c            |  3 +++
 7 files changed, 33 insertions(+), 9 deletions(-)
* incoming @ 2020-10-03 5:20 Andrew Morton
From: Andrew Morton @ 2020-10-03 5:20 UTC
To: Linus Torvalds; +Cc: linux-mm, mm-commits

3 patches, based on d3d45f8220d60a0b2aaaacf8fb2be4e6ffd9008e.

Subsystems affected by this patch series: mm/slub mm/cma scripts

Subsystem: mm/slub

  Eric Farman <farman@linux.ibm.com>:
    mm, slub: restore initial kmem_cache flags

Subsystem: mm/cma

  Joonsoo Kim <iamjoonsoo.kim@lge.com>:
    mm/page_alloc: handle a missing case for memalloc_nocma_{save/restore} APIs

Subsystem: scripts

  Eric Biggers <ebiggers@google.com>:
    scripts/spelling.txt: fix malformed entry

 mm/page_alloc.c | 19 ++++++++++++++++---
 mm/slub.c | 6 +-----
 scripts/spelling.txt | 2 +-
 3 files changed, 18 insertions(+), 9 deletions(-)
* incoming @ 2020-09-26 4:17 Andrew Morton
From: Andrew Morton @ 2020-09-26 4:17 UTC
To: Linus Torvalds; +Cc: mm-commits, linux-mm

9 patches, based on 7c7ec3226f5f33f9c050d85ec20f18419c622ad6.

Subsystems affected by this patch series: mm/thp mm/memcg mm/gup mm/migration lib x86 mm/memory-hotplug

Subsystem: mm/thp

  Gao Xiang <hsiangkao@redhat.com>:
    mm, THP, swap: fix allocating cluster for swapfile by mistake

Subsystem: mm/memcg

  Muchun Song <songmuchun@bytedance.com>:
    mm: memcontrol: fix missing suffix of workingset_restore

Subsystem: mm/gup

  Vasily Gorbik <gor@linux.ibm.com>:
    mm/gup: fix gup_fast with dynamic page table folding

Subsystem: mm/migration

  Zi Yan <ziy@nvidia.com>:
    mm/migrate: correct thp migration stats

Subsystem: lib

  Nick Desaulniers <ndesaulniers@google.com>:
    lib/string.c: implement stpcpy

  Jason Yan <yanaijie@huawei.com>:
    lib/memregion.c: include memregion.h

Subsystem: x86

  Mikulas Patocka <mpatocka@redhat.com>:
    arch/x86/lib/usercopy_64.c: fix __copy_user_flushcache() cache writeback

Subsystem: mm/memory-hotplug

  Laurent Dufour <ldufour@linux.ibm.com>:
    Patch series "mm: fix memory to node bad links in sysfs", v3:
      mm: replace memmap_context by meminit_context
      mm: don't rely on system state to detect hot-plug operations

 Documentation/admin-guide/cgroup-v2.rst | 25 ++++++---
 arch/ia64/mm/init.c | 6 +-
 arch/s390/include/asm/pgtable.h | 42 +++++++++++----
 arch/x86/lib/usercopy_64.c | 2
 drivers/base/node.c | 85 ++++++++++++++++++++------------
 include/linux/mm.h | 2
 include/linux/mmzone.h | 11 +++-
 include/linux/node.h | 11 ++--
 include/linux/pgtable.h | 10 +++
 lib/memregion.c | 1
 lib/string.c | 24 +++++++++
 mm/gup.c | 18 +++---
 mm/memcontrol.c | 4 -
 mm/memory_hotplug.c | 5 +
 mm/migrate.c | 7 +-
 mm/page_alloc.c | 10 +--
 mm/swapfile.c | 2
 17 files changed, 181 insertions(+), 84 deletions(-)
* incoming @ 2020-09-19 4:19 Andrew Morton
From: Andrew Morton @ 2020-09-19 4:19 UTC
To: Linus Torvalds; +Cc: mm-commits, linux-mm

15 patches, based on 92ab97adeefccf375de7ebaad9d5b75d4125fe8b.

Subsystems affected by this patch series: mailmap mm/hotfixes mm/thp mm/memory-hotplug misc kcsan

Subsystem: mailmap

  Kees Cook <keescook@chromium.org>:
    mailmap: add older email addresses for Kees Cook

Subsystem: mm/hotfixes

  Hugh Dickins <hughd@google.com>:
    Patch series "mm: fixes to past from future testing":
      ksm: reinstate memcg charge on copied pages
      mm: migration of hugetlbfs page skip memcg
      shmem: shmem_writepage() split unlikely i915 THP
      mm: fix check_move_unevictable_pages() on THP
      mlock: fix unevictable_pgs event counts on THP

  Byron Stanoszek <gandalf@winds.org>:
    tmpfs: restore functionality of nr_inodes=0

  Muchun Song <songmuchun@bytedance.com>:
    kprobes: fix kill kprobe which has been marked as gone

Subsystem: mm/thp

  Ralph Campbell <rcampbell@nvidia.com>:
    mm/thp: fix __split_huge_pmd_locked() for migration PMD

  Christophe Leroy <christophe.leroy@csgroup.eu>:
    selftests/vm: fix display of page size in map_hugetlb

Subsystem: mm/memory-hotplug

  Pavel Tatashin <pasha.tatashin@soleen.com>:
    mm/memory_hotplug: drain per-cpu pages again during memory offline

Subsystem: misc

  Tobias Klauser <tklauser@distanz.ch>:
    ftrace: let ftrace_enable_sysctl take a kernel pointer buffer
    stackleak: let stack_erasing_sysctl take a kernel pointer buffer
    fs/fs-writeback.c: adjust dirtytime_interval_handler definition to match prototype

Subsystem: kcsan

  Changbin Du <changbin.du@gmail.com>:
    kcsan: kconfig: move to menu 'Generic Kernel Debugging Instruments'

 .mailmap | 4 ++
 fs/fs-writeback.c | 2 -
 include/linux/ftrace.h | 3 --
 include/linux/stackleak.h | 2 -
 kernel/kprobes.c | 9 +++++-
 kernel/stackleak.c | 2 -
 kernel/trace/ftrace.c | 3 --
 lib/Kconfig.debug | 4 --
 mm/huge_memory.c | 42 ++++++++++++++++---------------
 mm/ksm.c | 4 ++
 mm/memory_hotplug.c | 14 ++++++++++
 mm/migrate.c | 3 +-
 mm/mlock.c | 24 +++++++++++------
 mm/page_isolation.c | 8 +++++
 mm/shmem.c | 20 +++++++++++---
 mm/swap.c | 6 ++--
 mm/vmscan.c | 10 +++++--
 tools/testing/selftests/vm/map_hugetlb.c | 2 -
 18 files changed, 111 insertions(+), 51 deletions(-)
* incoming @ 2020-09-04 23:34 Andrew Morton
From: Andrew Morton @ 2020-09-04 23:34 UTC
To: Linus Torvalds; +Cc: mm-commits, linux-mm

19 patches, based on 59126901f200f5fc907153468b03c64e0081b6e6.

Subsystems affected by this patch series: mm/memcg mm/slub MAINTAINERS mm/pagemap ipc fork checkpatch mm/madvise mm/migration mm/hugetlb lib

Subsystem: mm/memcg

  Michal Hocko <mhocko@suse.com>:
    memcg: fix use-after-free in uncharge_batch

  Xunlei Pang <xlpang@linux.alibaba.com>:
    mm: memcg: fix memcg reclaim soft lockup

Subsystem: mm/slub

  Eugeniu Rosca <erosca@de.adit-jv.com>:
    mm: slub: fix conversion of freelist_corrupted()

Subsystem: MAINTAINERS

  Robert Richter <rric@kernel.org>:
    MAINTAINERS: update Cavium/Marvell entries

  Nick Desaulniers <ndesaulniers@google.com>:
    MAINTAINERS: add LLVM maintainers

  Randy Dunlap <rdunlap@infradead.org>:
    MAINTAINERS: IA64: mark Status as Odd Fixes only

Subsystem: mm/pagemap

  Joerg Roedel <jroedel@suse.de>:
    mm: track page table modifications in __apply_to_page_range()

Subsystem: ipc

  Tobias Klauser <tklauser@distanz.ch>:
    ipc: adjust proc_ipc_sem_dointvec definition to match prototype

Subsystem: fork

  Tobias Klauser <tklauser@distanz.ch>:
    fork: adjust sysctl_max_threads definition to match prototype

Subsystem: checkpatch

  Mrinal Pandey <mrinalmni@gmail.com>:
    checkpatch: fix the usage of capture group ( ... )

Subsystem: mm/madvise

  Yang Shi <shy828301@gmail.com>:
    mm: madvise: fix vma user-after-free

Subsystem: mm/migration

  Alistair Popple <alistair@popple.id.au>:
    mm/migrate: fixup setting UFFD_WP flag
    mm/rmap: fixup copying of soft dirty and uffd ptes

  Ralph Campbell <rcampbell@nvidia.com>:
    Patch series "mm/migrate: preserve soft dirty in remove_migration_pte()":
      mm/migrate: remove unnecessary is_zone_device_page() check
      mm/migrate: preserve soft dirty in remove_migration_pte()

Subsystem: mm/hugetlb

  Li Xinhai <lixinhai.lxh@gmail.com>:
    mm/hugetlb: try preferred node first when alloc gigantic page from cma

  Muchun Song <songmuchun@bytedance.com>:
    mm/hugetlb: fix a race between hugetlb sysctl handlers

  David Howells <dhowells@redhat.com>:
    mm/khugepaged.c: fix khugepaged's request size in collapse_file

Subsystem: lib

  Jason Gunthorpe <jgg@nvidia.com>:
    include/linux/log2.h: add missing () around n in roundup_pow_of_two()

 MAINTAINERS | 32 ++++++++++++++++----------------
 include/linux/log2.h | 2 +-
 ipc/ipc_sysctl.c | 2 +-
 kernel/fork.c | 2 +-
 mm/hugetlb.c | 49 +++++++++++++++++++++++++++++++++++++------------
 mm/khugepaged.c | 2 +-
 mm/madvise.c | 2 +-
 mm/memcontrol.c | 6 ++++++
 mm/memory.c | 37 ++++++++++++++++++++++++-------------
 mm/migrate.c | 31 +++++++++++++++++++------------
 mm/rmap.c | 9 +++++++--
 mm/slub.c | 12 ++++++------
 mm/vmscan.c | 8 ++++++++
 scripts/checkpatch.pl | 4 ++--
 14 files changed, 130 insertions(+), 68 deletions(-)
* incoming @ 2020-08-21 0:41 Andrew Morton
From: Andrew Morton @ 2020-08-21 0:41 UTC
To: Linus Torvalds; +Cc: mm-commits, linux-mm

11 patches, based on 7eac66d0456fe12a462e5c14c68e97c7460989da.

Subsystems affected by this patch series: misc mm/hugetlb mm/vmalloc mm/misc romfs relay uprobes squashfs mm/cma mm/pagealloc

Subsystem: misc

  Nick Desaulniers <ndesaulniers@google.com>:
    mailmap: add Andi Kleen

Subsystem: mm/hugetlb

  Xu Wang <vulab@iscas.ac.cn>:
    hugetlb_cgroup: convert comma to semicolon

  Hugh Dickins <hughd@google.com>:
    khugepaged: adjust VM_BUG_ON_MM() in __khugepaged_enter()

Subsystem: mm/vmalloc

  "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>:
    mm/vunmap: add cond_resched() in vunmap_pmd_range

Subsystem: mm/misc

  Leon Romanovsky <leonro@nvidia.com>:
    mm/rodata_test.c: fix missing function declaration

Subsystem: romfs

  Jann Horn <jannh@google.com>:
    romfs: fix uninitialized memory leak in romfs_dev_read()

Subsystem: relay

  Wei Yongjun <weiyongjun1@huawei.com>:
    kernel/relay.c: fix memleak on destroy relay channel

Subsystem: uprobes

  Hugh Dickins <hughd@google.com>:
    uprobes: __replace_page() avoid BUG in munlock_vma_page()

Subsystem: squashfs

  Phillip Lougher <phillip@squashfs.org.uk>:
    squashfs: avoid bio_alloc() failure with 1Mbyte blocks

Subsystem: mm/cma

  Doug Berger <opendmb@gmail.com>:
    mm: include CMA pages in lowmem_reserve at boot

Subsystem: mm/pagealloc

  Charan Teja Reddy <charante@codeaurora.org>:
    mm, page_alloc: fix core hung in free_pcppages_bulk()

 .mailmap | 1 +
 fs/romfs/storage.c | 4 +---
 fs/squashfs/block.c | 6 +++++-
 kernel/events/uprobes.c | 2 +-
 kernel/relay.c | 1 +
 mm/hugetlb_cgroup.c | 4 ++--
 mm/khugepaged.c | 2 +-
 mm/page_alloc.c | 7 ++++++-
 mm/rodata_test.c | 1 +
 mm/vmalloc.c | 2 ++
 10 files changed, 21 insertions(+), 9 deletions(-)
* incoming @ 2020-08-15 0:29 Andrew Morton
From: Andrew Morton @ 2020-08-15 0:29 UTC
To: Linus Torvalds; +Cc: linux-mm, mm-commits

39 patches, based on b923f1247b72fc100b87792fd2129d026bb10e66.

Subsystems affected by this patch series: mm/hotfixes lz4 exec mailmap mm/thp autofs mm/madvise sysctl mm/kmemleak mm/misc lib

Subsystem: mm/hotfixes

  Mike Rapoport <rppt@linux.ibm.com>:
    asm-generic: pgalloc.h: use correct #ifdef to enable pud_alloc_one()

  Baoquan He <bhe@redhat.com>:
    Revert "mm/vmstat.c: do not show lowmem reserve protection information of empty zone"

Subsystem: lz4

  Nick Terrell <terrelln@fb.com>:
    lz4: fix kernel decompression speed

Subsystem: exec

  Kees Cook <keescook@chromium.org>:
    Patch series "Fix S_ISDIR execve() errno":
      exec: restore EACCES of S_ISDIR execve()
      selftests/exec: add file type errno tests

Subsystem: mailmap

  Greg Kurz <groug@kaod.org>:
    mailmap: add entry for Greg Kurz

Subsystem: mm/thp

  "Matthew Wilcox (Oracle)" <willy@infradead.org>:
    Patch series "THP prep patches":
      mm: store compound_nr as well as compound_order
      mm: move page-flags include to top of file
      mm: add thp_order
      mm: add thp_size
      mm: replace hpage_nr_pages with thp_nr_pages
      mm: add thp_head
      mm: introduce offset_in_thp

Subsystem: autofs

  Randy Dunlap <rdunlap@infradead.org>:
    fs: autofs: delete repeated words in comments

Subsystem: mm/madvise

  Minchan Kim <minchan@kernel.org>:
    Patch series "introduce memory hinting API for external process", v8:
      mm/madvise: pass task and mm to do_madvise
      pid: move pidfd_get_pid() to pid.c
      mm/madvise: introduce process_madvise() syscall: an external memory hinting API
      mm/madvise: check fatal signal pending of target process

Subsystem: sysctl

  Xiaoming Ni <nixiaoming@huawei.com>:
    all arch: remove system call sys_sysctl

Subsystem: mm/kmemleak

  Qian Cai <cai@lca.pw>:
    mm/kmemleak: silence KCSAN splats in checksum

Subsystem: mm/misc

  Qian Cai <cai@lca.pw>:
    mm/frontswap: mark various intentional data races
    mm/page_io: mark various intentional data races
    mm/swap_state: mark various intentional data races

  Kirill A. Shutemov <kirill@shutemov.name>:
    mm/filemap.c: fix a data race in filemap_fault()

  Qian Cai <cai@lca.pw>:
    mm/swapfile: fix and annotate various data races
    mm/page_counter: fix various data races at memsw
    mm/memcontrol: fix a data race in scan count
    mm/list_lru: fix a data race in list_lru_count_one
    mm/mempool: fix a data race in mempool_free()
    mm/rmap: annotate a data race at tlb_flush_batched
    mm/swap.c: annotate data races for lru_rotate_pvecs
    mm: annotate a data race in page_zonenum()

  Romain Naour <romain.naour@gmail.com>:
    include/asm-generic/vmlinux.lds.h: align ro_after_init

  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>:
    sh: clkfwk: remove r8/r16/r32
    sh: use generic strncpy()

Subsystem: lib

  Krzysztof Kozlowski <krzk@kernel.org>:
    Patch series "iomap: Constify ioreadX() iomem argument", v3:
      iomap: constify ioreadX() iomem argument (as in generic implementation)
      rtl818x: constify ioreadX() iomem argument (as in generic implementation)
      ntb: intel: constify ioreadX() iomem argument (as in generic implementation)
      virtio: pci: constify ioreadX() iomem argument (as in generic implementation)

 .mailmap | 1
 arch/alpha/include/asm/core_apecs.h | 6
 arch/alpha/include/asm/core_cia.h | 6
 arch/alpha/include/asm/core_lca.h | 6
 arch/alpha/include/asm/core_marvel.h | 4
 arch/alpha/include/asm/core_mcpcia.h | 6
 arch/alpha/include/asm/core_t2.h | 2
 arch/alpha/include/asm/io.h | 12 -
 arch/alpha/include/asm/io_trivial.h | 16 -
 arch/alpha/include/asm/jensen.h | 2
 arch/alpha/include/asm/machvec.h | 6
 arch/alpha/kernel/core_marvel.c | 2
 arch/alpha/kernel/io.c | 12 -
 arch/alpha/kernel/syscalls/syscall.tbl | 3
 arch/arm/configs/am200epdkit_defconfig | 1
 arch/arm/tools/syscall.tbl | 3
 arch/arm64/include/asm/unistd.h | 2
 arch/arm64/include/asm/unistd32.h | 6
 arch/ia64/kernel/syscalls/syscall.tbl | 3
 arch/m68k/kernel/syscalls/syscall.tbl | 3
 arch/microblaze/kernel/syscalls/syscall.tbl | 3
 arch/mips/configs/cu1000-neo_defconfig | 1
 arch/mips/kernel/syscalls/syscall_n32.tbl | 3
 arch/mips/kernel/syscalls/syscall_n64.tbl | 3
 arch/mips/kernel/syscalls/syscall_o32.tbl | 3
 arch/parisc/include/asm/io.h | 4
 arch/parisc/kernel/syscalls/syscall.tbl | 3
 arch/parisc/lib/iomap.c | 72 +++---
 arch/powerpc/kernel/iomap.c | 28 +-
 arch/powerpc/kernel/syscalls/syscall.tbl | 3
 arch/s390/kernel/syscalls/syscall.tbl | 3
 arch/sh/configs/dreamcast_defconfig | 1
 arch/sh/configs/espt_defconfig | 1
 arch/sh/configs/hp6xx_defconfig | 1
 arch/sh/configs/landisk_defconfig | 1
 arch/sh/configs/lboxre2_defconfig | 1
 arch/sh/configs/microdev_defconfig | 1
 arch/sh/configs/migor_defconfig | 1
 arch/sh/configs/r7780mp_defconfig | 1
 arch/sh/configs/r7785rp_defconfig | 1
 arch/sh/configs/rts7751r2d1_defconfig | 1
 arch/sh/configs/rts7751r2dplus_defconfig | 1
 arch/sh/configs/se7206_defconfig | 1
 arch/sh/configs/se7343_defconfig | 1
 arch/sh/configs/se7619_defconfig | 1
 arch/sh/configs/se7705_defconfig | 1
 arch/sh/configs/se7750_defconfig | 1
 arch/sh/configs/se7751_defconfig | 1
 arch/sh/configs/secureedge5410_defconfig | 1
 arch/sh/configs/sh03_defconfig | 1
 arch/sh/configs/sh7710voipgw_defconfig | 1
 arch/sh/configs/sh7757lcr_defconfig | 1
 arch/sh/configs/sh7763rdp_defconfig | 1
 arch/sh/configs/shmin_defconfig | 1
 arch/sh/configs/titan_defconfig | 1
 arch/sh/include/asm/string_32.h | 26 --
 arch/sh/kernel/iomap.c | 22 -
 arch/sh/kernel/syscalls/syscall.tbl | 3
 arch/sparc/kernel/syscalls/syscall.tbl | 3
 arch/x86/entry/syscalls/syscall_32.tbl | 3
 arch/x86/entry/syscalls/syscall_64.tbl | 4
 arch/xtensa/kernel/syscalls/syscall.tbl | 3
 drivers/mailbox/bcm-pdc-mailbox.c | 2
 drivers/net/wireless/realtek/rtl818x/rtl8180/rtl8180.h | 6
 drivers/ntb/hw/intel/ntb_hw_gen1.c | 2
 drivers/ntb/hw/intel/ntb_hw_gen3.h | 2
 drivers/ntb/hw/intel/ntb_hw_intel.h | 2
 drivers/nvdimm/btt.c | 4
 drivers/nvdimm/pmem.c | 6
 drivers/sh/clk/cpg.c | 25 --
 drivers/virtio/virtio_pci_modern.c | 6
 fs/autofs/dev-ioctl.c | 4
 fs/io_uring.c | 2
 fs/namei.c | 4
 include/asm-generic/iomap.h | 28 +-
 include/asm-generic/pgalloc.h | 2
 include/asm-generic/vmlinux.lds.h | 1
 include/linux/compat.h | 5
 include/linux/huge_mm.h | 58 ++++-
 include/linux/io-64-nonatomic-hi-lo.h | 4
 include/linux/io-64-nonatomic-lo-hi.h | 4
 include/linux/memcontrol.h | 2
 include/linux/mm.h | 16 -
 include/linux/mm_inline.h | 6
 include/linux/mm_types.h | 1
 include/linux/pagemap.h | 6
 include/linux/pid.h | 1
 include/linux/syscalls.h | 4
 include/linux/sysctl.h | 6
 include/uapi/asm-generic/unistd.h | 4
 kernel/Makefile | 2
 kernel/exit.c | 17 -
 kernel/pid.c | 17 +
 kernel/sys_ni.c | 3
 kernel/sysctl_binary.c | 171 --------------
 lib/iomap.c | 30 +-
 lib/lz4/lz4_compress.c | 4
 lib/lz4/lz4_decompress.c | 18 -
 lib/lz4/lz4defs.h | 10
 lib/lz4/lz4hc_compress.c | 2
 mm/compaction.c | 2
 mm/filemap.c | 22 +
 mm/frontswap.c | 8
 mm/gup.c | 2
 mm/internal.h | 4
 mm/kmemleak.c | 2
 mm/list_lru.c | 2
 mm/madvise.c | 190 ++++++++++++++--
 mm/memcontrol.c | 10
 mm/memory.c | 4
 mm/memory_hotplug.c | 7
 mm/mempolicy.c | 2
 mm/mempool.c | 2
 mm/migrate.c | 18 -
 mm/mlock.c | 9
 mm/page_alloc.c | 5
 mm/page_counter.c | 13 -
 mm/page_io.c | 12 -
 mm/page_vma_mapped.c | 6
 mm/rmap.c | 10
 mm/swap.c | 21 -
 mm/swap_state.c | 10
 mm/swapfile.c | 33 +-
 mm/vmscan.c | 6
 mm/vmstat.c | 12 -
 mm/workingset.c | 6
 tools/perf/arch/powerpc/entry/syscalls/syscall.tbl | 2
 tools/perf/arch/s390/entry/syscalls/syscall.tbl | 2
 tools/perf/arch/x86/entry/syscalls/syscall_64.tbl | 2
 tools/testing/selftests/exec/.gitignore | 1
 tools/testing/selftests/exec/Makefile | 5
 tools/testing/selftests/exec/non-regular.c | 196 +++++++++++++++++
 132 files changed, 815 insertions(+), 614 deletions(-)
* incoming @ 2020-08-12 1:29 Andrew Morton
From: Andrew Morton @ 2020-08-12 1:29 UTC
To: Linus Torvalds; +Cc: mm-commits, linux-mm

- Most of the rest of MM
- various other subsystems

165 patches, based on 00e4db51259a5f936fec1424b884f029479d3981.

Subsystems affected by this patch series: mm/memcg mm/hugetlb mm/vmscan mm/proc mm/compaction mm/mempolicy mm/oom-kill mm/hugetlbfs mm/migration mm/thp mm/cma mm/util mm/memory-hotplug mm/cleanups mm/uaccess alpha misc sparse bitmap lib lz4 bitops checkpatch autofs minix nilfs ufs fat signals kmod coredump exec kdump rapidio panic kcov kgdb ipc mm/migration mm/gup mm/pagemap

Subsystem: mm/memcg

  Roman Gushchin <guro@fb.com>:
    Patch series "mm: memcg accounting of percpu memory", v3:
      percpu: return number of released bytes from pcpu_free_area()
      mm: memcg/percpu: account percpu memory to memory cgroups
      mm: memcg/percpu: per-memcg percpu memory statistics
      mm: memcg: charge memcg percpu memory to the parent cgroup
      kselftests: cgroup: add perpcu memory accounting test

Subsystem: mm/hugetlb

  Muchun Song <songmuchun@bytedance.com>:
    mm/hugetlb: add mempolicy check in the reservation routine

Subsystem: mm/vmscan

  Joonsoo Kim <iamjoonsoo.kim@lge.com>:
    Patch series "workingset protection/detection on the anonymous LRU list", v7:
      mm/vmscan: make active/inactive ratio as 1:1 for anon lru
      mm/vmscan: protect the workingset on anonymous LRU
      mm/workingset: prepare the workingset detection infrastructure for anon LRU
      mm/swapcache: support to handle the shadow entries
      mm/swap: implement workingset detection for anonymous LRU
      mm/vmscan: restore active/inactive ratio for anonymous LRU

Subsystem: mm/proc

  Michal Koutný <mkoutny@suse.com>:
    /proc/PID/smaps: consistent whitespace output format

Subsystem: mm/compaction

  Nitin Gupta <nigupta@nvidia.com>:
    mm: proactive compaction
    mm: fix compile error due to COMPACTION_HPAGE_ORDER
    mm: use unsigned types for fragmentation score

  Alex Shi <alex.shi@linux.alibaba.com>:
    mm/compaction: correct the comments of compact_defer_shift

Subsystem: mm/mempolicy

  Krzysztof Kozlowski <krzk@kernel.org>:
    mm: mempolicy: fix kerneldoc of numa_map_to_online_node()

  Wenchao Hao <haowenchao22@gmail.com>:
    mm/mempolicy.c: check parameters first in kernel_get_mempolicy

  Yanfei Xu <yanfei.xu@windriver.com>:
    include/linux/mempolicy.h: fix typo

Subsystem: mm/oom-kill

  Yafang Shao <laoar.shao@gmail.com>:
    mm, oom: make the calculation of oom badness more accurate

  Michal Hocko <mhocko@suse.com>:
    doc, mm: sync up oom_score_adj documentation
    doc, mm: clarify /proc/<pid>/oom_score value range

  Yafang Shao <laoar.shao@gmail.com>:
    mm, oom: show process exiting information in __oom_kill_process()

Subsystem: mm/hugetlbfs

  Mike Kravetz <mike.kravetz@oracle.com>:
    hugetlbfs: prevent filesystem stacking of hugetlbfs
    hugetlbfs: remove call to huge_pte_alloc without i_mmap_rwsem

Subsystem: mm/migration

  Ralph Campbell <rcampbell@nvidia.com>:
    Patch series "mm/migrate: optimize migrate_vma_setup() for holes":
      mm/migrate: optimize migrate_vma_setup() for holes
      mm/migrate: add migrate-shared test for migrate_vma_*()

Subsystem: mm/thp

  Yang Shi <yang.shi@linux.alibaba.com>:
    mm: thp: remove debug_cow switch

  Anshuman Khandual <anshuman.khandual@arm.com>:
    mm/vmstat: add events for THP migration without split

Subsystem: mm/cma

  Jianqun Xu <jay.xu@rock-chips.com>:
    mm/cma.c: fix NULL pointer dereference when cma could not be activated

  Barry Song <song.bao.hua@hisilicon.com>:
    Patch series "mm: fix the names of general cma and hugetlb cma", v2:
      mm: cma: fix the name of CMA areas
      mm: hugetlb: fix the name of hugetlb CMA

  Mike Kravetz <mike.kravetz@oracle.com>:
    cma: don't quit at first error when activating reserved areas

Subsystem: mm/util

  Waiman Long <longman@redhat.com>:
    include/linux/sched/mm.h: optimize current_gfp_context()

  Krzysztof Kozlowski <krzk@kernel.org>:
    mm: mmu_notifier: fix and extend kerneldoc

Subsystem: mm/memory-hotplug

  Daniel Jordan <daniel.m.jordan@oracle.com>:
    x86/mm: use max memory block size on bare metal

  Jia He <justin.he@arm.com>:
    mm/memory_hotplug: introduce default dummy memory_add_physaddr_to_nid()
    mm/memory_hotplug: fix unpaired mem_hotplug_begin/done

  Charan Teja Reddy <charante@codeaurora.org>:
    mm, memory_hotplug: update pcp lists everytime onlining a memory block

Subsystem: mm/cleanups

  Randy Dunlap <rdunlap@infradead.org>:
    mm: drop duplicated words in <linux/pgtable.h>
    mm: drop duplicated words in <linux/mm.h>
    include/linux/highmem.h: fix duplicated words in a comment
    include/linux/frontswap.h: drop duplicated word in a comment
    include/linux/memcontrol.h: drop duplicate word and fix spello

  Arvind Sankar <nivedita@alum.mit.edu>:
    sh/mm: drop unused MAX_PHYSADDR_BITS
    sparc: drop unused MAX_PHYSADDR_BITS

  Randy Dunlap <rdunlap@infradead.org>:
    mm/compaction.c: delete duplicated word
    mm/filemap.c: delete duplicated word
    mm/hmm.c: delete duplicated word
    mm/hugetlb.c: delete duplicated words
    mm/memcontrol.c: delete duplicated words
    mm/memory.c: delete duplicated words
    mm/migrate.c: delete duplicated word
    mm/nommu.c: delete duplicated words
    mm/page_alloc.c: delete or fix duplicated words
    mm/shmem.c: delete duplicated word
    mm/slab_common.c: delete duplicated word
    mm/usercopy.c: delete duplicated word
    mm/vmscan.c: delete or fix duplicated words
    mm/zpool.c: delete duplicated word and fix grammar
    mm/zsmalloc.c: fix duplicated words

Subsystem: mm/uaccess

  Christoph Hellwig <hch@lst.de>:
    Patch series "clean up address limit helpers", v2:
      syscalls: use uaccess_kernel in addr_limit_user_check
      nds32: use uaccess_kernel in show_regs
      riscv: include <asm/pgtable.h> in <asm/uaccess.h>
      uaccess: remove segment_eq
      uaccess: add force_uaccess_{begin,end} helpers
      exec: use force_uaccess_begin during exec and exit

Subsystem: alpha

  Luc Van Oostenryck <luc.vanoostenryck@gmail.com>:
    alpha: fix annotation of io{read,write}{16,32}be()

Subsystem: misc

  Randy Dunlap <rdunlap@infradead.org>:
    include/linux/compiler-clang.h: drop duplicated word in a comment
    include/linux/exportfs.h: drop duplicated word in a comment
    include/linux/async_tx.h: drop duplicated word in a comment
    include/linux/xz.h: drop duplicated word

  Christoph Hellwig <hch@lst.de>:
    kernel: add a kernel_wait helper

  Feng Tang <feng.tang@intel.com>:
    ./Makefile: add debug option to enable function aligned on 32 bytes

  Arvind Sankar <nivedita@alum.mit.edu>:
    kernel.h: remove duplicate include of asm/div64.h

  "Alexander A. Klimov" <grandmaster@al2klimov.de>:
    include/: replace HTTP links with HTTPS ones

  Matthew Wilcox <willy@infradead.org>:
    include/linux/poison.h: remove obsolete comment

Subsystem: sparse

  Luc Van Oostenryck <luc.vanoostenryck@gmail.com>:
    sparse: group the defines by functionality

Subsystem: bitmap

  Stefano Brivio <sbrivio@redhat.com>:
    Patch series "lib: Fix bitmap_cut() for overlaps, add test":
      lib/bitmap.c: fix bitmap_cut() for partial overlapping case
      lib/test_bitmap.c: add test for bitmap_cut()

Subsystem: lib

  Luc Van Oostenryck <luc.vanoostenryck@gmail.com>:
    lib/generic-radix-tree.c: remove unneeded __rcu

  Geert Uytterhoeven <geert@linux-m68k.org>:
    lib/test_bitops: do the full test during module init

  Wei Yongjun <weiyongjun1@huawei.com>:
    lib/test_lockup.c: make symbol 'test_works' static

  Tiezhu Yang <yangtiezhu@loongson.cn>:
    lib/Kconfig.debug: make TEST_LOCKUP depend on module
    lib/test_lockup.c: fix return value of test_lockup_init()

  "Alexander A. Klimov" <grandmaster@al2klimov.de>:
    lib/: replace HTTP links with HTTPS ones

  "Kars Mulder" <kerneldev@karsmulder.nl>:
    kstrto*: correct documentation references to simple_strto*()
    kstrto*: do not describe simple_strto*() as obsolete/replaced

Subsystem: lz4

  Nick Terrell <terrelln@fb.com>:
    lz4: fix kernel decompression speed

Subsystem: bitops

  Rikard Falkeborn <rikard.falkeborn@gmail.com>:
    lib/test_bits.c: add tests of GENMASK

Subsystem: checkpatch

  Joe Perches <joe@perches.com>:
    checkpatch: add test for possible misuse of IS_ENABLED() without CONFIG_
    checkpatch: add --fix option for ASSIGN_IN_IF

  Quentin Monnet <quentin@isovalent.com>:
    checkpatch: fix CONST_STRUCT when const_structs.checkpatch is missing

  Joe Perches <joe@perches.com>:
    checkpatch: add test for repeated words
    checkpatch: remove missing switch/case break test

Subsystem: autofs

  Randy Dunlap <rdunlap@infradead.org>:
    autofs: fix doubled word

Subsystem: minix

  Eric Biggers <ebiggers@google.com>:
    Patch series "fs/minix: fix syzbot bugs and set s_maxbytes":
      fs/minix: check return value of sb_getblk()
      fs/minix: don't allow getting deleted inodes
      fs/minix: reject too-large maximum file size
      fs/minix: set s_maxbytes correctly
      fs/minix: fix block limit check for V1 filesystems
      fs/minix: remove expected error message in block_to_path()

Subsystem: nilfs

  Eric Biggers <ebiggers@google.com>:
    Patch series "nilfs2 updates":
      nilfs2: only call unlock_new_inode() if I_NEW

  Joe Perches <joe@perches.com>:
    nilfs2: convert __nilfs_msg to integrate the level and format
    nilfs2: use a more common logging style

Subsystem: ufs

  Colin Ian King <colin.king@canonical.com>:
    fs/ufs: avoid potential u32 multiplication overflow

Subsystem: fat

  Yubo Feng <fengyubo3@huawei.com>:
    fatfs: switch write_lock to read_lock in fat_ioctl_get_attributes

  "Alexander A. Klimov" <grandmaster@al2klimov.de>:
    VFAT/FAT/MSDOS FILESYSTEM: replace HTTP links with HTTPS ones

  OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>:
    fat: fix fat_ra_init() for data clusters == 0

Subsystem: signals

  Helge Deller <deller@gmx.de>:
    fs/signalfd.c: fix inconsistent return codes for signalfd4

Subsystem: kmod

  Tiezhu Yang <yangtiezhu@loongson.cn>:
    Patch series "kmod/umh: a few fixes":
      selftests: kmod: use variable NAME in kmod_test_0001()
      kmod: remove redundant "be an" in the comment
      test_kmod: avoid potential double free in trigger_config_run_type()

Subsystem: coredump

  Lepton Wu <ytht.net@gmail.com>:
    coredump: add %f for executable filename

Subsystem: exec

  Kees Cook <keescook@chromium.org>:
    Patch series "Relocate execve() sanity checks", v2:
      exec: change uselib(2) IS_SREG() failure to EACCES
      exec: move S_ISREG() check earlier
      exec: move path_noexec() check earlier

Subsystem: kdump

  Vijay Balakrishna <vijayb@linux.microsoft.com>:
    kdump: append kernel build-id string to VMCOREINFO

Subsystem: rapidio

  "Gustavo A. R. Silva" <gustavoars@kernel.org>:
    drivers/rapidio/devices/rio_mport_cdev.c: use struct_size() helper
    drivers/rapidio/rio-scan.c: use struct_size() helper
    rapidio/rio_mport_cdev: use array_size() helper in copy_{from,to}_user()

Subsystem: panic

  Tiezhu Yang <yangtiezhu@loongson.cn>:
    kernel/panic.c: make oops_may_print() return bool
    lib/Kconfig.debug: fix typo in the help text of CONFIG_PANIC_TIMEOUT

  Yue Hu <huyue2@yulong.com>:
    panic: make print_oops_end_marker() static

Subsystem: kcov

  Marco Elver <elver@google.com>:
    kcov: unconditionally add -fno-stack-protector to compiler options

  Wei Yongjun <weiyongjun1@huawei.com>:
    kcov: make some symbols static

Subsystem: kgdb

  Nick Desaulniers <ndesaulniers@google.com>:
    scripts/gdb: fix python 3.8 SyntaxWarning

Subsystem: ipc

  Alexey Dobriyan <adobriyan@gmail.com>:
    ipc: uninline functions

  Liao Pingfang <liao.pingfang@zte.com.cn>:
    ipc/shm.c: remove the superfluous break

Subsystem: mm/migration

  Joonsoo Kim <iamjoonsoo.kim@lge.com>:
    Patch series "clean-up the migration target allocation functions", v5:
      mm/page_isolation: prefer the node of the source page
      mm/migrate: move migration helper from .h to .c
      mm/hugetlb: unify migration callbacks
      mm/migrate: clear __GFP_RECLAIM to make the migration callback consistent with regular THP allocations
      mm/migrate: introduce a standard migration target allocation function
      mm/mempolicy: use a standard migration target allocation callback
      mm/page_alloc: remove a wrapper for alloc_migration_target()

Subsystem: mm/gup

  Joonsoo Kim <iamjoonsoo.kim@lge.com>:
    mm/gup: restrict CMA region by using allocation scope API
    mm/hugetlb: make hugetlb migration callback CMA aware
    mm/gup: use a standard migration target allocation callback

Subsystem: mm/pagemap

  Peter Xu <peterx@redhat.com>:
    Patch series "mm: Page fault accounting cleanups", v5:
      mm: do page fault accounting in handle_mm_fault
      mm/alpha: use general page fault accounting
      mm/arc: use general page fault accounting
      mm/arm: use general page fault accounting
mm/arm64: use general page fault accounting mm/csky: use general page fault accounting mm/hexagon: use general page fault accounting mm/ia64: use general page fault accounting mm/m68k: use general page fault accounting mm/microblaze: use general page fault accounting mm/mips: use general page fault accounting mm/nds32: use general page fault accounting mm/nios2: use general page fault accounting mm/openrisc: use general page fault accounting mm/parisc: use general page fault accounting mm/powerpc: use general page fault accounting mm/riscv: use general page fault accounting mm/s390: use general page fault accounting mm/sh: use general page fault accounting mm/sparc32: use general page fault accounting mm/sparc64: use general page fault accounting mm/x86: use general page fault accounting mm/xtensa: use general page fault accounting mm: clean up the last pieces of page fault accountings mm/gup: remove task_struct pointer for all gup code Documentation/admin-guide/cgroup-v2.rst | 4 Documentation/admin-guide/sysctl/kernel.rst | 3 Documentation/admin-guide/sysctl/vm.rst | 15 + Documentation/filesystems/proc.rst | 11 - Documentation/vm/page_migration.rst | 27 +++ Makefile | 4 arch/alpha/include/asm/io.h | 8 arch/alpha/include/asm/uaccess.h | 2 arch/alpha/mm/fault.c | 10 - arch/arc/include/asm/segment.h | 3 arch/arc/kernel/process.c | 2 arch/arc/mm/fault.c | 20 -- arch/arm/include/asm/uaccess.h | 4 arch/arm/kernel/signal.c | 2 arch/arm/mm/fault.c | 27 --- arch/arm64/include/asm/uaccess.h | 2 arch/arm64/kernel/sdei.c | 2 arch/arm64/mm/fault.c | 31 --- arch/arm64/mm/numa.c | 10 - arch/csky/include/asm/segment.h | 2 arch/csky/mm/fault.c | 15 - arch/h8300/include/asm/segment.h | 2 arch/hexagon/mm/vm_fault.c | 11 - arch/ia64/include/asm/uaccess.h | 2 arch/ia64/mm/fault.c | 11 - arch/ia64/mm/numa.c | 2 arch/m68k/include/asm/segment.h | 2 arch/m68k/include/asm/tlbflush.h | 6 arch/m68k/mm/fault.c | 16 - arch/microblaze/include/asm/uaccess.h | 2 arch/microblaze/mm/fault.c | 11 - 
arch/mips/include/asm/uaccess.h | 2 arch/mips/kernel/unaligned.c | 27 +-- arch/mips/mm/fault.c | 16 - arch/nds32/include/asm/uaccess.h | 2 arch/nds32/kernel/process.c | 2 arch/nds32/mm/alignment.c | 7 arch/nds32/mm/fault.c | 21 -- arch/nios2/include/asm/uaccess.h | 2 arch/nios2/mm/fault.c | 16 - arch/openrisc/include/asm/uaccess.h | 2 arch/openrisc/mm/fault.c | 11 - arch/parisc/include/asm/uaccess.h | 2 arch/parisc/mm/fault.c | 10 - arch/powerpc/include/asm/uaccess.h | 3 arch/powerpc/mm/copro_fault.c | 7 arch/powerpc/mm/fault.c | 13 - arch/riscv/include/asm/uaccess.h | 6 arch/riscv/mm/fault.c | 18 -- arch/s390/include/asm/uaccess.h | 2 arch/s390/kvm/interrupt.c | 2 arch/s390/kvm/kvm-s390.c | 2 arch/s390/kvm/priv.c | 8 arch/s390/mm/fault.c | 18 -- arch/s390/mm/gmap.c | 4 arch/sh/include/asm/segment.h | 3 arch/sh/include/asm/sparsemem.h | 4 arch/sh/kernel/traps_32.c | 12 - arch/sh/mm/fault.c | 13 - arch/sh/mm/init.c | 9 - arch/sparc/include/asm/sparsemem.h | 1 arch/sparc/include/asm/uaccess_32.h | 2 arch/sparc/include/asm/uaccess_64.h | 2 arch/sparc/mm/fault_32.c | 15 - arch/sparc/mm/fault_64.c | 13 - arch/um/kernel/trap.c | 6 arch/x86/include/asm/uaccess.h | 2 arch/x86/mm/fault.c | 19 -- arch/x86/mm/init_64.c | 9 + arch/x86/mm/numa.c | 1 arch/xtensa/include/asm/uaccess.h | 2 arch/xtensa/mm/fault.c | 17 - drivers/firmware/arm_sdei.c | 5 drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 2 drivers/infiniband/core/umem_odp.c | 2 drivers/iommu/amd/iommu_v2.c | 2 drivers/iommu/intel/svm.c | 3 drivers/rapidio/devices/rio_mport_cdev.c | 7 drivers/rapidio/rio-scan.c | 8 drivers/vfio/vfio_iommu_type1.c | 4 fs/coredump.c | 17 + fs/exec.c | 38 ++-- fs/fat/Kconfig | 2 fs/fat/fatent.c | 3 fs/fat/file.c | 4 fs/hugetlbfs/inode.c | 6 fs/minix/inode.c | 48 ++++- fs/minix/itree_common.c | 8 fs/minix/itree_v1.c | 16 - fs/minix/itree_v2.c | 15 - fs/minix/minix.h | 1 fs/namei.c | 10 - fs/nilfs2/alloc.c | 38 ++-- fs/nilfs2/btree.c | 42 ++-- fs/nilfs2/cpfile.c | 10 - fs/nilfs2/dat.c | 14 - 
fs/nilfs2/direct.c | 14 - fs/nilfs2/gcinode.c | 2 fs/nilfs2/ifile.c | 4 fs/nilfs2/inode.c | 32 +-- fs/nilfs2/ioctl.c | 37 ++-- fs/nilfs2/mdt.c | 2 fs/nilfs2/namei.c | 6 fs/nilfs2/nilfs.h | 18 +- fs/nilfs2/page.c | 11 - fs/nilfs2/recovery.c | 32 +-- fs/nilfs2/segbuf.c | 2 fs/nilfs2/segment.c | 38 ++-- fs/nilfs2/sufile.c | 29 +-- fs/nilfs2/super.c | 73 ++++---- fs/nilfs2/sysfs.c | 29 +-- fs/nilfs2/the_nilfs.c | 85 ++++----- fs/open.c | 6 fs/proc/base.c | 11 + fs/proc/task_mmu.c | 4 fs/signalfd.c | 10 - fs/ufs/super.c | 2 include/asm-generic/uaccess.h | 4 include/clocksource/timer-ti-dm.h | 2 include/linux/async_tx.h | 2 include/linux/btree.h | 2 include/linux/compaction.h | 6 include/linux/compiler-clang.h | 2 include/linux/compiler_types.h | 44 ++--- include/linux/crash_core.h | 6 include/linux/delay.h | 2 include/linux/dma/k3-psil.h | 2 include/linux/dma/k3-udma-glue.h | 2 include/linux/dma/ti-cppi5.h | 2 include/linux/exportfs.h | 2 include/linux/frontswap.h | 2 include/linux/fs.h | 10 + include/linux/generic-radix-tree.h | 2 include/linux/highmem.h | 2 include/linux/huge_mm.h | 7 include/linux/hugetlb.h | 53 ++++-- include/linux/irqchip/irq-omap-intc.h | 2 include/linux/jhash.h | 2 include/linux/kernel.h | 12 - include/linux/leds-ti-lmu-common.h | 2 include/linux/memcontrol.h | 12 + include/linux/mempolicy.h | 18 +- include/linux/migrate.h | 42 +--- include/linux/mm.h | 20 +- include/linux/mmzone.h | 17 + include/linux/oom.h | 4 include/linux/pgtable.h | 12 - include/linux/platform_data/davinci-cpufreq.h | 2 include/linux/platform_data/davinci_asp.h | 2 include/linux/platform_data/elm.h | 2 include/linux/platform_data/gpio-davinci.h | 2 include/linux/platform_data/gpmc-omap.h | 2 include/linux/platform_data/mtd-davinci-aemif.h | 2 include/linux/platform_data/omap-twl4030.h | 2 include/linux/platform_data/uio_pruss.h | 2 include/linux/platform_data/usb-omap.h | 2 include/linux/poison.h | 4 include/linux/sched/mm.h | 8 include/linux/sched/task.h | 1 
include/linux/soc/ti/k3-ringacc.h | 2 include/linux/soc/ti/knav_qmss.h | 2 include/linux/soc/ti/ti-msgmgr.h | 2 include/linux/swap.h | 25 ++ include/linux/syscalls.h | 2 include/linux/uaccess.h | 20 ++ include/linux/vm_event_item.h | 3 include/linux/wkup_m3_ipc.h | 2 include/linux/xxhash.h | 2 include/linux/xz.h | 4 include/linux/zlib.h | 2 include/soc/arc/aux.h | 2 include/trace/events/migrate.h | 17 + include/uapi/linux/auto_dev-ioctl.h | 2 include/uapi/linux/elf.h | 2 include/uapi/linux/map_to_7segment.h | 2 include/uapi/linux/types.h | 2 include/uapi/linux/usb/ch9.h | 2 ipc/sem.c | 3 ipc/shm.c | 4 kernel/Makefile | 2 kernel/crash_core.c | 50 +++++ kernel/events/callchain.c | 5 kernel/events/core.c | 5 kernel/events/uprobes.c | 8 kernel/exit.c | 18 +- kernel/futex.c | 2 kernel/kcov.c | 6 kernel/kmod.c | 5 kernel/kthread.c | 5 kernel/panic.c | 4 kernel/stacktrace.c | 5 kernel/sysctl.c | 11 + kernel/umh.c | 29 --- lib/Kconfig.debug | 27 ++- lib/Makefile | 1 lib/bitmap.c | 4 lib/crc64.c | 2 lib/decompress_bunzip2.c | 2 lib/decompress_unlzma.c | 6 lib/kstrtox.c | 20 -- lib/lz4/lz4_compress.c | 4 lib/lz4/lz4_decompress.c | 18 +- lib/lz4/lz4defs.h | 10 + lib/lz4/lz4hc_compress.c | 2 lib/math/rational.c | 2 lib/rbtree.c | 2 lib/test_bitmap.c | 58 ++++++ lib/test_bitops.c | 18 +- lib/test_bits.c | 75 ++++++++ lib/test_kmod.c | 2 lib/test_lockup.c | 6 lib/ts_bm.c | 2 lib/xxhash.c | 2 lib/xz/xz_crc32.c | 2 lib/xz/xz_dec_bcj.c | 2 lib/xz/xz_dec_lzma2.c | 2 lib/xz/xz_lzma2.h | 2 lib/xz/xz_stream.h | 2 mm/cma.c | 40 +--- mm/cma.h | 4 mm/compaction.c | 207 +++++++++++++++++++++-- mm/filemap.c | 2 mm/gup.c | 195 ++++++---------------- mm/hmm.c | 5 mm/huge_memory.c | 23 -- mm/hugetlb.c | 93 ++++------ mm/internal.h | 9 - mm/khugepaged.c | 2 mm/ksm.c | 3 mm/maccess.c | 22 +- mm/memcontrol.c | 42 +++- mm/memory-failure.c | 7 mm/memory.c | 107 +++++++++--- mm/memory_hotplug.c | 30 ++- mm/mempolicy.c | 49 +---- mm/migrate.c | 151 ++++++++++++++--- mm/mmu_notifier.c | 9 - mm/nommu.c 
| 4 mm/oom_kill.c | 24 +- mm/page_alloc.c | 14 + mm/page_isolation.c | 21 -- mm/percpu-internal.h | 55 ++++++ mm/percpu-km.c | 5 mm/percpu-stats.c | 36 ++-- mm/percpu-vm.c | 5 mm/percpu.c | 208 +++++++++++++++++++++--- mm/process_vm_access.c | 2 mm/rmap.c | 2 mm/shmem.c | 5 mm/slab_common.c | 2 mm/swap.c | 13 - mm/swap_state.c | 80 +++++++-- mm/swapfile.c | 4 mm/usercopy.c | 2 mm/userfaultfd.c | 2 mm/vmscan.c | 36 ++-- mm/vmstat.c | 32 +++ mm/workingset.c | 23 +- mm/zpool.c | 8 mm/zsmalloc.c | 2 scripts/checkpatch.pl | 116 +++++++++---- scripts/gdb/linux/rbtree.py | 4 security/tomoyo/domain.c | 2 tools/testing/selftests/cgroup/test_kmem.c | 70 +++++++- tools/testing/selftests/kmod/kmod.sh | 4 tools/testing/selftests/vm/hmm-tests.c | 35 ++++ virt/kvm/async_pf.c | 2 virt/kvm/kvm_main.c | 2 268 files changed, 2481 insertions(+), 1551 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2020-08-07 6:16 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2020-08-07 6:16 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: mm-commits, linux-mm

- A few MM hotfixes
- kthread, tools, scripts, ntfs and ocfs2
- Some of MM

163 patches, based on d6efb3ac3e6c19ab722b28bdb9252bae0b9676b6.

Subsystems affected by this patch series: mm/pagemap mm/hotfixes
mm/pagealloc kthread tools scripts ntfs ocfs2 mm/slab-generic mm/slab
mm/slub mm/kcsan mm/debug mm/pagecache mm/gup mm/swap mm/shmem mm/memcg
mm/pagemap mm/mremap mm/mincore mm/sparsemem mm/vmalloc mm/kasan
mm/pagealloc mm/hugetlb mm/vmscan

Subsystem: mm/pagemap

Yang Shi <yang.shi@linux.alibaba.com>:
  mm/memory.c: avoid access flag update TLB flush for retried page fault

Subsystem: mm/hotfixes

Ralph Campbell <rcampbell@nvidia.com>:
  mm/migrate: fix migrate_pgmap_owner w/o CONFIG_MMU_NOTIFIER

Subsystem: mm/pagealloc

David Hildenbrand <david@redhat.com>:
  mm/shuffle: don't move pages between zones and don't read garbage memmaps

Subsystem: kthread

Peter Zijlstra <peterz@infradead.org>:
  mm: fix kthread_use_mm() vs TLB invalidate

Ilias Stamatis <stamatis.iliass@gmail.com>:
  kthread: remove incorrect comment in kthread_create_on_cpu()

Subsystem: tools

"Alexander A. Klimov" <grandmaster@al2klimov.de>:
  tools/: replace HTTP links with HTTPS ones

Gaurav Singh <gaurav1086@gmail.com>:
  tools/testing/selftests/cgroup/cgroup_util.c: cg_read_strcmp: fix null pointer dereference

Subsystem: scripts

Jialu Xu <xujialu@vimux.org>:
  scripts/tags.sh: collect compiled source precisely

Nikolay Borisov <nborisov@suse.com>:
  scripts/bloat-o-meter: Support comparing library archives

Konstantin Khlebnikov <khlebnikov@yandex-team.ru>:
  scripts/decode_stacktrace.sh: skip missing symbols
  scripts/decode_stacktrace.sh: guess basepath if not specified
  scripts/decode_stacktrace.sh: guess path to modules
  scripts/decode_stacktrace.sh: guess path to vmlinux by release name

Joe Perches <joe@perches.com>:
  const_structs.checkpatch: add regulator_ops

Colin Ian King <colin.king@canonical.com>:
  scripts/spelling.txt: add more spellings to spelling.txt

Subsystem: ntfs

Luca Stefani <luca.stefani.ge1@gmail.com>:
  ntfs: fix ntfs_test_inode and ntfs_init_locked_inode function type

Subsystem: ocfs2

Gang He <ghe@suse.com>:
  ocfs2: fix remounting needed after setfacl command

Randy Dunlap <rdunlap@infradead.org>:
  ocfs2: suballoc.h: delete a duplicated word

Junxiao Bi <junxiao.bi@oracle.com>:
  ocfs2: change slot number type s16 to u16

"Alexander A. Klimov" <grandmaster@al2klimov.de>:
  ocfs2: replace HTTP links with HTTPS ones

Pavel Machek <pavel@ucw.cz>:
  ocfs2: fix unbalanced locking

Subsystem: mm/slab-generic

Waiman Long <longman@redhat.com>:
  mm, treewide: rename kzfree() to kfree_sensitive()

William Kucharski <william.kucharski@oracle.com>:
  mm: ksize() should silently accept a NULL pointer

Subsystem: mm/slab

Kees Cook <keescook@chromium.org>:
Patch series "mm: Expand CONFIG_SLAB_FREELIST_HARDENED to include SLAB":
  mm/slab: expand CONFIG_SLAB_FREELIST_HARDENED to include SLAB
  mm/slab: add naive detection of double free

Long Li <lonuxli.64@gmail.com>:
  mm, slab: check GFP_SLAB_BUG_MASK before alloc_pages in kmalloc_order

Xiao Yang <yangx.jy@cn.fujitsu.com>:
  mm/slab.c: update outdated kmem_list3 in a comment

Subsystem: mm/slub

Vlastimil Babka <vbabka@suse.cz>:
Patch series "slub_debug fixes and improvements":
  mm, slub: extend slub_debug syntax for multiple blocks
  mm, slub: make some slub_debug related attributes read-only
  mm, slub: remove runtime allocation order changes
  mm, slub: make remaining slub_debug related attributes read-only
  mm, slub: make reclaim_account attribute read-only
  mm, slub: introduce static key for slub_debug()
  mm, slub: introduce kmem_cache_debug_flags()
  mm, slub: extend checks guarded by slub_debug static key
  mm, slab/slub: move and improve cache_from_obj()
  mm, slab/slub: improve error reporting and overhead of cache_from_obj()

Sebastian Andrzej Siewior <bigeasy@linutronix.de>:
  mm/slub.c: drop lockdep_assert_held() from put_map()

Subsystem: mm/kcsan

Marco Elver <elver@google.com>:
  mm, kcsan: instrument SLAB/SLUB free with "ASSERT_EXCLUSIVE_ACCESS"

Subsystem: mm/debug

Anshuman Khandual <anshuman.khandual@arm.com>:
Patch series "mm/debug_vm_pgtable: Add some more tests", v5:
  mm/debug_vm_pgtable: add tests validating arch helpers for core MM features
  mm/debug_vm_pgtable: add tests validating advanced arch page table helpers
  mm/debug_vm_pgtable: add debug prints for individual tests
Documentation/mm: add descriptions for arch page table helpers "Matthew Wilcox (Oracle)" <willy@infradead.org>: Patch series "Improvements for dump_page()", v2: mm/debug: handle page->mapping better in dump_page mm/debug: dump compound page information on a second line mm/debug: print head flags in dump_page mm/debug: switch dump_page to get_kernel_nofault mm/debug: print the inode number in dump_page mm/debug: print hashed address of struct page John Hubbard <jhubbard@nvidia.com>: mm, dump_page: do not crash with bad compound_mapcount() Subsystem: mm/pagecache Yang Shi <yang.shi@linux.alibaba.com>: mm: filemap: clear idle flag for writes mm: filemap: add missing FGP_ flags in kerneldoc comment for pagecache_get_page Subsystem: mm/gup Tang Yizhou <tangyizhou@huawei.com>: mm/gup.c: fix the comment of return value for populate_vma_page_range() Subsystem: mm/swap Zhen Lei <thunder.leizhen@huawei.com>: Patch series "clean up some functions in mm/swap_slots.c": mm/swap_slots.c: simplify alloc_swap_slot_cache() mm/swap_slots.c: simplify enable_swap_slots_cache() mm/swap_slots.c: remove redundant check for swap_slot_cache_initialized Krzysztof Kozlowski <krzk@kernel.org>: mm: swap: fix kerneldoc of swap_vma_readahead() Xianting Tian <xianting_tian@126.com>: mm/page_io.c: use blk_io_schedule() for avoiding task hung in sync io Subsystem: mm/shmem Chris Down <chris@chrisdown.name>: Patch series "tmpfs: inode: Reduce risk of inum overflow", v7: tmpfs: per-superblock i_ino support tmpfs: support 64-bit inums per-sb Subsystem: mm/memcg Roman Gushchin <guro@fb.com>: mm: kmem: make memcg_kmem_enabled() irreversible Patch series "The new cgroup slab memory controller", v7: mm: memcg: factor out memcg- and lruvec-level changes out of __mod_lruvec_state() mm: memcg: prepare for byte-sized vmstat items mm: memcg: convert vmstat slab counters to bytes mm: slub: implement SLUB version of obj_to_index() Johannes Weiner <hannes@cmpxchg.org>: mm: memcontrol: decouple reference counting 
from page accounting Roman Gushchin <guro@fb.com>: mm: memcg/slab: obj_cgroup API mm: memcg/slab: allocate obj_cgroups for non-root slab pages mm: memcg/slab: save obj_cgroup for non-root slab objects mm: memcg/slab: charge individual slab objects instead of pages mm: memcg/slab: deprecate memory.kmem.slabinfo mm: memcg/slab: move memcg_kmem_bypass() to memcontrol.h mm: memcg/slab: use a single set of kmem_caches for all accounted allocations mm: memcg/slab: simplify memcg cache creation mm: memcg/slab: remove memcg_kmem_get_cache() mm: memcg/slab: deprecate slab_root_caches mm: memcg/slab: remove redundant check in memcg_accumulate_slabinfo() mm: memcg/slab: use a single set of kmem_caches for all allocations kselftests: cgroup: add kernel memory accounting tests tools/cgroup: add memcg_slabinfo.py tool Shakeel Butt <shakeelb@google.com>: mm: memcontrol: account kernel stack per node Roman Gushchin <guro@fb.com>: mm: memcg/slab: remove unused argument by charge_slab_page() mm: slab: rename (un)charge_slab_page() to (un)account_slab_page() mm: kmem: switch to static_branch_likely() in memcg_kmem_enabled() mm: memcontrol: avoid workload stalls when lowering memory.high Chris Down <chris@chrisdown.name>: Patch series "mm, memcg: reclaim harder before high throttling", v2: mm, memcg: reclaim more aggressively before high allocator throttling mm, memcg: unify reclaim retry limits with page allocator Yafang Shao <laoar.shao@gmail.com>: Patch series "mm, memcg: memory.{low,min} reclaim fix & cleanup", v4: mm, memcg: avoid stale protection values when cgroup is above protection Chris Down <chris@chrisdown.name>: mm, memcg: decouple e{low,min} state mutations from protection checks Yafang Shao <laoar.shao@gmail.com>: memcg, oom: check memcg margin for parallel oom Johannes Weiner <hannes@cmpxchg.org>: mm: memcontrol: restore proper dirty throttling when memory.high changes mm: memcontrol: don't count limit-setting reclaim as memory pressure Michal Koutný 
<mkoutny@suse.com>: mm/page_counter.c: fix protection usage propagation Subsystem: mm/pagemap Ralph Campbell <rcampbell@nvidia.com>: mm: remove redundant check non_swap_entry() Alex Zhang <zhangalex@google.com>: mm/memory.c: make remap_pfn_range() reject unaligned addr Mike Rapoport <rppt@linux.ibm.com>: Patch series "mm: cleanup usage of <asm/pgalloc.h>": mm: remove unneeded includes of <asm/pgalloc.h> opeinrisc: switch to generic version of pte allocation xtensa: switch to generic version of pte allocation asm-generic: pgalloc: provide generic pmd_alloc_one() and pmd_free_one() asm-generic: pgalloc: provide generic pud_alloc_one() and pud_free_one() asm-generic: pgalloc: provide generic pgd_free() mm: move lib/ioremap.c to mm/ Joerg Roedel <jroedel@suse.de>: mm: move p?d_alloc_track to separate header file Zhen Lei <thunder.leizhen@huawei.com>: mm/mmap: optimize a branch judgment in ksys_mmap_pgoff() Feng Tang <feng.tang@intel.com>: Patch series "make vm_committed_as_batch aware of vm overcommit policy", v6: proc/meminfo: avoid open coded reading of vm_committed_as mm/util.c: make vm_memory_committed() more accurate percpu_counter: add percpu_counter_sync() mm: adjust vm_committed_as_batch according to vm overcommit policy Anshuman Khandual <anshuman.khandual@arm.com>: Patch series "arm64: Enable vmemmap mapping from device memory", v4: mm/sparsemem: enable vmem_altmap support in vmemmap_populate_basepages() mm/sparsemem: enable vmem_altmap support in vmemmap_alloc_block_buf() arm64/mm: enable vmem_altmap support for vmemmap mappings Miaohe Lin <linmiaohe@huawei.com>: mm: mmap: merge vma after call_mmap() if possible Peter Collingbourne <pcc@google.com>: mm: remove unnecessary wrapper function do_mmap_pgoff() Subsystem: mm/mremap Wei Yang <richard.weiyang@linux.alibaba.com>: Patch series "mm/mremap: cleanup move_page_tables() a little", v5: mm/mremap: it is sure to have enough space when extent meets requirement mm/mremap: calculate extent in one place mm/mremap: 
start addresses are properly aligned Subsystem: mm/mincore Ricardo Cañuelo <ricardo.canuelo@collabora.com>: selftests: add mincore() tests Subsystem: mm/sparsemem Wei Yang <richard.weiyang@linux.alibaba.com>: mm/sparse: never partially remove memmap for early section mm/sparse: only sub-section aligned range would be populated Mike Rapoport <rppt@linux.ibm.com>: mm/sparse: cleanup the code surrounding memory_present() Subsystem: mm/vmalloc "Matthew Wilcox (Oracle)" <willy@infradead.org>: vmalloc: convert to XArray "Uladzislau Rezki (Sony)" <urezki@gmail.com>: mm/vmalloc: simplify merge_or_add_vmap_area() mm/vmalloc: simplify augment_tree_propagate_check() mm/vmalloc: switch to "propagate()" callback mm/vmalloc: update the header about KVA rework Mike Rapoport <rppt@linux.ibm.com>: mm: vmalloc: remove redundant assignment in unmap_kernel_range_noflush() "Uladzislau Rezki (Sony)" <urezki@gmail.com>: mm/vmalloc.c: remove BUG() from the find_va_links() Subsystem: mm/kasan Marco Elver <elver@google.com>: kasan: improve and simplify Kconfig.kasan kasan: update required compiler versions in documentation Walter Wu <walter-zh.wu@mediatek.com>: Patch series "kasan: memorize and print call_rcu stack", v8: rcu: kasan: record and print call_rcu() call stack kasan: record and print the free track kasan: add tests for call_rcu stack recording kasan: update documentation for generic kasan Vincenzo Frascino <vincenzo.frascino@arm.com>: kasan: remove kasan_unpoison_stack_above_sp_to() Walter Wu <walter-zh.wu@mediatek.com>: lib/test_kasan.c: fix KASAN unit tests for tag-based KASAN Andrey Konovalov <andreyknvl@google.com>: Patch series "kasan: support stack instrumentation for tag-based mode", v2: kasan: don't tag stacks allocated with pagealloc efi: provide empty efi_enter_virtual_mode implementation kasan, arm64: don't instrument functions that enable kasan kasan: allow enabling stack tagging for tag-based mode kasan: adjust kasan_stack_oob for tag-based mode Subsystem: 
mm/pagealloc Vlastimil Babka <vbabka@suse.cz>: mm, page_alloc: use unlikely() in task_capc() Jaewon Kim <jaewon31.kim@samsung.com>: page_alloc: consider highatomic reserve in watermark fast Charan Teja Reddy <charante@codeaurora.org>: mm, page_alloc: skip ->waternark_boost for atomic order-0 allocations David Hildenbrand <david@redhat.com>: mm: remove vm_total_pages mm/page_alloc: remove nr_free_pagecache_pages() mm/memory_hotplug: document why shuffle_zone() is relevant mm/shuffle: remove dynamic reconfiguration Wei Yang <richard.weiyang@linux.alibaba.com>: mm/page_alloc.c: replace the definition of NR_MIGRATETYPE_BITS with PB_migratetype_bits mm/page_alloc.c: extract the common part in pfn_to_bitidx() mm/page_alloc.c: simplify pageblock bitmap access mm/page_alloc.c: remove unnecessary end_bitidx for [set|get]_pfnblock_flags_mask() Qian Cai <cai@lca.pw>: mm/page_alloc: silence a KASAN false positive Wei Yang <richard.weiyang@linux.alibaba.com>: mm/page_alloc: fallbacks at most has 3 elements Muchun Song <songmuchun@bytedance.com>: mm/page_alloc.c: skip setting nodemask when we are in interrupt Joonsoo Kim <iamjoonsoo.kim@lge.com>: mm/page_alloc: fix memalloc_nocma_{save/restore} APIs Subsystem: mm/hugetlb "Alexander A. 
Klimov" <grandmaster@al2klimov.de>: mm: thp: replace HTTP links with HTTPS ones Peter Xu <peterx@redhat.com>: mm/hugetlb: fix calculation of adjust_range_if_pmd_sharing_possible Hugh Dickins <hughd@google.com>: khugepaged: collapse_pte_mapped_thp() flush the right range khugepaged: collapse_pte_mapped_thp() protect the pmd lock khugepaged: retract_page_tables() remember to test exit khugepaged: khugepaged_test_exit() check mmget_still_valid() Subsystem: mm/vmscan dylan-meiners <spacct.spacct@gmail.com>: mm/vmscan.c: fix typo Shakeel Butt <shakeelb@google.com>: mm: vmscan: consistent update to pgrefill Documentation/admin-guide/kernel-parameters.txt | 2 Documentation/dev-tools/kasan.rst | 10 Documentation/filesystems/dlmfs.rst | 2 Documentation/filesystems/ocfs2.rst | 2 Documentation/filesystems/tmpfs.rst | 18 Documentation/vm/arch_pgtable_helpers.rst | 258 +++++ Documentation/vm/memory-model.rst | 9 Documentation/vm/slub.rst | 51 - arch/alpha/include/asm/pgalloc.h | 21 arch/alpha/include/asm/tlbflush.h | 1 arch/alpha/kernel/core_irongate.c | 1 arch/alpha/kernel/core_marvel.c | 1 arch/alpha/kernel/core_titan.c | 1 arch/alpha/kernel/machvec_impl.h | 2 arch/alpha/kernel/smp.c | 1 arch/alpha/mm/numa.c | 1 arch/arc/mm/fault.c | 1 arch/arc/mm/init.c | 1 arch/arm/include/asm/pgalloc.h | 12 arch/arm/include/asm/tlb.h | 1 arch/arm/kernel/machine_kexec.c | 1 arch/arm/kernel/smp.c | 1 arch/arm/kernel/suspend.c | 1 arch/arm/mach-omap2/omap-mpuss-lowpower.c | 1 arch/arm/mm/hugetlbpage.c | 1 arch/arm/mm/init.c | 9 arch/arm/mm/mmu.c | 1 arch/arm64/include/asm/pgalloc.h | 39 arch/arm64/kernel/setup.c | 2 arch/arm64/kernel/smp.c | 1 arch/arm64/mm/hugetlbpage.c | 1 arch/arm64/mm/init.c | 6 arch/arm64/mm/ioremap.c | 1 arch/arm64/mm/mmu.c | 63 - arch/csky/include/asm/pgalloc.h | 7 arch/csky/kernel/smp.c | 1 arch/hexagon/include/asm/pgalloc.h | 7 arch/ia64/include/asm/pgalloc.h | 24 arch/ia64/include/asm/tlb.h | 1 arch/ia64/kernel/process.c | 1 arch/ia64/kernel/smp.c | 1 
arch/ia64/kernel/smpboot.c | 1 arch/ia64/mm/contig.c | 1 arch/ia64/mm/discontig.c | 4 arch/ia64/mm/hugetlbpage.c | 1 arch/ia64/mm/tlb.c | 1 arch/m68k/include/asm/mmu_context.h | 2 arch/m68k/include/asm/sun3_pgalloc.h | 7 arch/m68k/kernel/dma.c | 2 arch/m68k/kernel/traps.c | 3 arch/m68k/mm/cache.c | 2 arch/m68k/mm/fault.c | 1 arch/m68k/mm/kmap.c | 2 arch/m68k/mm/mcfmmu.c | 1 arch/m68k/mm/memory.c | 1 arch/m68k/sun3x/dvma.c | 2 arch/microblaze/include/asm/pgalloc.h | 6 arch/microblaze/include/asm/tlbflush.h | 1 arch/microblaze/kernel/process.c | 1 arch/microblaze/kernel/signal.c | 1 arch/microblaze/mm/init.c | 3 arch/mips/include/asm/pgalloc.h | 19 arch/mips/kernel/setup.c | 8 arch/mips/loongson64/numa.c | 1 arch/mips/sgi-ip27/ip27-memory.c | 2 arch/mips/sgi-ip32/ip32-memory.c | 1 arch/nds32/mm/mm-nds32.c | 2 arch/nios2/include/asm/pgalloc.h | 7 arch/openrisc/include/asm/pgalloc.h | 33 arch/openrisc/include/asm/tlbflush.h | 1 arch/openrisc/kernel/or32_ksyms.c | 1 arch/parisc/include/asm/mmu_context.h | 1 arch/parisc/include/asm/pgalloc.h | 12 arch/parisc/kernel/cache.c | 1 arch/parisc/kernel/pci-dma.c | 1 arch/parisc/kernel/process.c | 1 arch/parisc/kernel/signal.c | 1 arch/parisc/kernel/smp.c | 1 arch/parisc/mm/hugetlbpage.c | 1 arch/parisc/mm/init.c | 5 arch/parisc/mm/ioremap.c | 2 arch/powerpc/include/asm/tlb.h | 1 arch/powerpc/mm/book3s64/hash_hugetlbpage.c | 1 arch/powerpc/mm/book3s64/hash_pgtable.c | 1 arch/powerpc/mm/book3s64/hash_tlb.c | 1 arch/powerpc/mm/book3s64/radix_hugetlbpage.c | 1 arch/powerpc/mm/init_32.c | 1 arch/powerpc/mm/init_64.c | 4 arch/powerpc/mm/kasan/8xx.c | 1 arch/powerpc/mm/kasan/book3s_32.c | 1 arch/powerpc/mm/mem.c | 3 arch/powerpc/mm/nohash/40x.c | 1 arch/powerpc/mm/nohash/8xx.c | 1 arch/powerpc/mm/nohash/fsl_booke.c | 1 arch/powerpc/mm/nohash/kaslr_booke.c | 1 arch/powerpc/mm/nohash/tlb.c | 1 arch/powerpc/mm/numa.c | 1 arch/powerpc/mm/pgtable.c | 1 arch/powerpc/mm/pgtable_64.c | 1 arch/powerpc/mm/ptdump/hashpagetable.c | 2 
arch/powerpc/mm/ptdump/ptdump.c | 1 arch/powerpc/platforms/pseries/cmm.c | 1 arch/riscv/include/asm/pgalloc.h | 18 arch/riscv/mm/fault.c | 1 arch/riscv/mm/init.c | 3 arch/s390/crypto/prng.c | 4 arch/s390/include/asm/tlb.h | 1 arch/s390/include/asm/tlbflush.h | 1 arch/s390/kernel/machine_kexec.c | 1 arch/s390/kernel/ptrace.c | 1 arch/s390/kvm/diag.c | 1 arch/s390/kvm/priv.c | 1 arch/s390/kvm/pv.c | 1 arch/s390/mm/cmm.c | 1 arch/s390/mm/init.c | 1 arch/s390/mm/mmap.c | 1 arch/s390/mm/pgtable.c | 1 arch/sh/include/asm/pgalloc.h | 4 arch/sh/kernel/idle.c | 1 arch/sh/kernel/machine_kexec.c | 1 arch/sh/mm/cache-sh3.c | 1 arch/sh/mm/cache-sh7705.c | 1 arch/sh/mm/hugetlbpage.c | 1 arch/sh/mm/init.c | 7 arch/sh/mm/ioremap_fixed.c | 1 arch/sh/mm/numa.c | 3 arch/sh/mm/tlb-sh3.c | 1 arch/sparc/include/asm/ide.h | 1 arch/sparc/include/asm/tlb_64.h | 1 arch/sparc/kernel/leon_smp.c | 1 arch/sparc/kernel/process_32.c | 1 arch/sparc/kernel/signal_32.c | 1 arch/sparc/kernel/smp_32.c | 1 arch/sparc/kernel/smp_64.c | 1 arch/sparc/kernel/sun4m_irq.c | 1 arch/sparc/mm/highmem.c | 1 arch/sparc/mm/init_64.c | 1 arch/sparc/mm/io-unit.c | 1 arch/sparc/mm/iommu.c | 1 arch/sparc/mm/tlb.c | 1 arch/um/include/asm/pgalloc.h | 9 arch/um/include/asm/pgtable-3level.h | 3 arch/um/kernel/mem.c | 17 arch/x86/ia32/ia32_aout.c | 1 arch/x86/include/asm/mmu_context.h | 1 arch/x86/include/asm/pgalloc.h | 42 arch/x86/kernel/alternative.c | 1 arch/x86/kernel/apic/apic.c | 1 arch/x86/kernel/mpparse.c | 1 arch/x86/kernel/traps.c | 1 arch/x86/mm/fault.c | 1 arch/x86/mm/hugetlbpage.c | 1 arch/x86/mm/init_32.c | 2 arch/x86/mm/init_64.c | 12 arch/x86/mm/kaslr.c | 1 arch/x86/mm/pgtable_32.c | 1 arch/x86/mm/pti.c | 1 arch/x86/platform/uv/bios_uv.c | 1 arch/x86/power/hibernate.c | 2 arch/xtensa/include/asm/pgalloc.h | 46 arch/xtensa/kernel/xtensa_ksyms.c | 1 arch/xtensa/mm/cache.c | 1 arch/xtensa/mm/fault.c | 1 crypto/adiantum.c | 2 crypto/ahash.c | 4 crypto/api.c | 2 crypto/asymmetric_keys/verify_pefile.c | 4 
crypto/deflate.c | 2 crypto/drbg.c | 10 crypto/ecc.c | 8 crypto/ecdh.c | 2 crypto/gcm.c | 2 crypto/gf128mul.c | 4 crypto/jitterentropy-kcapi.c | 2 crypto/rng.c | 2 crypto/rsa-pkcs1pad.c | 6 crypto/seqiv.c | 2 crypto/shash.c | 2 crypto/skcipher.c | 2 crypto/testmgr.c | 6 crypto/zstd.c | 2 drivers/base/node.c | 10 drivers/block/xen-blkback/common.h | 1 drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c | 2 drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c | 2 drivers/crypto/amlogic/amlogic-gxl-cipher.c | 4 drivers/crypto/atmel-ecc.c | 2 drivers/crypto/caam/caampkc.c | 28 drivers/crypto/cavium/cpt/cptvf_main.c | 6 drivers/crypto/cavium/cpt/cptvf_reqmanager.c | 12 drivers/crypto/cavium/nitrox/nitrox_lib.c | 4 drivers/crypto/cavium/zip/zip_crypto.c | 6 drivers/crypto/ccp/ccp-crypto-rsa.c | 6 drivers/crypto/ccree/cc_aead.c | 4 drivers/crypto/ccree/cc_buffer_mgr.c | 4 drivers/crypto/ccree/cc_cipher.c | 6 drivers/crypto/ccree/cc_hash.c | 8 drivers/crypto/ccree/cc_request_mgr.c | 2 drivers/crypto/marvell/cesa/hash.c | 2 drivers/crypto/marvell/octeontx/otx_cptvf_main.c | 6 drivers/crypto/marvell/octeontx/otx_cptvf_reqmgr.h | 2 drivers/crypto/nx/nx.c | 4 drivers/crypto/virtio/virtio_crypto_algs.c | 12 drivers/crypto/virtio/virtio_crypto_core.c | 2 drivers/iommu/ipmmu-vmsa.c | 1 drivers/md/dm-crypt.c | 32 drivers/md/dm-integrity.c | 6 drivers/misc/ibmvmc.c | 6 drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c | 2 drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c | 6 drivers/net/ppp/ppp_mppe.c | 6 drivers/net/wireguard/noise.c | 4 drivers/net/wireguard/peer.c | 2 drivers/net/wireless/intel/iwlwifi/pcie/rx.c | 2 drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c | 6 drivers/net/wireless/intel/iwlwifi/pcie/tx.c | 6 drivers/net/wireless/intersil/orinoco/wext.c | 4 drivers/s390/crypto/ap_bus.h | 4 drivers/staging/ks7010/ks_hostif.c | 2 drivers/staging/rtl8723bs/core/rtw_security.c | 2 drivers/staging/wlan-ng/p80211netdev.c | 2 drivers/target/iscsi/iscsi_target_auth.c | 2 
drivers/xen/balloon.c | 1 drivers/xen/privcmd.c | 1 fs/Kconfig | 21 fs/aio.c | 6 fs/binfmt_elf_fdpic.c | 1 fs/cifs/cifsencrypt.c | 2 fs/cifs/connect.c | 10 fs/cifs/dfs_cache.c | 2 fs/cifs/misc.c | 8 fs/crypto/inline_crypt.c | 5 fs/crypto/keyring.c | 6 fs/crypto/keysetup_v1.c | 4 fs/ecryptfs/keystore.c | 4 fs/ecryptfs/messaging.c | 2 fs/hugetlbfs/inode.c | 2 fs/ntfs/dir.c | 2 fs/ntfs/inode.c | 27 fs/ntfs/inode.h | 4 fs/ntfs/mft.c | 4 fs/ocfs2/Kconfig | 6 fs/ocfs2/acl.c | 2 fs/ocfs2/blockcheck.c | 2 fs/ocfs2/dlmglue.c | 8 fs/ocfs2/ocfs2.h | 4 fs/ocfs2/suballoc.c | 4 fs/ocfs2/suballoc.h | 2 fs/ocfs2/super.c | 4 fs/proc/meminfo.c | 10 include/asm-generic/pgalloc.h | 80 + include/asm-generic/tlb.h | 1 include/crypto/aead.h | 2 include/crypto/akcipher.h | 2 include/crypto/gf128mul.h | 2 include/crypto/hash.h | 2 include/crypto/internal/acompress.h | 2 include/crypto/kpp.h | 2 include/crypto/skcipher.h | 2 include/linux/efi.h | 4 include/linux/fs.h | 17 include/linux/huge_mm.h | 2 include/linux/kasan.h | 4 include/linux/memcontrol.h | 209 +++- include/linux/mm.h | 86 - include/linux/mm_types.h | 5 include/linux/mman.h | 4 include/linux/mmu_notifier.h | 13 include/linux/mmzone.h | 54 - include/linux/pageblock-flags.h | 30 include/linux/percpu_counter.h | 4 include/linux/sched/mm.h | 8 include/linux/shmem_fs.h | 3 include/linux/slab.h | 11 include/linux/slab_def.h | 9 include/linux/slub_def.h | 31 include/linux/swap.h | 2 include/linux/vmstat.h | 14 init/Kconfig | 9 init/main.c | 2 ipc/shm.c | 2 kernel/fork.c | 54 - kernel/kthread.c | 8 kernel/power/snapshot.c | 2 kernel/rcu/tree.c | 2 kernel/scs.c | 2 kernel/sysctl.c | 2 lib/Kconfig.kasan | 39 lib/Makefile | 1 lib/ioremap.c | 287 ----- lib/mpi/mpiutil.c | 6 lib/percpu_counter.c | 19 lib/test_kasan.c | 87 + mm/Kconfig | 6 mm/Makefile | 2 mm/debug.c | 103 +- mm/debug_vm_pgtable.c | 666 +++++++++++++ mm/filemap.c | 9 mm/gup.c | 3 mm/huge_memory.c | 14 mm/hugetlb.c | 25 mm/ioremap.c | 289 +++++ mm/kasan/common.c | 41 
mm/kasan/generic.c | 43 mm/kasan/generic_report.c | 1 mm/kasan/kasan.h | 25 mm/kasan/quarantine.c | 1 mm/kasan/report.c | 54 - mm/kasan/tags.c | 37 mm/khugepaged.c | 75 - mm/memcontrol.c | 832 ++++++++++------- mm/memory.c | 15 mm/memory_hotplug.c | 11 mm/migrate.c | 6 mm/mm_init.c | 20 mm/mmap.c | 45 mm/mremap.c | 19 mm/nommu.c | 6 mm/oom_kill.c | 2 mm/page-writeback.c | 6 mm/page_alloc.c | 226 ++-- mm/page_counter.c | 6 mm/page_io.c | 2 mm/pgalloc-track.h | 51 + mm/shmem.c | 133 ++ mm/shuffle.c | 46 mm/shuffle.h | 17 mm/slab.c | 129 +- mm/slab.h | 755 ++++++--------- mm/slab_common.c | 829 ++-------------- mm/slob.c | 12 mm/slub.c | 680 ++++--------- mm/sparse-vmemmap.c | 62 - mm/sparse.c | 31 mm/swap_slots.c | 45 mm/swap_state.c | 2 mm/util.c | 52 + mm/vmalloc.c | 176 +-- mm/vmscan.c | 39 mm/vmstat.c | 38 mm/workingset.c | 6 net/atm/mpoa_caches.c | 4 net/bluetooth/ecdh_helper.c | 6 net/bluetooth/smp.c | 24 net/core/sock.c | 2 net/ipv4/tcp_fastopen.c | 2 net/mac80211/aead_api.c | 4 net/mac80211/aes_gmac.c | 2 net/mac80211/key.c | 2 net/mac802154/llsec.c | 20 net/sctp/auth.c | 2 net/sunrpc/auth_gss/gss_krb5_crypto.c | 4 net/sunrpc/auth_gss/gss_krb5_keys.c | 6 net/sunrpc/auth_gss/gss_krb5_mech.c | 2 net/tipc/crypto.c | 10 net/wireless/core.c | 2 net/wireless/ibss.c | 4 net/wireless/lib80211_crypt_tkip.c | 2 net/wireless/lib80211_crypt_wep.c | 2 net/wireless/nl80211.c | 24 net/wireless/sme.c | 6 net/wireless/util.c | 2 net/wireless/wext-sme.c | 2 scripts/Makefile.kasan | 3 scripts/bloat-o-meter | 2 scripts/coccinelle/free/devm_free.cocci | 4 scripts/coccinelle/free/ifnullfree.cocci | 4 scripts/coccinelle/free/kfree.cocci | 6 scripts/coccinelle/free/kfreeaddr.cocci | 2 scripts/const_structs.checkpatch | 1 scripts/decode_stacktrace.sh | 85 + scripts/spelling.txt | 19 scripts/tags.sh | 18 security/apparmor/domain.c | 4 security/apparmor/include/file.h | 2 security/apparmor/policy.c | 24 security/apparmor/policy_ns.c | 6 security/apparmor/policy_unpack.c | 14 
security/keys/big_key.c | 6 security/keys/dh.c | 14 security/keys/encrypted-keys/encrypted.c | 14 security/keys/trusted-keys/trusted_tpm1.c | 34 security/keys/user_defined.c | 6 tools/cgroup/memcg_slabinfo.py | 226 ++++ tools/include/linux/jhash.h | 2 tools/lib/rbtree.c | 2 tools/lib/traceevent/event-parse.h | 2 tools/testing/ktest/examples/README | 2 tools/testing/ktest/examples/crosstests.conf | 2 tools/testing/selftests/Makefile | 1 tools/testing/selftests/cgroup/.gitignore | 1 tools/testing/selftests/cgroup/Makefile | 2 tools/testing/selftests/cgroup/cgroup_util.c | 2 tools/testing/selftests/cgroup/test_kmem.c | 382 +++++++ tools/testing/selftests/mincore/.gitignore | 2 tools/testing/selftests/mincore/Makefile | 6 tools/testing/selftests/mincore/mincore_selftest.c | 361 +++++++ 397 files changed, 5547 insertions(+), 4072 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2020-07-24 4:14 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread
From: Andrew Morton @ 2020-07-24 4:14 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: mm-commits, linux-mm

15 patches, based on f37e99aca03f63aa3f2bd13ceaf769455d12c4b0.

Subsystems affected by this patch series: mm/pagemap mm/shmem mm/hotfixes
mm/memcg mm/hugetlb mailmap squashfs scripts io-mapping MAINTAINERS gdb

Subsystem: mm/pagemap

Yang Shi <yang.shi@linux.alibaba.com>:
  mm/memory.c: avoid access flag update TLB flush for retried page fault

"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>:
  mm/mmap.c: close race between munmap() and expand_upwards()/downwards()

Subsystem: mm/shmem

Chengguang Xu <cgxu519@mykernel.net>:
  vfs/xattr: mm/shmem: kernfs: release simple xattr entry in a right way

Subsystem: mm/hotfixes

Tom Rix <trix@redhat.com>:
  mm: initialize return of vm_insert_pages

Bhupesh Sharma <bhsharma@redhat.com>:
  mm/memcontrol: fix OOPS inside mem_cgroup_get_nr_swap_pages()

Subsystem: mm/memcg

Hugh Dickins <hughd@google.com>:
  mm/memcg: fix refcount error while moving and swapping

Muchun Song <songmuchun@bytedance.com>:
  mm: memcg/slab: fix memory leak at non-root kmem_cache destroy

Subsystem: mm/hugetlb

Barry Song <song.bao.hua@hisilicon.com>:
  mm/hugetlb: avoid hardcoding while checking if cma is enabled

"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>:
  khugepaged: fix null-pointer dereference due to race

Subsystem: mailmap

Mike Rapoport <rppt@linux.ibm.com>:
  mailmap: add entry for Mike Rapoport

Subsystem: squashfs

Phillip Lougher <phillip@squashfs.org.uk>:
  squashfs: fix length field overlap check in metadata reading

Subsystem: scripts

Pi-Hsun Shih <pihsun@chromium.org>:
  scripts/decode_stacktrace: strip basepath from all paths

Subsystem: io-mapping

"Michael J. Ruhl" <michael.j.ruhl@intel.com>:
  io-mapping: indicate mapping failure

Subsystem: MAINTAINERS

Andrey Konovalov <andreyknvl@google.com>:
  MAINTAINERS: add KCOV section

Subsystem: gdb

Stefano Garzarella <sgarzare@redhat.com>:
  scripts/gdb: fix lx-symbols 'gdb.error' while loading modules

 .mailmap                     |  3 +++
 MAINTAINERS                  | 11 +++++++++++
 fs/squashfs/block.c          |  2 +-
 include/linux/io-mapping.h   |  5 ++++-
 include/linux/xattr.h        |  3 ++-
 mm/hugetlb.c                 | 15 ++++++++++-----
 mm/khugepaged.c              |  3 +++
 mm/memcontrol.c              | 13 ++++++++++---
 mm/memory.c                  |  9 +++++++--
 mm/mmap.c                    | 16 ++++++++++++++--
 mm/shmem.c                   |  2 +-
 mm/slab_common.c             | 35 ++++++++++++++++++++++++++++-------
 scripts/decode_stacktrace.sh |  4 ++--
 scripts/gdb/linux/symbols.py |  2 +-
 14 files changed, 97 insertions(+), 26 deletions(-)

^ permalink raw reply	[flat|nested] 349+ messages in thread
* incoming
@ 2020-07-03 22:14 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2020-07-03 22:14 UTC (permalink / raw)
To: Linus Torvalds; +Cc: mm-commits, linux-mm

5 patches, based on cdd3bb54332f82295ed90cd0c09c78cd0c0ee822.

Subsystems affected by this patch series: mm/hugetlb samples mm/cma
mm/vmalloc mm/pagealloc

Subsystem: mm/hugetlb

Mike Kravetz <mike.kravetz@oracle.com>:
      mm/hugetlb.c: fix pages per hugetlb calculation

Subsystem: samples

Kees Cook <keescook@chromium.org>:
      samples/vfs: avoid warning in statx override

Subsystem: mm/cma

Barry Song <song.bao.hua@hisilicon.com>:
      mm/cma.c: use exact_nid true to fix possible per-numa cma leak

Subsystem: mm/vmalloc

Christoph Hellwig <hch@lst.de>:
      vmalloc: fix the owner argument for the new __vmalloc_node_range callers

Subsystem: mm/pagealloc

Joel Savitz <jsavitz@redhat.com>:
      mm/page_alloc: fix documentation error

 arch/arm64/kernel/probes/kprobes.c | 2 +-
 arch/x86/hyperv/hv_init.c          | 3 ++-
 kernel/module.c                    | 2 +-
 mm/cma.c                           | 4 ++--
 mm/hugetlb.c                       | 2 +-
 mm/page_alloc.c                    | 2 +-
 samples/vfs/test-statx.c           | 2 ++
 7 files changed, 10 insertions(+), 7 deletions(-)
* incoming
@ 2020-06-26  3:28 Andrew Morton
  2020-06-26  6:51 ` incoming Linus Torvalds
  0 siblings, 1 reply; 349+ messages in thread

From: Andrew Morton @ 2020-06-26 3:28 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits

32 patches, based on 908f7d12d3ba51dfe0449b9723199b423f97ca9a.

Subsystems affected by this patch series: hotfixes mm/pagealloc kexec
ocfs2 lib misc mm/slab mm/slab mm/slub mm/swap mm/pagemap mm/vmalloc
mm/memcg mm/gup mm/thp mm/vmscan x86 mm/memory-hotplug MAINTAINERS

Subsystem: hotfixes

Stafford Horne <shorne@gmail.com>:
      openrisc: fix boot oops when DEBUG_VM is enabled

Michal Hocko <mhocko@suse.com>:
      mm: do_swap_page(): fix up the error code

Subsystem: mm/pagealloc

Vlastimil Babka <vbabka@suse.cz>:
      mm, compaction: make capture control handling safe wrt interrupts

Subsystem: kexec

Lianbo Jiang <lijiang@redhat.com>:
      kexec: do not verify the signature without the lockdown or mandatory signature

Subsystem: ocfs2

Junxiao Bi <junxiao.bi@oracle.com>:
Patch series "ocfs2: fix nfsd over ocfs2 issues", v2:
      ocfs2: avoid inode removal while nfsd is accessing it
      ocfs2: load global_inode_alloc
      ocfs2: fix panic on nfs server over ocfs2
      ocfs2: fix value of OCFS2_INVALID_SLOT

Subsystem: lib

Randy Dunlap <rdunlap@infradead.org>:
      lib: fix test_hmm.c reference after free

Subsystem: misc

Rikard Falkeborn <rikard.falkeborn@gmail.com>:
      linux/bits.h: fix unsigned less than zero warnings

Subsystem: mm/slab

Waiman Long <longman@redhat.com>:
      mm, slab: fix sign conversion problem in memcg_uncharge_slab()

Subsystem: mm/slab

Waiman Long <longman@redhat.com>:
      mm/slab: use memzero_explicit() in kzfree()

Subsystem: mm/slub

Sebastian Andrzej Siewior <bigeasy@linutronix.de>:
      slub: cure list_slab_objects() from double fix

Subsystem: mm/swap

Hugh Dickins <hughd@google.com>:
      mm: fix swap cache node allocation mask

Subsystem: mm/pagemap

Arjun Roy <arjunroy@google.com>:
      mm/memory.c: properly pte_offset_map_lock/unlock in vm_insert_pages()

Christophe Leroy <christophe.leroy@csgroup.eu>:
      mm/debug_vm_pgtable: fix build failure with powerpc 8xx

Stephen Rothwell <sfr@canb.auug.org.au>:
      make asm-generic/cacheflush.h more standalone

Nathan Chancellor <natechancellor@gmail.com>:
      media: omap3isp: remove cacheflush.h

Subsystem: mm/vmalloc

Masanari Iida <standby24x7@gmail.com>:
      mm/vmalloc.c: fix a warning while make xmldocs

Subsystem: mm/memcg

Johannes Weiner <hannes@cmpxchg.org>:
      mm: memcontrol: handle div0 crash race condition in memory.low

Muchun Song <songmuchun@bytedance.com>:
      mm/memcontrol.c: add missed css_put()

Chris Down <chris@chrisdown.name>:
      mm/memcontrol.c: prevent missed memory.low load tears

Subsystem: mm/gup

Souptick Joarder <jrdr.linux@gmail.com>:
      docs: mm/gup: minor documentation update

Subsystem: mm/thp

Yang Shi <yang.shi@linux.alibaba.com>:
      doc: THP CoW fault no longer allocate THP

Subsystem: mm/vmscan

Johannes Weiner <hannes@cmpxchg.org>:
Patch series "fix for "mm: balance LRU lists based on relative thrashing" patchset":
      mm: workingset: age nonresident information alongside anonymous pages

Joonsoo Kim <iamjoonsoo.kim@lge.com>:
      mm/swap: fix for "mm: workingset: age nonresident information alongside anonymous pages"
      mm/memory: fix IO cost for anonymous page

Subsystem: x86

Christoph Hellwig <hch@lst.de>:
Patch series "fix a hyperv W^X violation and remove vmalloc_exec":
      x86/hyperv: allocate the hypercall page with only read and execute bits
      arm64: use PAGE_KERNEL_ROX directly in alloc_insn_page
      mm: remove vmalloc_exec

Subsystem: mm/memory-hotplug

Ben Widawsky <ben.widawsky@intel.com>:
      mm/memory_hotplug.c: fix false softlockup during pfn range removal

Subsystem: MAINTAINERS

Luc Van Oostenryck <luc.vanoostenryck@gmail.com>:
      MAINTAINERS: update info for sparse

 Documentation/admin-guide/cgroup-v2.rst    |  4 +-
 Documentation/admin-guide/mm/transhuge.rst |  3 -
 Documentation/core-api/pin_user_pages.rst  |  2 -
 MAINTAINERS                                |  4 +-
 arch/arm64/kernel/probes/kprobes.c         | 12 +------
 arch/openrisc/kernel/dma.c                 |  5 +++
 arch/x86/hyperv/hv_init.c                  |  4 +-
 arch/x86/include/asm/pgtable_types.h       |  2 +
 drivers/media/platform/omap3isp/isp.c      |  2 -
 drivers/media/platform/omap3isp/ispvideo.c |  1
 fs/ocfs2/dlmglue.c                         | 17 ++++++++++
 fs/ocfs2/ocfs2.h                           |  1
 fs/ocfs2/ocfs2_fs.h                        |  4 +-
 fs/ocfs2/suballoc.c                        |  9 +++--
 include/asm-generic/cacheflush.h           |  5 +++
 include/linux/bits.h                       |  3 +
 include/linux/mmzone.h                     |  4 +-
 include/linux/swap.h                       |  1
 include/linux/vmalloc.h                    |  1
 kernel/kexec_file.c                        | 36 ++++------------------
 kernel/module.c                            |  4 +-
 lib/test_hmm.c                             |  3 -
 mm/compaction.c                            | 17 ++++++++--
 mm/debug_vm_pgtable.c                      |  4 +-
 mm/memcontrol.c                            | 18 ++++++++---
 mm/memory.c                                | 33 +++++++++++++-------
 mm/memory_hotplug.c                        | 13 ++++++--
 mm/nommu.c                                 | 17 ----------
 mm/slab.h                                  |  4 +-
 mm/slab_common.c                           |  2 -
 mm/slub.c                                  | 19 ++---------
 mm/swap.c                                  |  3 -
 mm/swap_state.c                            |  4 +-
 mm/vmalloc.c                               | 21 -------------
 mm/vmscan.c                                |  3 +
 mm/workingset.c                            | 46 +++++++++++++++++------------
 36 files changed, 168 insertions(+), 163 deletions(-)
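One of the mm/slab patches above replaces memset() in kzfree() with memzero_explicit(). The motivation is a classic C pitfall: a memset() of a buffer that is never read again is a dead store, which the optimizer is entitled to delete, leaving secrets behind in freed memory. A minimal user-space sketch of the idea (illustrative only; the kernel's actual memzero_explicit() is built on its own barrier primitives, not this exact asm):

```c
#include <assert.h> /* for the demo checks */
#include <stddef.h>
#include <string.h>

/*
 * Zero a buffer in a way the compiler cannot optimize away.  The empty
 * asm statement with a "memory" clobber tells the compiler the zeroed
 * bytes may still be observed, so the memset() is not a removable
 * dead store even if the buffer is freed right afterwards.
 */
static void memzero_explicit_sketch(void *s, size_t count)
{
	memset(s, 0, count);
	__asm__ __volatile__("" : : "r" (s) : "memory");
}
```

A plain `memset(key, 0, len); free(key);` sequence can legally be compiled down to just the free(); the barrier variant cannot.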
* Re: incoming
  2020-06-26  3:28 incoming Andrew Morton
@ 2020-06-26  6:51 ` Linus Torvalds
  2020-06-26  7:31   ` incoming Linus Torvalds
  2020-06-26 17:39   ` incoming Konstantin Ryabitsev
  0 siblings, 2 replies; 349+ messages in thread

From: Linus Torvalds @ 2020-06-26 6:51 UTC (permalink / raw)
To: Andrew Morton, Konstantin Ryabitsev; +Cc: Linux-MM, mm-commits

On Thu, Jun 25, 2020 at 8:28 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> 32 patches, based on 908f7d12d3ba51dfe0449b9723199b423f97ca9a.

You didn't cc lkml, so now none of the nice 'b4' automation seems to
work for this series..

Yes, this cover-letter went to linux-mm (which is on lore), but the
individual patches didn't.

Konstantin, maybe mm-commits could be on lore too and then they'd have
been caught that way?

             Linus
* Re: incoming
  2020-06-26  6:51 ` incoming Linus Torvalds
@ 2020-06-26  7:31   ` Linus Torvalds
  2020-06-26 17:39   ` incoming Konstantin Ryabitsev
  1 sibling, 0 replies; 349+ messages in thread

From: Linus Torvalds @ 2020-06-26 7:31 UTC (permalink / raw)
To: Andrew Morton, Konstantin Ryabitsev; +Cc: Linux-MM, mm-commits

On Thu, Jun 25, 2020 at 11:51 PM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> You didn't cc lkml, so now none of the nice 'b4' automation seems to
> work for this series..

Note that I've picked them up the old-fashioned way, so don't re-send
them. So more of a note for "please, next time..."

             Linus
* Re: incoming
  2020-06-26  6:51 ` incoming Linus Torvalds
@ 2020-06-26 17:39   ` Konstantin Ryabitsev
  2020-06-26 17:40     ` incoming Konstantin Ryabitsev
  1 sibling, 1 reply; 349+ messages in thread

From: Konstantin Ryabitsev @ 2020-06-26 17:39 UTC (permalink / raw)
To: Linus Torvalds; +Cc: Andrew Morton, Linux-MM, mm-commits

On Thu, Jun 25, 2020 at 11:51:06PM -0700, Linus Torvalds wrote:
> On Thu, Jun 25, 2020 at 8:28 PM Andrew Morton <akpm@linux-foundation.org> wrote:
> >
> > 32 patches, based on 908f7d12d3ba51dfe0449b9723199b423f97ca9a.
>
> You didn't cc lkml, so now none of the nice 'b4' automation seems to
> work for this series..
>
> Yes, this cover-letter went to linux-mm (which is on lore), but the
> individual patches didn't.
>
> Konstantin, maybe mm-commits could be on lore too and then they'd have
> been caught that way?

Yes, I already have a request from Kees for linux-mm addition, so that
should show up in archives before long.

-K
* Re: incoming
  2020-06-26 17:39   ` incoming Konstantin Ryabitsev
@ 2020-06-26 17:40     ` Konstantin Ryabitsev
  0 siblings, 0 replies; 349+ messages in thread

From: Konstantin Ryabitsev @ 2020-06-26 17:40 UTC (permalink / raw)
To: Linus Torvalds; +Cc: Andrew Morton, Linux-MM, mm-commits

On Fri, 26 Jun 2020 at 13:39, Konstantin Ryabitsev
<konstantin@linuxfoundation.org> wrote:
> > Konstantin, maybe mm-commits could be on lore too and then they'd have
> > been caught that way?
>
> Yes, I already have a request from Kees for linux-mm addition, so that
> should show up in archives before long.

correction: mm-commits, that is

-K
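The 'b4' automation discussed in this thread pulls an entire patch series out of the lore.kernel.org archives by the cover letter's Message-ID, which is why a series whose individual patches never reached an archived list cannot be fetched. A rough sketch of the usual workflow (the message-id below is a placeholder, not this series' real one; assumes the b4 tool is installed):

```shell
# Placeholder message-id; the real one comes from the cover letter's headers.
msgid="example-cover-letter@localhost"

# With b4: fetch the whole thread from lore and emit an applicable mbox, e.g.
#   b4 am "$msgid" -o .
#   git am ./*.mbx
#
# Without b4, lore serves the raw thread mbox directly:
url="https://lore.kernel.org/linux-mm/${msgid}/t.mbox.gz"
echo "$url"
```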
* incoming
@ 2020-06-12  0:30 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2020-06-12 0:30 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits

A few fixes and stragglers.

5 patches, based on 623f6dc593eaf98b91916836785278eddddaacf8.

Subsystems affected by this patch series: mm/memory-failure ocfs2
lib/lzo misc

Subsystem: mm/memory-failure

Naoya Horiguchi <nao.horiguchi@gmail.com>:
Patch series "hwpoison: fixes signaling on memory error":
      mm/memory-failure: prioritize prctl(PR_MCE_KILL) over vm.memory_failure_early_kill
      mm/memory-failure: send SIGBUS(BUS_MCEERR_AR) only to current thread

Subsystem: ocfs2

Tom Seewald <tseewald@gmail.com>:
      ocfs2: fix build failure when TCP/IP is disabled

Subsystem: lib/lzo

Dave Rodgman <dave.rodgman@arm.com>:
      lib/lzo: fix ambiguous encoding bug in lzo-rle

Subsystem: misc

Christoph Hellwig <hch@lst.de>:
      amdgpu: a NULL ->mm does not mean a thread is a kthread

 Documentation/lzo.txt                      |  8 ++++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h |  2 -
 fs/ocfs2/Kconfig                           |  2 -
 lib/lzo/lzo1x_compress.c                   | 13 ++++++++
 mm/memory-failure.c                        | 43 +++++++++++++++++------
 5 files changed, 47 insertions(+), 21 deletions(-)
* incoming
@ 2020-06-11  1:40 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2020-06-11 1:40 UTC (permalink / raw)
To: Linus Torvalds; +Cc: mm-commits, linux-mm

- various hotfixes and minor things

- hch's use_mm/unuse_mm cleanups

- new syscall process_madvise(): perform madvise() on a process other
  than self

25 patches, based on 6f630784cc0d92fb58ea326e2bc01aa056279ecb.

Subsystems affected by this patch series: mm/hugetlb scripts kcov lib
nilfs checkpatch lib mm/debug ocfs2 lib misc mm/madvise

Subsystem: mm/hugetlb

Dan Carpenter <dan.carpenter@oracle.com>:
      khugepaged: selftests: fix timeout condition in wait_for_scan()

Subsystem: scripts

SeongJae Park <sjpark@amazon.de>:
      scripts/spelling: add a few more typos

Subsystem: kcov

Andrey Konovalov <andreyknvl@google.com>:
      kcov: check kcov_softirq in kcov_remote_stop()

Subsystem: lib

Joe Perches <joe@perches.com>:
      lib/lz4/lz4_decompress.c: document deliberate use of `&'

Subsystem: nilfs

Ryusuke Konishi <konishi.ryusuke@gmail.com>:
      nilfs2: fix null pointer dereference at nilfs_segctor_do_construct()

Subsystem: checkpatch

Tim Froidcoeur <tim.froidcoeur@tessares.net>:
      checkpatch: correct check for kernel parameters doc

Subsystem: lib

Alexander Gordeev <agordeev@linux.ibm.com>:
      lib: fix bitmap_parse() on 64-bit big endian archs

Subsystem: mm/debug

"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>:
      mm/debug_vm_pgtable: fix kernel crash by checking for THP support

Subsystem: ocfs2

Keyur Patel <iamkeyur96@gmail.com>:
      ocfs2: fix spelling mistake and grammar

Ben Widawsky <ben.widawsky@intel.com>:
      mm: add comments on pglist_data zones

Subsystem: lib

Wei Yang <richard.weiyang@gmail.com>:
      lib: test get_count_order/long in test_bitops.c

Subsystem: misc

Walter Wu <walter-zh.wu@mediatek.com>:
      stacktrace: cleanup inconsistent variable type

Christoph Hellwig <hch@lst.de>:
Patch series "improve use_mm / unuse_mm", v2:
      kernel: move use_mm/unuse_mm to kthread.c
      kernel: better document the use_mm/unuse_mm API contract
      kernel: set USER_DS in kthread_use_mm

Subsystem: mm/madvise

Minchan Kim <minchan@kernel.org>:
Patch series "introduce memory hinting API for external process", v7:
      mm/madvise: pass task and mm to do_madvise
      mm/madvise: introduce process_madvise() syscall: an external memory hinting API
      mm/madvise: check fatal signal pending of target process
      pid: move pidfd_get_pid() to pid.c
      mm/madvise: support both pid and pidfd for process_madvise

Oleksandr Natalenko <oleksandr@redhat.com>:
      mm/madvise: allow KSM hints for remote API

Minchan Kim <minchan@kernel.org>:
      mm: support vector address ranges for process_madvise
      mm: use only pidfd for process_madvise syscall

YueHaibing <yuehaibing@huawei.com>:
      mm/madvise.c: remove duplicated include

 arch/alpha/kernel/syscalls/syscall.tbl              |   1
 arch/arm/tools/syscall.tbl                          |   1
 arch/arm64/include/asm/unistd.h                     |   2
 arch/arm64/include/asm/unistd32.h                   |   4
 arch/ia64/kernel/syscalls/syscall.tbl               |   1
 arch/m68k/kernel/syscalls/syscall.tbl               |   1
 arch/microblaze/kernel/syscalls/syscall.tbl         |   1
 arch/mips/kernel/syscalls/syscall_n32.tbl           |   3
 arch/mips/kernel/syscalls/syscall_n64.tbl           |   1
 arch/mips/kernel/syscalls/syscall_o32.tbl           |   3
 arch/parisc/kernel/syscalls/syscall.tbl             |   3
 arch/powerpc/kernel/syscalls/syscall.tbl            |   3
 arch/powerpc/platforms/powernv/vas-fault.c          |   4
 arch/s390/kernel/syscalls/syscall.tbl               |   3
 arch/sh/kernel/syscalls/syscall.tbl                 |   1
 arch/sparc/kernel/syscalls/syscall.tbl              |   3
 arch/x86/entry/syscalls/syscall_32.tbl              |   3
 arch/x86/entry/syscalls/syscall_64.tbl              |   5
 arch/xtensa/kernel/syscalls/syscall.tbl             |   1
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h          |   5
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_arcturus.c |   1
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10.c  |   1
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c   |   2
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v8.c   |   2
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c   |   2
 drivers/gpu/drm/i915/gvt/kvmgt.c                    |   2
 drivers/usb/gadget/function/f_fs.c                  |  10
 drivers/usb/gadget/legacy/inode.c                   |   6
 drivers/vfio/vfio_iommu_type1.c                     |   6
 drivers/vhost/vhost.c                               |   8
 fs/aio.c                                            |   1
 fs/io-wq.c                                          |  15 -
 fs/io_uring.c                                       |  11
 fs/nilfs2/segment.c                                 |   2
 fs/ocfs2/mmap.c                                     |   2
 include/linux/compat.h                              |  10
 include/linux/kthread.h                             |   9
 include/linux/mm.h                                  |   3
 include/linux/mmu_context.h                         |   5
 include/linux/mmzone.h                              |  14
 include/linux/pid.h                                 |   1
 include/linux/stacktrace.h                          |   2
 include/linux/syscalls.h                            |  16 -
 include/uapi/asm-generic/unistd.h                   |   7
 kernel/exit.c                                       |  17 -
 kernel/kcov.c                                       |  26 +
 kernel/kthread.c                                    |  95 +++++-
 kernel/pid.c                                        |  17 +
 kernel/sys_ni.c                                     |   2
 lib/Kconfig.debug                                   |  10
 lib/bitmap.c                                        |   9
 lib/lz4/lz4_decompress.c                            |   3
 lib/test_bitops.c                                   |  53 +++
 mm/Makefile                                         |   2
 mm/debug_vm_pgtable.c                               |   6
 mm/madvise.c                                        | 295 ++++++++++++++------
 mm/mmu_context.c                                    |  64 ----
 mm/oom_kill.c                                       |   6
 mm/vmacache.c                                       |   4
 scripts/checkpatch.pl                               |   4
 scripts/spelling.txt                                |   9
 tools/testing/selftests/vm/khugepaged.c             |   2
 62 files changed, 526 insertions(+), 285 deletions(-)
* incoming @ 2020-06-09 4:29 Andrew Morton 2020-06-09 16:58 ` incoming Linus Torvalds 0 siblings, 1 reply; 349+ messages in thread From: Andrew Morton @ 2020-06-09 4:29 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm - a kernel-wide sweep of show_stack() - pagetable cleanups - abstract out accesses to mmap_sem - prep for mmap_sem scalability work - hch's user acess work 93 patches, based on abfbb29297c27e3f101f348dc9e467b0fe70f919: Subsystems affected by this patch series: debug mm/pagemap mm/maccess mm/documentation Subsystem: debug Dmitry Safonov <dima@arista.com>: Patch series "Add log level to show_stack()", v3: kallsyms/printk: add loglvl to print_ip_sym() alpha: add show_stack_loglvl() arc: add show_stack_loglvl() arm/asm: add loglvl to c_backtrace() arm: add loglvl to unwind_backtrace() arm: add loglvl to dump_backtrace() arm: wire up dump_backtrace_{entry,stm} arm: add show_stack_loglvl() arm64: add loglvl to dump_backtrace() arm64: add show_stack_loglvl() c6x: add show_stack_loglvl() csky: add show_stack_loglvl() h8300: add show_stack_loglvl() hexagon: add show_stack_loglvl() ia64: pass log level as arg into ia64_do_show_stack() ia64: add show_stack_loglvl() m68k: add show_stack_loglvl() microblaze: add loglvl to microblaze_unwind_inner() microblaze: add loglvl to microblaze_unwind() microblaze: add show_stack_loglvl() mips: add show_stack_loglvl() nds32: add show_stack_loglvl() nios2: add show_stack_loglvl() openrisc: add show_stack_loglvl() parisc: add show_stack_loglvl() powerpc: add show_stack_loglvl() riscv: add show_stack_loglvl() s390: add show_stack_loglvl() sh: add loglvl to dump_mem() sh: remove needless printk() sh: add loglvl to printk_address() sh: add loglvl to show_trace() sh: add show_stack_loglvl() sparc: add show_stack_loglvl() um/sysrq: remove needless variable sp um: add show_stack_loglvl() unicore32: remove unused pmode argument in c_backtrace() unicore32: add loglvl to c_backtrace() unicore32: add 
show_stack_loglvl() x86: add missing const qualifiers for log_lvl x86: add show_stack_loglvl() xtensa: add loglvl to show_trace() xtensa: add show_stack_loglvl() sysrq: use show_stack_loglvl() x86/amd_gart: print stacktrace for a leak with KERN_ERR power: use show_stack_loglvl() kdb: don't play with console_loglevel sched: print stack trace with KERN_INFO kernel: use show_stack_loglvl() kernel: rename show_stack_loglvl() => show_stack() Subsystem: mm/pagemap Mike Rapoport <rppt@linux.ibm.com>: Patch series "mm: consolidate definitions of page table accessors", v2: mm: don't include asm/pgtable.h if linux/mm.h is already included mm: introduce include/linux/pgtable.h mm: reorder includes after introduction of linux/pgtable.h csky: replace definitions of __pXd_offset() with pXd_index() m68k/mm/motorola: move comment about page table allocation funcitons m68k/mm: move {cache,nocahe}_page() definitions close to their user x86/mm: simplify init_trampoline() and surrounding logic mm: pgtable: add shortcuts for accessing kernel PMD and PTE mm: consolidate pte_index() and pte_offset_*() definitions Michel Lespinasse <walken@google.com>: mmap locking API: initial implementation as rwsem wrappers MMU notifier: use the new mmap locking API DMA reservations: use the new mmap locking API mmap locking API: use coccinelle to convert mmap_sem rwsem call sites mmap locking API: convert mmap_sem call sites missed by coccinelle mmap locking API: convert nested write lock sites mmap locking API: add mmap_read_trylock_non_owner() mmap locking API: add MMAP_LOCK_INITIALIZER mmap locking API: add mmap_assert_locked() and mmap_assert_write_locked() mmap locking API: rename mmap_sem to mmap_lock mmap locking API: convert mmap_sem API comments mmap locking API: convert mmap_sem comments Subsystem: mm/maccess Christoph Hellwig <hch@lst.de>: Patch series "clean up and streamline probe_kernel_* and friends", v4: maccess: unexport probe_kernel_write() maccess: remove various unused weak aliases 
maccess: remove duplicate kerneldoc comments maccess: clarify kerneldoc comments maccess: update the top of file comment maccess: rename strncpy_from_unsafe_user to strncpy_from_user_nofault maccess: rename strncpy_from_unsafe_strict to strncpy_from_kernel_nofault maccess: rename strnlen_unsafe_user to strnlen_user_nofault maccess: remove probe_read_common and probe_write_common maccess: unify the probe kernel arch hooks bpf: factor out a bpf_trace_copy_string helper bpf: handle the compat string in bpf_trace_copy_string better Andrew Morton <akpm@linux-foundation.org>: bpf:bpf_seq_printf(): handle potentially unsafe format string better Christoph Hellwig <hch@lst.de>: bpf: rework the compat kernel probe handling tracing/kprobes: handle mixed kernel/userspace probes better maccess: remove strncpy_from_unsafe maccess: always use strict semantics for probe_kernel_read maccess: move user access routines together maccess: allow architectures to provide kernel probing directly x86: use non-set_fs based maccess routines maccess: return -ERANGE when probe_kernel_read() fails Subsystem: mm/documentation Luis Chamberlain <mcgrof@kernel.org>: include/linux/cache.h: expand documentation over __read_mostly Documentation/admin-guide/mm/numa_memory_policy.rst | 10 Documentation/admin-guide/mm/userfaultfd.rst | 2 Documentation/filesystems/locking.rst | 2 Documentation/vm/hmm.rst | 6 Documentation/vm/transhuge.rst | 4 arch/alpha/boot/bootp.c | 1 arch/alpha/boot/bootpz.c | 1 arch/alpha/boot/main.c | 1 arch/alpha/include/asm/io.h | 1 arch/alpha/include/asm/pgtable.h | 16 arch/alpha/kernel/process.c | 1 arch/alpha/kernel/proto.h | 4 arch/alpha/kernel/ptrace.c | 1 arch/alpha/kernel/setup.c | 1 arch/alpha/kernel/smp.c | 1 arch/alpha/kernel/sys_alcor.c | 1 arch/alpha/kernel/sys_cabriolet.c | 1 arch/alpha/kernel/sys_dp264.c | 1 arch/alpha/kernel/sys_eb64p.c | 1 arch/alpha/kernel/sys_eiger.c | 1 arch/alpha/kernel/sys_jensen.c | 1 arch/alpha/kernel/sys_marvel.c | 1 
arch/alpha/kernel/sys_miata.c | 1 arch/alpha/kernel/sys_mikasa.c | 1 arch/alpha/kernel/sys_nautilus.c | 1 arch/alpha/kernel/sys_noritake.c | 1 arch/alpha/kernel/sys_rawhide.c | 1 arch/alpha/kernel/sys_ruffian.c | 1 arch/alpha/kernel/sys_rx164.c | 1 arch/alpha/kernel/sys_sable.c | 1 arch/alpha/kernel/sys_sio.c | 1 arch/alpha/kernel/sys_sx164.c | 1 arch/alpha/kernel/sys_takara.c | 1 arch/alpha/kernel/sys_titan.c | 1 arch/alpha/kernel/sys_wildfire.c | 1 arch/alpha/kernel/traps.c | 40 arch/alpha/mm/fault.c | 12 arch/alpha/mm/init.c | 1 arch/arc/include/asm/bug.h | 3 arch/arc/include/asm/pgtable.h | 24 arch/arc/kernel/process.c | 4 arch/arc/kernel/stacktrace.c | 29 arch/arc/kernel/troubleshoot.c | 6 arch/arc/mm/fault.c | 6 arch/arc/mm/highmem.c | 14 arch/arc/mm/tlbex.S | 4 arch/arm/include/asm/bug.h | 3 arch/arm/include/asm/efi.h | 3 arch/arm/include/asm/fixmap.h | 4 arch/arm/include/asm/idmap.h | 2 arch/arm/include/asm/pgtable-2level.h | 1 arch/arm/include/asm/pgtable-3level.h | 7 arch/arm/include/asm/pgtable-nommu.h | 3 arch/arm/include/asm/pgtable.h | 25 arch/arm/include/asm/traps.h | 3 arch/arm/include/asm/unwind.h | 3 arch/arm/kernel/head.S | 4 arch/arm/kernel/machine_kexec.c | 1 arch/arm/kernel/module.c | 1 arch/arm/kernel/process.c | 4 arch/arm/kernel/ptrace.c | 1 arch/arm/kernel/smp.c | 1 arch/arm/kernel/suspend.c | 4 arch/arm/kernel/swp_emulate.c | 4 arch/arm/kernel/traps.c | 61 arch/arm/kernel/unwind.c | 7 arch/arm/kernel/vdso.c | 2 arch/arm/kernel/vmlinux.lds.S | 4 arch/arm/lib/backtrace-clang.S | 9 arch/arm/lib/backtrace.S | 14 arch/arm/lib/uaccess_with_memcpy.c | 16 arch/arm/mach-ebsa110/core.c | 1 arch/arm/mach-footbridge/common.c | 1 arch/arm/mach-imx/mm-imx21.c | 1 arch/arm/mach-imx/mm-imx27.c | 1 arch/arm/mach-imx/mm-imx3.c | 1 arch/arm/mach-integrator/core.c | 4 arch/arm/mach-iop32x/i2c.c | 1 arch/arm/mach-iop32x/iq31244.c | 1 arch/arm/mach-iop32x/iq80321.c | 1 arch/arm/mach-iop32x/n2100.c | 1 arch/arm/mach-ixp4xx/common.c | 1 
arch/arm/mach-keystone/platsmp.c | 4 arch/arm/mach-sa1100/assabet.c | 3 arch/arm/mach-sa1100/hackkit.c | 4 arch/arm/mach-tegra/iomap.h | 2 arch/arm/mach-zynq/common.c | 4 arch/arm/mm/copypage-v4mc.c | 1 arch/arm/mm/copypage-v6.c | 1 arch/arm/mm/copypage-xscale.c | 1 arch/arm/mm/dump.c | 1 arch/arm/mm/fault-armv.c | 1 arch/arm/mm/fault.c | 9 arch/arm/mm/highmem.c | 4 arch/arm/mm/idmap.c | 4 arch/arm/mm/ioremap.c | 31 arch/arm/mm/mm.h | 8 arch/arm/mm/mmu.c | 7 arch/arm/mm/pageattr.c | 1 arch/arm/mm/proc-arm1020.S | 4 arch/arm/mm/proc-arm1020e.S | 4 arch/arm/mm/proc-arm1022.S | 4 arch/arm/mm/proc-arm1026.S | 4 arch/arm/mm/proc-arm720.S | 4 arch/arm/mm/proc-arm740.S | 4 arch/arm/mm/proc-arm7tdmi.S | 4 arch/arm/mm/proc-arm920.S | 4 arch/arm/mm/proc-arm922.S | 4 arch/arm/mm/proc-arm925.S | 4 arch/arm/mm/proc-arm926.S | 4 arch/arm/mm/proc-arm940.S | 4 arch/arm/mm/proc-arm946.S | 4 arch/arm/mm/proc-arm9tdmi.S | 4 arch/arm/mm/proc-fa526.S | 4 arch/arm/mm/proc-feroceon.S | 4 arch/arm/mm/proc-mohawk.S | 4 arch/arm/mm/proc-sa110.S | 4 arch/arm/mm/proc-sa1100.S | 4 arch/arm/mm/proc-v6.S | 4 arch/arm/mm/proc-v7.S | 4 arch/arm/mm/proc-xsc3.S | 4 arch/arm/mm/proc-xscale.S | 4 arch/arm/mm/pv-fixup-asm.S | 4 arch/arm64/include/asm/io.h | 4 arch/arm64/include/asm/kernel-pgtable.h | 2 arch/arm64/include/asm/kvm_mmu.h | 4 arch/arm64/include/asm/mmu_context.h | 4 arch/arm64/include/asm/pgtable.h | 40 arch/arm64/include/asm/stacktrace.h | 3 arch/arm64/include/asm/stage2_pgtable.h | 2 arch/arm64/include/asm/vmap_stack.h | 4 arch/arm64/kernel/acpi.c | 4 arch/arm64/kernel/head.S | 4 arch/arm64/kernel/hibernate.c | 5 arch/arm64/kernel/kaslr.c | 4 arch/arm64/kernel/process.c | 2 arch/arm64/kernel/ptrace.c | 1 arch/arm64/kernel/smp.c | 1 arch/arm64/kernel/suspend.c | 4 arch/arm64/kernel/traps.c | 37 arch/arm64/kernel/vdso.c | 8 arch/arm64/kernel/vmlinux.lds.S | 3 arch/arm64/kvm/mmu.c | 14 arch/arm64/mm/dump.c | 1 arch/arm64/mm/fault.c | 9 arch/arm64/mm/kasan_init.c | 3 arch/arm64/mm/mmu.c | 8 
arch/arm64/mm/pageattr.c | 1 arch/arm64/mm/proc.S | 4 arch/c6x/include/asm/pgtable.h | 3 arch/c6x/kernel/traps.c | 28 arch/csky/include/asm/io.h | 2 arch/csky/include/asm/pgtable.h | 37 arch/csky/kernel/module.c | 1 arch/csky/kernel/ptrace.c | 5 arch/csky/kernel/stacktrace.c | 20 arch/csky/kernel/vdso.c | 4 arch/csky/mm/fault.c | 10 arch/csky/mm/highmem.c | 2 arch/csky/mm/init.c | 7 arch/csky/mm/tlb.c | 1 arch/h8300/include/asm/pgtable.h | 1 arch/h8300/kernel/process.c | 1 arch/h8300/kernel/setup.c | 1 arch/h8300/kernel/signal.c | 1 arch/h8300/kernel/traps.c | 26 arch/h8300/mm/fault.c | 1 arch/h8300/mm/init.c | 1 arch/h8300/mm/memory.c | 1 arch/hexagon/include/asm/fixmap.h | 4 arch/hexagon/include/asm/pgtable.h | 55 arch/hexagon/kernel/traps.c | 39 arch/hexagon/kernel/vdso.c | 4 arch/hexagon/mm/uaccess.c | 2 arch/hexagon/mm/vm_fault.c | 9 arch/ia64/include/asm/pgtable.h | 34 arch/ia64/include/asm/ptrace.h | 1 arch/ia64/include/asm/uaccess.h | 2 arch/ia64/kernel/efi.c | 1 arch/ia64/kernel/entry.S | 4 arch/ia64/kernel/head.S | 5 arch/ia64/kernel/irq_ia64.c | 4 arch/ia64/kernel/ivt.S | 4 arch/ia64/kernel/kprobes.c | 4 arch/ia64/kernel/mca.c | 2 arch/ia64/kernel/mca_asm.S | 4 arch/ia64/kernel/perfmon.c | 8 arch/ia64/kernel/process.c | 37 arch/ia64/kernel/ptrace.c | 1 arch/ia64/kernel/relocate_kernel.S | 6 arch/ia64/kernel/setup.c | 4 arch/ia64/kernel/smp.c | 1 arch/ia64/kernel/smpboot.c | 1 arch/ia64/kernel/uncached.c | 4 arch/ia64/kernel/vmlinux.lds.S | 4 arch/ia64/mm/contig.c | 1 arch/ia64/mm/fault.c | 17 arch/ia64/mm/init.c | 12 arch/m68k/68000/m68EZ328.c | 2 arch/m68k/68000/m68VZ328.c | 4 arch/m68k/68000/timers.c | 1 arch/m68k/amiga/config.c | 1 arch/m68k/apollo/config.c | 1 arch/m68k/atari/atasound.c | 1 arch/m68k/atari/stram.c | 1 arch/m68k/bvme6000/config.c | 1 arch/m68k/include/asm/mcf_pgtable.h | 63 arch/m68k/include/asm/motorola_pgalloc.h | 8 arch/m68k/include/asm/motorola_pgtable.h | 84 - arch/m68k/include/asm/pgtable_mm.h | 1 
arch/m68k/include/asm/pgtable_no.h | 2 arch/m68k/include/asm/sun3_pgtable.h | 24 arch/m68k/include/asm/sun3xflop.h | 4 arch/m68k/kernel/head.S | 4 arch/m68k/kernel/process.c | 1 arch/m68k/kernel/ptrace.c | 1 arch/m68k/kernel/setup_no.c | 1 arch/m68k/kernel/signal.c | 1 arch/m68k/kernel/sys_m68k.c | 14 arch/m68k/kernel/traps.c | 27 arch/m68k/kernel/uboot.c | 1 arch/m68k/mac/config.c | 1 arch/m68k/mm/fault.c | 10 arch/m68k/mm/init.c | 2 arch/m68k/mm/mcfmmu.c | 1 arch/m68k/mm/motorola.c | 65 arch/m68k/mm/sun3kmap.c | 1 arch/m68k/mm/sun3mmu.c | 1 arch/m68k/mvme147/config.c | 1 arch/m68k/mvme16x/config.c | 1 arch/m68k/q40/config.c | 1 arch/m68k/sun3/config.c | 1 arch/m68k/sun3/dvma.c | 1 arch/m68k/sun3/mmu_emu.c | 1 arch/m68k/sun3/sun3dvma.c | 1 arch/m68k/sun3x/dvma.c | 1 arch/m68k/sun3x/prom.c | 1 arch/microblaze/include/asm/pgalloc.h | 4 arch/microblaze/include/asm/pgtable.h | 23 arch/microblaze/include/asm/uaccess.h | 2 arch/microblaze/include/asm/unwind.h | 3 arch/microblaze/kernel/hw_exception_handler.S | 4 arch/microblaze/kernel/module.c | 4 arch/microblaze/kernel/setup.c | 4 arch/microblaze/kernel/signal.c | 9 arch/microblaze/kernel/stacktrace.c | 4 arch/microblaze/kernel/traps.c | 28 arch/microblaze/kernel/unwind.c | 46 arch/microblaze/mm/fault.c | 17 arch/microblaze/mm/init.c | 9 arch/microblaze/mm/pgtable.c | 4 arch/mips/fw/arc/memory.c | 1 arch/mips/include/asm/fixmap.h | 3 arch/mips/include/asm/mach-generic/floppy.h | 1 arch/mips/include/asm/mach-jazz/floppy.h | 1 arch/mips/include/asm/pgtable-32.h | 22 arch/mips/include/asm/pgtable-64.h | 32 arch/mips/include/asm/pgtable.h | 2 arch/mips/jazz/irq.c | 4 arch/mips/jazz/jazzdma.c | 1 arch/mips/jazz/setup.c | 4 arch/mips/kernel/module.c | 1 arch/mips/kernel/process.c | 1 arch/mips/kernel/ptrace.c | 1 arch/mips/kernel/ptrace32.c | 1 arch/mips/kernel/smp-bmips.c | 1 arch/mips/kernel/traps.c | 58 arch/mips/kernel/vdso.c | 4 arch/mips/kvm/mips.c | 4 arch/mips/kvm/mmu.c | 20 arch/mips/kvm/tlb.c | 1 
arch/mips/kvm/trap_emul.c | 2 arch/mips/lib/dump_tlb.c | 1 arch/mips/lib/r3k_dump_tlb.c | 1 arch/mips/mm/c-octeon.c | 1 arch/mips/mm/c-r3k.c | 11 arch/mips/mm/c-r4k.c | 11 arch/mips/mm/c-tx39.c | 11 arch/mips/mm/fault.c | 12 arch/mips/mm/highmem.c | 2 arch/mips/mm/init.c | 1 arch/mips/mm/page.c | 1 arch/mips/mm/pgtable-32.c | 1 arch/mips/mm/pgtable-64.c | 1 arch/mips/mm/sc-ip22.c | 1 arch/mips/mm/sc-mips.c | 1 arch/mips/mm/sc-r5k.c | 1 arch/mips/mm/tlb-r3k.c | 1 arch/mips/mm/tlb-r4k.c | 1 arch/mips/mm/tlbex.c | 4 arch/mips/sgi-ip27/ip27-init.c | 1 arch/mips/sgi-ip27/ip27-timer.c | 1 arch/mips/sgi-ip32/ip32-memory.c | 1 arch/nds32/include/asm/highmem.h | 3 arch/nds32/include/asm/pgtable.h | 22 arch/nds32/kernel/head.S | 4 arch/nds32/kernel/module.c | 2 arch/nds32/kernel/traps.c | 33 arch/nds32/kernel/vdso.c | 6 arch/nds32/mm/fault.c | 17 arch/nds32/mm/init.c | 13 arch/nds32/mm/proc.c | 7 arch/nios2/include/asm/pgtable.h | 24 arch/nios2/kernel/module.c | 1 arch/nios2/kernel/nios2_ksyms.c | 4 arch/nios2/kernel/traps.c | 35 arch/nios2/mm/fault.c | 14 arch/nios2/mm/init.c | 5 arch/nios2/mm/pgtable.c | 1 arch/nios2/mm/tlb.c | 1 arch/openrisc/include/asm/io.h | 3 arch/openrisc/include/asm/pgtable.h | 33 arch/openrisc/include/asm/tlbflush.h | 1 arch/openrisc/kernel/asm-offsets.c | 1 arch/openrisc/kernel/entry.S | 4 arch/openrisc/kernel/head.S | 4 arch/openrisc/kernel/or32_ksyms.c | 4 arch/openrisc/kernel/process.c | 1 arch/openrisc/kernel/ptrace.c | 1 arch/openrisc/kernel/setup.c | 1 arch/openrisc/kernel/traps.c | 27 arch/openrisc/mm/fault.c | 12 arch/openrisc/mm/init.c | 1 arch/openrisc/mm/ioremap.c | 4 arch/openrisc/mm/tlb.c | 1 arch/parisc/include/asm/io.h | 2 arch/parisc/include/asm/mmu_context.h | 1 arch/parisc/include/asm/pgtable.h | 33 arch/parisc/kernel/asm-offsets.c | 4 arch/parisc/kernel/entry.S | 4 arch/parisc/kernel/head.S | 4 arch/parisc/kernel/module.c | 1 arch/parisc/kernel/pacache.S | 4 arch/parisc/kernel/pci-dma.c | 2 arch/parisc/kernel/pdt.c | 4 
arch/parisc/kernel/ptrace.c | 1 arch/parisc/kernel/smp.c | 1 arch/parisc/kernel/traps.c | 42 arch/parisc/lib/memcpy.c | 14 arch/parisc/mm/fault.c | 10 arch/parisc/mm/fixmap.c | 6 arch/parisc/mm/init.c | 1 arch/powerpc/include/asm/book3s/32/pgtable.h | 20 arch/powerpc/include/asm/book3s/64/pgtable.h | 43 arch/powerpc/include/asm/fixmap.h | 4 arch/powerpc/include/asm/io.h | 1 arch/powerpc/include/asm/kup.h | 2 arch/powerpc/include/asm/nohash/32/pgtable.h | 17 arch/powerpc/include/asm/nohash/64/pgtable-4k.h | 4 arch/powerpc/include/asm/nohash/64/pgtable.h | 22 arch/powerpc/include/asm/nohash/pgtable.h | 2 arch/powerpc/include/asm/pgtable.h | 28 arch/powerpc/include/asm/pkeys.h | 2 arch/powerpc/include/asm/tlb.h | 2 arch/powerpc/kernel/asm-offsets.c | 1 arch/powerpc/kernel/btext.c | 4 arch/powerpc/kernel/fpu.S | 3 arch/powerpc/kernel/head_32.S | 4 arch/powerpc/kernel/head_40x.S | 4 arch/powerpc/kernel/head_44x.S | 4 arch/powerpc/kernel/head_8xx.S | 4 arch/powerpc/kernel/head_fsl_booke.S | 4 arch/powerpc/kernel/io-workarounds.c | 4 arch/powerpc/kernel/irq.c | 4 arch/powerpc/kernel/mce_power.c | 4 arch/powerpc/kernel/paca.c | 4 arch/powerpc/kernel/process.c | 30 arch/powerpc/kernel/prom.c | 4 arch/powerpc/kernel/prom_init.c | 4 arch/powerpc/kernel/rtas_pci.c | 4 arch/powerpc/kernel/setup-common.c | 4 arch/powerpc/kernel/setup_32.c | 4 arch/powerpc/kernel/setup_64.c | 4 arch/powerpc/kernel/signal_32.c | 1 arch/powerpc/kernel/signal_64.c | 1 arch/powerpc/kernel/smp.c | 4 arch/powerpc/kernel/stacktrace.c | 2 arch/powerpc/kernel/traps.c | 1 arch/powerpc/kernel/vdso.c | 7 arch/powerpc/kvm/book3s_64_mmu_radix.c | 4 arch/powerpc/kvm/book3s_hv.c | 6 arch/powerpc/kvm/book3s_hv_nested.c | 4 arch/powerpc/kvm/book3s_hv_rm_xics.c | 4 arch/powerpc/kvm/book3s_hv_rm_xive.c | 4 arch/powerpc/kvm/book3s_hv_uvmem.c | 18 arch/powerpc/kvm/e500_mmu_host.c | 4 arch/powerpc/kvm/fpu.S | 4 arch/powerpc/lib/code-patching.c | 1 arch/powerpc/mm/book3s32/hash_low.S | 4 arch/powerpc/mm/book3s32/mmu.c | 
2 arch/powerpc/mm/book3s32/tlb.c | 6 arch/powerpc/mm/book3s64/hash_hugetlbpage.c | 1 arch/powerpc/mm/book3s64/hash_native.c | 4 arch/powerpc/mm/book3s64/hash_pgtable.c | 5 arch/powerpc/mm/book3s64/hash_utils.c | 4 arch/powerpc/mm/book3s64/iommu_api.c | 4 arch/powerpc/mm/book3s64/radix_hugetlbpage.c | 1 arch/powerpc/mm/book3s64/radix_pgtable.c | 1 arch/powerpc/mm/book3s64/slb.c | 4 arch/powerpc/mm/book3s64/subpage_prot.c | 16 arch/powerpc/mm/copro_fault.c | 4 arch/powerpc/mm/fault.c | 23 arch/powerpc/mm/hugetlbpage.c | 1 arch/powerpc/mm/init-common.c | 4 arch/powerpc/mm/init_32.c | 1 arch/powerpc/mm/init_64.c | 1 arch/powerpc/mm/kasan/8xx.c | 4 arch/powerpc/mm/kasan/book3s_32.c | 2 arch/powerpc/mm/kasan/kasan_init_32.c | 8 arch/powerpc/mm/mem.c | 1 arch/powerpc/mm/nohash/40x.c | 5 arch/powerpc/mm/nohash/8xx.c | 2 arch/powerpc/mm/nohash/fsl_booke.c | 1 arch/powerpc/mm/nohash/tlb_low_64e.S | 4 arch/powerpc/mm/pgtable.c | 2 arch/powerpc/mm/pgtable_32.c | 5 arch/powerpc/mm/pgtable_64.c | 1 arch/powerpc/mm/ptdump/8xx.c | 2 arch/powerpc/mm/ptdump/bats.c | 4 arch/powerpc/mm/ptdump/book3s64.c | 2 arch/powerpc/mm/ptdump/hashpagetable.c | 1 arch/powerpc/mm/ptdump/ptdump.c | 1 arch/powerpc/mm/ptdump/shared.c | 2 arch/powerpc/oprofile/cell/spu_task_sync.c | 6 arch/powerpc/perf/callchain.c | 1 arch/powerpc/perf/callchain_32.c | 1 arch/powerpc/perf/callchain_64.c | 1 arch/powerpc/platforms/85xx/corenet_generic.c | 4 arch/powerpc/platforms/85xx/mpc85xx_cds.c | 4 arch/powerpc/platforms/85xx/qemu_e500.c | 4 arch/powerpc/platforms/85xx/sbc8548.c | 4 arch/powerpc/platforms/85xx/smp.c | 4 arch/powerpc/platforms/86xx/mpc86xx_smp.c | 4 arch/powerpc/platforms/8xx/cpm1.c | 1 arch/powerpc/platforms/8xx/micropatch.c | 1 arch/powerpc/platforms/cell/cbe_regs.c | 4 arch/powerpc/platforms/cell/interrupt.c | 4 arch/powerpc/platforms/cell/pervasive.c | 4 arch/powerpc/platforms/cell/setup.c | 1 arch/powerpc/platforms/cell/smp.c | 4 arch/powerpc/platforms/cell/spider-pic.c | 4 
arch/powerpc/platforms/cell/spufs/file.c | 10 arch/powerpc/platforms/chrp/pci.c | 4 arch/powerpc/platforms/chrp/setup.c | 1 arch/powerpc/platforms/chrp/smp.c | 4 arch/powerpc/platforms/maple/setup.c | 1 arch/powerpc/platforms/maple/time.c | 1 arch/powerpc/platforms/powermac/setup.c | 1 arch/powerpc/platforms/powermac/smp.c | 4 arch/powerpc/platforms/powermac/time.c | 1 arch/powerpc/platforms/pseries/lpar.c | 4 arch/powerpc/platforms/pseries/setup.c | 1 arch/powerpc/platforms/pseries/smp.c | 4 arch/powerpc/sysdev/cpm2.c | 1 arch/powerpc/sysdev/fsl_85xx_cache_sram.c | 2 arch/powerpc/sysdev/mpic.c | 4 arch/powerpc/xmon/xmon.c | 1 arch/riscv/include/asm/fixmap.h | 4 arch/riscv/include/asm/io.h | 4 arch/riscv/include/asm/kasan.h | 4 arch/riscv/include/asm/pgtable-64.h | 7 arch/riscv/include/asm/pgtable.h | 22 arch/riscv/kernel/module.c | 2 arch/riscv/kernel/setup.c | 1 arch/riscv/kernel/soc.c | 2 arch/riscv/kernel/stacktrace.c | 23 arch/riscv/kernel/vdso.c | 4 arch/riscv/mm/cacheflush.c | 3 arch/riscv/mm/fault.c | 14 arch/riscv/mm/init.c | 31 arch/riscv/mm/kasan_init.c | 4 arch/riscv/mm/pageattr.c | 6 arch/riscv/mm/ptdump.c | 2 arch/s390/boot/ipl_parm.c | 4 arch/s390/boot/kaslr.c | 4 arch/s390/include/asm/hugetlb.h | 4 arch/s390/include/asm/kasan.h | 4 arch/s390/include/asm/pgtable.h | 15 arch/s390/include/asm/tlbflush.h | 1 arch/s390/kernel/asm-offsets.c | 4 arch/s390/kernel/dumpstack.c | 25 arch/s390/kernel/machine_kexec.c | 1 arch/s390/kernel/ptrace.c | 1 arch/s390/kernel/uv.c | 4 arch/s390/kernel/vdso.c | 5 arch/s390/kvm/gaccess.c | 8 arch/s390/kvm/interrupt.c | 4 arch/s390/kvm/kvm-s390.c | 32 arch/s390/kvm/priv.c | 38 arch/s390/mm/dump_pagetables.c | 1 arch/s390/mm/extmem.c | 4 arch/s390/mm/fault.c | 17 arch/s390/mm/gmap.c | 80 arch/s390/mm/init.c | 1 arch/s390/mm/kasan_init.c | 4 arch/s390/mm/pageattr.c | 13 arch/s390/mm/pgalloc.c | 2 arch/s390/mm/pgtable.c | 1 arch/s390/mm/vmem.c | 1 arch/s390/pci/pci_mmio.c | 4 arch/sh/include/asm/io.h | 2 
arch/sh/include/asm/kdebug.h | 6 arch/sh/include/asm/pgtable-3level.h | 7 arch/sh/include/asm/pgtable.h | 2 arch/sh/include/asm/pgtable_32.h | 25 arch/sh/include/asm/processor_32.h | 2 arch/sh/kernel/dumpstack.c | 54 arch/sh/kernel/machine_kexec.c | 1 arch/sh/kernel/process_32.c | 2 arch/sh/kernel/ptrace_32.c | 1 arch/sh/kernel/signal_32.c | 1 arch/sh/kernel/sys_sh.c | 6 arch/sh/kernel/traps.c | 4 arch/sh/kernel/vsyscall/vsyscall.c | 4 arch/sh/mm/cache-sh3.c | 1 arch/sh/mm/cache-sh4.c | 11 arch/sh/mm/cache-sh7705.c | 1 arch/sh/mm/fault.c | 16 arch/sh/mm/kmap.c | 5 arch/sh/mm/nommu.c | 1 arch/sh/mm/pmb.c | 4 arch/sparc/include/asm/floppy_32.h | 4 arch/sparc/include/asm/highmem.h | 4 arch/sparc/include/asm/ide.h | 2 arch/sparc/include/asm/io-unit.h | 4 arch/sparc/include/asm/pgalloc_32.h | 4 arch/sparc/include/asm/pgalloc_64.h | 2 arch/sparc/include/asm/pgtable_32.h | 34 arch/sparc/include/asm/pgtable_64.h | 32 arch/sparc/kernel/cpu.c | 4 arch/sparc/kernel/entry.S | 4 arch/sparc/kernel/head_64.S | 4 arch/sparc/kernel/ktlb.S | 4 arch/sparc/kernel/leon_smp.c | 1 arch/sparc/kernel/pci.c | 4 arch/sparc/kernel/process_32.c | 29 arch/sparc/kernel/process_64.c | 3 arch/sparc/kernel/ptrace_32.c | 1 arch/sparc/kernel/ptrace_64.c | 1 arch/sparc/kernel/setup_32.c | 1 arch/sparc/kernel/setup_64.c | 1 arch/sparc/kernel/signal32.c | 1 arch/sparc/kernel/signal_32.c | 1 arch/sparc/kernel/signal_64.c | 1 arch/sparc/kernel/smp_32.c | 1 arch/sparc/kernel/smp_64.c | 1 arch/sparc/kernel/sun4m_irq.c | 4 arch/sparc/kernel/trampoline_64.S | 4 arch/sparc/kernel/traps_32.c | 4 arch/sparc/kernel/traps_64.c | 24 arch/sparc/lib/clear_page.S | 4 arch/sparc/lib/copy_page.S | 2 arch/sparc/mm/fault_32.c | 21 arch/sparc/mm/fault_64.c | 17 arch/sparc/mm/highmem.c | 12 arch/sparc/mm/hugetlbpage.c | 1 arch/sparc/mm/init_32.c | 1 arch/sparc/mm/init_64.c | 7 arch/sparc/mm/io-unit.c | 11 arch/sparc/mm/iommu.c | 9 arch/sparc/mm/tlb.c | 1 arch/sparc/mm/tsb.c | 4 arch/sparc/mm/ultra.S | 4 
arch/sparc/vdso/vma.c | 4 arch/um/drivers/mconsole_kern.c | 2 arch/um/include/asm/mmu_context.h | 5 arch/um/include/asm/pgtable-3level.h | 4 arch/um/include/asm/pgtable.h | 69 arch/um/kernel/maccess.c | 12 arch/um/kernel/mem.c | 10 arch/um/kernel/process.c | 1 arch/um/kernel/skas/mmu.c | 3 arch/um/kernel/skas/uaccess.c | 1 arch/um/kernel/sysrq.c | 35 arch/um/kernel/tlb.c | 5 arch/um/kernel/trap.c | 15 arch/um/kernel/um_arch.c | 1 arch/unicore32/include/asm/pgtable.h | 19 arch/unicore32/kernel/hibernate.c | 4 arch/unicore32/kernel/hibernate_asm.S | 4 arch/unicore32/kernel/module.c | 1 arch/unicore32/kernel/setup.h | 4 arch/unicore32/kernel/traps.c | 50 arch/unicore32/lib/backtrace.S | 24 arch/unicore32/mm/alignment.c | 4 arch/unicore32/mm/fault.c | 9 arch/unicore32/mm/mm.h | 10 arch/unicore32/mm/proc-ucv2.S | 4 arch/x86/boot/compressed/kaslr_64.c | 4 arch/x86/entry/vdso/vma.c | 14 arch/x86/events/core.c | 4 arch/x86/include/asm/agp.h | 2 arch/x86/include/asm/asm-prototypes.h | 4 arch/x86/include/asm/efi.h | 4 arch/x86/include/asm/iomap.h | 1 arch/x86/include/asm/kaslr.h | 2 arch/x86/include/asm/mmu.h | 2 arch/x86/include/asm/pgtable-3level.h | 8 arch/x86/include/asm/pgtable.h | 89 - arch/x86/include/asm/pgtable_32.h | 11 arch/x86/include/asm/pgtable_64.h | 4 arch/x86/include/asm/setup.h | 12 arch/x86/include/asm/stacktrace.h | 2 arch/x86/include/asm/uaccess.h | 16 arch/x86/include/asm/xen/hypercall.h | 4 arch/x86/include/asm/xen/page.h | 1 arch/x86/kernel/acpi/boot.c | 4 arch/x86/kernel/acpi/sleep.c | 4 arch/x86/kernel/alternative.c | 1 arch/x86/kernel/amd_gart_64.c | 5 arch/x86/kernel/apic/apic_numachip.c | 4 arch/x86/kernel/cpu/bugs.c | 4 arch/x86/kernel/cpu/common.c | 4 arch/x86/kernel/cpu/intel.c | 4 arch/x86/kernel/cpu/resctrl/pseudo_lock.c | 6 arch/x86/kernel/cpu/resctrl/rdtgroup.c | 6 arch/x86/kernel/crash_core_32.c | 4 arch/x86/kernel/crash_core_64.c | 4 arch/x86/kernel/doublefault_32.c | 1 arch/x86/kernel/dumpstack.c | 21 arch/x86/kernel/early_printk.c | 4 
arch/x86/kernel/espfix_64.c | 2 arch/x86/kernel/head64.c | 4 arch/x86/kernel/head_64.S | 4 arch/x86/kernel/i8259.c | 4 arch/x86/kernel/irqinit.c | 4 arch/x86/kernel/kprobes/core.c | 4 arch/x86/kernel/kprobes/opt.c | 4 arch/x86/kernel/ldt.c | 2 arch/x86/kernel/machine_kexec_32.c | 1 arch/x86/kernel/machine_kexec_64.c | 1 arch/x86/kernel/module.c | 1 arch/x86/kernel/paravirt.c | 4 arch/x86/kernel/process_32.c | 1 arch/x86/kernel/process_64.c | 1 arch/x86/kernel/ptrace.c | 1 arch/x86/kernel/reboot.c | 4 arch/x86/kernel/smpboot.c | 4 arch/x86/kernel/tboot.c | 3 arch/x86/kernel/vm86_32.c | 4 arch/x86/kvm/mmu/paging_tmpl.h | 8 arch/x86/mm/cpu_entry_area.c | 4 arch/x86/mm/debug_pagetables.c | 2 arch/x86/mm/dump_pagetables.c | 1 arch/x86/mm/fault.c | 22 arch/x86/mm/init.c | 22 arch/x86/mm/init_32.c | 27 arch/x86/mm/init_64.c | 1 arch/x86/mm/ioremap.c | 4 arch/x86/mm/kasan_init_64.c | 1 arch/x86/mm/kaslr.c | 37 arch/x86/mm/maccess.c | 44 arch/x86/mm/mem_encrypt_boot.S | 2 arch/x86/mm/mmio-mod.c | 4 arch/x86/mm/pat/cpa-test.c | 1 arch/x86/mm/pat/memtype.c | 1 arch/x86/mm/pat/memtype_interval.c | 4 arch/x86/mm/pgtable.c | 1 arch/x86/mm/pgtable_32.c | 1 arch/x86/mm/pti.c | 1 arch/x86/mm/setup_nx.c | 4 arch/x86/platform/efi/efi_32.c | 4 arch/x86/platform/efi/efi_64.c | 1 arch/x86/platform/olpc/olpc_ofw.c | 4 arch/x86/power/cpu.c | 4 arch/x86/power/hibernate.c | 4 arch/x86/power/hibernate_32.c | 4 arch/x86/power/hibernate_64.c | 4 arch/x86/realmode/init.c | 4 arch/x86/um/vdso/vma.c | 4 arch/x86/xen/enlighten_pv.c | 1 arch/x86/xen/grant-table.c | 1 arch/x86/xen/mmu_pv.c | 4 arch/x86/xen/smp_pv.c | 2 arch/xtensa/include/asm/fixmap.h | 12 arch/xtensa/include/asm/highmem.h | 4 arch/xtensa/include/asm/initialize_mmu.h | 2 arch/xtensa/include/asm/mmu_context.h | 4 arch/xtensa/include/asm/pgtable.h | 20 arch/xtensa/kernel/entry.S | 4 arch/xtensa/kernel/process.c | 1 arch/xtensa/kernel/ptrace.c | 1 arch/xtensa/kernel/setup.c | 1 arch/xtensa/kernel/traps.c | 42 
arch/xtensa/kernel/vectors.S | 4 arch/xtensa/mm/cache.c | 4 arch/xtensa/mm/fault.c | 12 arch/xtensa/mm/highmem.c | 2 arch/xtensa/mm/ioremap.c | 4 arch/xtensa/mm/kasan_init.c | 10 arch/xtensa/mm/misc.S | 4 arch/xtensa/mm/mmu.c | 5 drivers/acpi/scan.c | 3 drivers/android/binder_alloc.c | 14 drivers/atm/fore200e.c | 4 drivers/base/power/main.c | 4 drivers/block/z2ram.c | 4 drivers/char/agp/frontend.c | 1 drivers/char/agp/generic.c | 1 drivers/char/bsr.c | 1 drivers/char/mspec.c | 3 drivers/dma-buf/dma-resv.c | 5 drivers/firmware/efi/arm-runtime.c | 4 drivers/firmware/efi/efi.c | 2 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h | 2 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c | 2 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v8.c | 2 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 4 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 10 drivers/gpu/drm/amd/amdkfd/kfd_events.c | 4 drivers/gpu/drm/drm_vm.c | 4 drivers/gpu/drm/etnaviv/etnaviv_gem.c | 2 drivers/gpu/drm/i915/gem/i915_gem_mman.c | 4 drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 14 drivers/gpu/drm/i915/i915_mm.c | 1 drivers/gpu/drm/i915/i915_perf.c | 2 drivers/gpu/drm/nouveau/nouveau_svm.c | 22 drivers/gpu/drm/radeon/radeon_cs.c | 4 drivers/gpu/drm/radeon/radeon_gem.c | 6 drivers/gpu/drm/ttm/ttm_bo_vm.c | 10 drivers/infiniband/core/umem_odp.c | 4 drivers/infiniband/core/uverbs_main.c | 6 drivers/infiniband/hw/hfi1/mmu_rb.c | 2 drivers/infiniband/hw/mlx4/mr.c | 4 drivers/infiniband/hw/qib/qib_file_ops.c | 4 drivers/infiniband/hw/qib/qib_user_pages.c | 6 drivers/infiniband/hw/usnic/usnic_uiom.c | 4 drivers/infiniband/sw/rdmavt/mmap.c | 1 drivers/infiniband/sw/rxe/rxe_mmap.c | 1 drivers/infiniband/sw/siw/siw_mem.c | 4 drivers/iommu/amd_iommu_v2.c | 4 drivers/iommu/intel-svm.c | 4 drivers/macintosh/macio-adb.c | 4 drivers/macintosh/mediabay.c | 4 drivers/macintosh/via-pmu.c | 4 drivers/media/pci/bt8xx/bt878.c | 4 drivers/media/pci/bt8xx/btcx-risc.c | 4 drivers/media/pci/bt8xx/bttv-risc.c | 4 
drivers/media/platform/davinci/vpbe_display.c | 1 drivers/media/v4l2-core/v4l2-common.c | 1 drivers/media/v4l2-core/videobuf-core.c | 4 drivers/media/v4l2-core/videobuf-dma-contig.c | 4 drivers/media/v4l2-core/videobuf-dma-sg.c | 10 drivers/media/v4l2-core/videobuf-vmalloc.c | 4 drivers/misc/cxl/cxllib.c | 9 drivers/misc/cxl/fault.c | 4 drivers/misc/genwqe/card_utils.c | 2 drivers/misc/sgi-gru/grufault.c | 25 drivers/misc/sgi-gru/grufile.c | 4 drivers/mtd/ubi/ubi.h | 2 drivers/net/ethernet/amd/7990.c | 4 drivers/net/ethernet/amd/hplance.c | 4 drivers/net/ethernet/amd/mvme147.c | 4 drivers/net/ethernet/amd/sun3lance.c | 4 drivers/net/ethernet/amd/sunlance.c | 4 drivers/net/ethernet/apple/bmac.c | 4 drivers/net/ethernet/apple/mace.c | 4 drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c | 4 drivers/net/ethernet/freescale/fs_enet/mac-fcc.c | 4 drivers/net/ethernet/freescale/fs_enet/mii-fec.c | 4 drivers/net/ethernet/i825xx/82596.c | 4 drivers/net/ethernet/korina.c | 4 drivers/net/ethernet/marvell/pxa168_eth.c | 4 drivers/net/ethernet/natsemi/jazzsonic.c | 4 drivers/net/ethernet/natsemi/macsonic.c | 4 drivers/net/ethernet/natsemi/xtsonic.c | 4 drivers/net/ethernet/sun/sunbmac.c | 4 drivers/net/ethernet/sun/sunhme.c | 1 drivers/net/ethernet/sun/sunqe.c | 4 drivers/oprofile/buffer_sync.c | 12 drivers/sbus/char/flash.c | 1 drivers/sbus/char/uctrl.c | 1 drivers/scsi/53c700.c | 4 drivers/scsi/a2091.c | 1 drivers/scsi/a3000.c | 1 drivers/scsi/arm/cumana_2.c | 4 drivers/scsi/arm/eesox.c | 4 drivers/scsi/arm/powertec.c | 4 drivers/scsi/dpt_i2o.c | 4 drivers/scsi/gvp11.c | 1 drivers/scsi/lasi700.c | 1 drivers/scsi/mac53c94.c | 4 drivers/scsi/mesh.c | 4 drivers/scsi/mvme147.c | 1 drivers/scsi/qlogicpti.c | 4 drivers/scsi/sni_53c710.c | 1 drivers/scsi/zorro_esp.c | 4 drivers/staging/android/ashmem.c | 4 drivers/staging/comedi/comedi_fops.c | 2 drivers/staging/kpc2000/kpc_dma/fileops.c | 4 drivers/staging/media/atomisp/pci/hmm/hmm_bo.c | 4 drivers/tee/optee/call.c | 4 
drivers/tty/sysrq.c | 4 drivers/tty/vt/consolemap.c | 2 drivers/vfio/pci/vfio_pci.c | 22 drivers/vfio/vfio_iommu_type1.c | 8 drivers/vhost/vdpa.c | 4 drivers/video/console/newport_con.c | 1 drivers/video/fbdev/acornfb.c | 1 drivers/video/fbdev/atafb.c | 1 drivers/video/fbdev/cirrusfb.c | 1 drivers/video/fbdev/cyber2000fb.c | 1 drivers/video/fbdev/fb-puv3.c | 1 drivers/video/fbdev/hitfb.c | 1 drivers/video/fbdev/neofb.c | 1 drivers/video/fbdev/q40fb.c | 1 drivers/video/fbdev/savage/savagefb_driver.c | 1 drivers/xen/balloon.c | 1 drivers/xen/gntdev.c | 6 drivers/xen/grant-table.c | 1 drivers/xen/privcmd.c | 15 drivers/xen/xenbus/xenbus_probe.c | 1 drivers/xen/xenbus/xenbus_probe_backend.c | 1 drivers/xen/xenbus/xenbus_probe_frontend.c | 1 fs/aio.c | 4 fs/coredump.c | 8 fs/exec.c | 18 fs/ext2/file.c | 2 fs/ext4/super.c | 6 fs/hugetlbfs/inode.c | 2 fs/io_uring.c | 4 fs/kernfs/file.c | 4 fs/proc/array.c | 1 fs/proc/base.c | 24 fs/proc/meminfo.c | 1 fs/proc/nommu.c | 1 fs/proc/task_mmu.c | 34 fs/proc/task_nommu.c | 18 fs/proc/vmcore.c | 1 fs/userfaultfd.c | 46 fs/xfs/xfs_file.c | 2 fs/xfs/xfs_inode.c | 14 fs/xfs/xfs_iops.c | 4 include/asm-generic/io.h | 2 include/asm-generic/pgtable-nopmd.h | 1 include/asm-generic/pgtable-nopud.h | 1 include/asm-generic/pgtable.h | 1322 ---------------- include/linux/cache.h | 10 include/linux/crash_dump.h | 3 include/linux/dax.h | 1 include/linux/dma-noncoherent.h | 2 include/linux/fs.h | 4 include/linux/hmm.h | 2 include/linux/huge_mm.h | 2 include/linux/hugetlb.h | 2 include/linux/io-mapping.h | 4 include/linux/kallsyms.h | 4 include/linux/kasan.h | 4 include/linux/mempolicy.h | 2 include/linux/mm.h | 15 include/linux/mm_types.h | 4 include/linux/mmap_lock.h | 128 + include/linux/mmu_notifier.h | 13 include/linux/pagemap.h | 2 include/linux/pgtable.h | 1444 +++++++++++++++++- include/linux/rmap.h | 2 include/linux/sched/debug.h | 7 include/linux/sched/mm.h | 10 include/linux/uaccess.h | 62 include/xen/arm/page.h | 4 init/init_task.c | 
1 ipc/shm.c | 8 kernel/acct.c | 6 kernel/bpf/stackmap.c | 21 kernel/bpf/syscall.c | 2 kernel/cgroup/cpuset.c | 4 kernel/debug/kdb/kdb_bt.c | 17 kernel/events/core.c | 10 kernel/events/uprobes.c | 20 kernel/exit.c | 11 kernel/fork.c | 15 kernel/futex.c | 4 kernel/locking/lockdep.c | 4 kernel/locking/rtmutex-debug.c | 4 kernel/power/snapshot.c | 1 kernel/relay.c | 2 kernel/sched/core.c | 10 kernel/sched/fair.c | 4 kernel/sys.c | 22 kernel/trace/bpf_trace.c | 176 +- kernel/trace/ftrace.c | 8 kernel/trace/trace_kprobe.c | 80 kernel/trace/trace_output.c | 4 lib/dump_stack.c | 4 lib/ioremap.c | 1 lib/test_hmm.c | 14 lib/test_lockup.c | 16 mm/debug.c | 10 mm/debug_vm_pgtable.c | 1 mm/filemap.c | 46 mm/frame_vector.c | 6 mm/gup.c | 73 mm/hmm.c | 2 mm/huge_memory.c | 8 mm/hugetlb.c | 3 mm/init-mm.c | 6 mm/internal.h | 6 mm/khugepaged.c | 72 mm/ksm.c | 48 mm/maccess.c | 496 +++--- mm/madvise.c | 40 mm/memcontrol.c | 10 mm/memory.c | 61 mm/mempolicy.c | 36 mm/migrate.c | 16 mm/mincore.c | 8 mm/mlock.c | 22 mm/mmap.c | 74 mm/mmu_gather.c | 2 mm/mmu_notifier.c | 22 mm/mprotect.c | 22 mm/mremap.c | 14 mm/msync.c | 8 mm/nommu.c | 22 mm/oom_kill.c | 14 mm/page_io.c | 1 mm/page_reporting.h | 2 mm/pagewalk.c | 12 mm/pgtable-generic.c | 6 mm/process_vm_access.c | 4 mm/ptdump.c | 4 mm/rmap.c | 12 mm/shmem.c | 5 mm/sparse-vmemmap.c | 1 mm/sparse.c | 1 mm/swap_state.c | 5 mm/swapfile.c | 5 mm/userfaultfd.c | 26 mm/util.c | 12 mm/vmacache.c | 1 mm/zsmalloc.c | 4 net/ipv4/tcp.c | 8 net/xdp/xdp_umem.c | 4 security/keys/keyctl.c | 2 sound/core/oss/pcm_oss.c | 2 sound/core/sgbuf.c | 1 sound/pci/hda/hda_intel.c | 4 sound/soc/intel/common/sst-firmware.c | 4 sound/soc/intel/haswell/sst-haswell-pcm.c | 4 tools/include/linux/kallsyms.h | 2 virt/kvm/async_pf.c | 4 virt/kvm/kvm_main.c | 9 942 files changed, 4580 insertions(+), 5662 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming
From: Linus Torvalds @ 2020-06-09 16:58 UTC
To: Andrew Morton; +Cc: mm-commits, Linux-MM

On Mon, Jun 8, 2020 at 9:29 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> 942 files changed, 4580 insertions(+), 5662 deletions(-)

If you use proper tools, add a "-M" to your diff script, so that you see

  941 files changed, 2614 insertions(+), 3696 deletions(-)

because a big portion of the lines were due to a rename:

  rename include/{asm-generic => linux}/pgtable.h (91%)

but at some earlier point you mentioned "diffstat", so I guess "proper tools" isn't an option ;(

            Linus
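Linus's point about "-M" can be reproduced outside the kernel tree with a throwaway repository. This is a minimal sketch; the temporary directory and the file names (chosen to mirror the pgtable.h rename) are illustrative, not taken from the thread:

```shell
# Demonstrate git's rename detection (-M): without it, a renamed file
# shows up in the diffstat as a full delete plus a full add; with it,
# the rename collapses to a single "old => new | 0" entry.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

seq 1 100 > pgtable.h
git add pgtable.h
git commit -q -m 'add pgtable.h'

git mv pgtable.h linux-pgtable.h
git commit -q -m 'rename pgtable.h'

plain=$(git diff --no-renames --stat HEAD~1 HEAD)   # rename detection off
renamed=$(git diff -M --stat HEAD~1 HEAD)           # rename detection on
printf '%s\n\n%s\n' "$plain" "$renamed"
```

With `--no-renames` the stat reports 100 insertions plus 100 deletions across two files; with `-M` it reports a pure rename with zero changed lines, which is why the 942-file diffstat above shrinks once the include/asm-generic/pgtable.h to include/linux/pgtable.h rename is detected.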
* incoming
From: Andrew Morton @ 2020-06-08 4:35 UTC
To: Linus Torvalds; +Cc: mm-commits, linux-mm

Various trees. Mainly those parts of MM whose linux-next dependents are now merged. I'm still sitting on ~160 patches which await merges from -next.

54 patches, based on 9aa900c8094dba7a60dc805ecec1e9f720744ba1.

Subsystems affected by this patch series:

  mm/proc ipc dynamic-debug panic lib sysctl mm/gup mm/pagemap

Subsystem: mm/proc

SeongJae Park <sjpark@amazon.de>:
  mm/page_idle.c: skip offline pages

Subsystem: ipc

Jules Irenge <jbi.octave@gmail.com>:
  ipc/msg: add missing annotation for freeque()

Giuseppe Scrivano <gscrivan@redhat.com>:
  ipc/namespace.c: use a work queue to free_ipc

Subsystem: dynamic-debug

Orson Zhai <orson.zhai@unisoc.com>:
  dynamic_debug: add an option to enable dynamic debug for modules only

Subsystem: panic

Rafael Aquini <aquini@redhat.com>:
  kernel: add panic_on_taint

Subsystem: lib

Manfred Spraul <manfred@colorfullife.com>:
  xarray.h: correct return code documentation for xa_store_{bh,irq}()

Subsystem: sysctl

Vlastimil Babka <vbabka@suse.cz>:
Patch series "support setting sysctl parameters from kernel command line", v3:
  kernel/sysctl: support setting sysctl parameters from kernel command line
  kernel/sysctl: support handling command line aliases
  kernel/hung_task convert hung_task_panic boot parameter to sysctl
  tools/testing/selftests/sysctl/sysctl.sh: support CONFIG_TEST_SYSCTL=y
  lib/test_sysctl: support testing of sysctl. boot parameter

"Guilherme G. Piccoli" <gpiccoli@canonical.com>:
  kernel/watchdog.c: convert {soft/hard}lockup boot parameters to sysctl aliases
  kernel/hung_task.c: introduce sysctl to print all traces when a hung task is detected
  panic: add sysctl to dump all CPUs backtraces on oops event

Rafael Aquini <aquini@redhat.com>:
  kernel/sysctl.c: ignore out-of-range taint bits introduced via kernel.tainted

Subsystem: mm/gup

Souptick Joarder <jrdr.linux@gmail.com>:
  mm/gup.c: convert to use get_user_{page|pages}_fast_only()

John Hubbard <jhubbard@nvidia.com>:
  mm/gup: update pin_user_pages.rst for "case 3" (mmu notifiers)
Patch series "mm/gup: introduce pin_user_pages_locked(), use it in frame_vector.c", v2:
  mm/gup: introduce pin_user_pages_locked()
  mm/gup: frame_vector: convert get_user_pages() --> pin_user_pages()
  mm/gup: documentation fix for pin_user_pages*() APIs
Patch series "vhost, docs: convert to pin_user_pages(), new "case 5"":
  docs: mm/gup: pin_user_pages.rst: add a "case 5"
  vhost: convert get_user_pages() --> pin_user_pages()

Subsystem: mm/pagemap

Alexander Gordeev <agordeev@linux.ibm.com>:
  mm/mmap.c: add more sanity checks to get_unmapped_area()
  mm/mmap.c: do not allow mappings outside of allowed limits

Christoph Hellwig <hch@lst.de>:
Patch series "sort out the flush_icache_range mess", v2:
  arm: fix the flush_icache_range arguments in set_fiq_handler
  nds32: unexport flush_icache_page
  powerpc: unexport flush_icache_user_range
  unicore32: remove flush_cache_user_range
  asm-generic: fix the inclusion guards for cacheflush.h
  asm-generic: don't include <linux/mm.h> in cacheflush.h
  asm-generic: improve the flush_dcache_page stub
  alpha: use asm-generic/cacheflush.h
  arm64: use asm-generic/cacheflush.h
  c6x: use asm-generic/cacheflush.h
  hexagon: use asm-generic/cacheflush.h
  ia64: use asm-generic/cacheflush.h
  microblaze: use asm-generic/cacheflush.h
  m68knommu: use asm-generic/cacheflush.h
  openrisc: use asm-generic/cacheflush.h
  powerpc: use asm-generic/cacheflush.h
  riscv: use asm-generic/cacheflush.h
  arm,sparc,unicore32: remove flush_icache_user_range
  mm: rename flush_icache_user_range to flush_icache_user_page
  asm-generic: add a flush_icache_user_range stub
  sh: implement flush_icache_user_range
  xtensa: implement flush_icache_user_range
  arm: rename flush_cache_user_range to flush_icache_user_range
  m68k: implement flush_icache_user_range
  exec: only build read_code when needed
  exec: use flush_icache_user_range in read_code
  binfmt_flat: use flush_icache_user_range
  nommu: use flush_icache_user_range in brk and mmap
  module: move the set_fs hack for flush_icache_range to m68k

Konstantin Khlebnikov <khlebnikov@yandex-team.ru>:
  doc: cgroup: update note about conditions when oom killer is invoked

 Documentation/admin-guide/cgroup-v2.rst           | 17 +-
 Documentation/admin-guide/dynamic-debug-howto.rst | 5
 Documentation/admin-guide/kdump/kdump.rst         | 8 +
 Documentation/admin-guide/kernel-parameters.txt   | 34 +++-
 Documentation/admin-guide/sysctl/kernel.rst       | 37 ++++
 Documentation/core-api/pin_user_pages.rst         | 47 ++++--
 arch/alpha/include/asm/cacheflush.h               | 38 +----
 arch/alpha/kernel/smp.c                           | 2
 arch/arm/include/asm/cacheflush.h                 | 7
 arch/arm/kernel/fiq.c                             | 4
 arch/arm/kernel/traps.c                           | 2
 arch/arm64/include/asm/cacheflush.h               | 46 ------
 arch/c6x/include/asm/cacheflush.h                 | 19 --
 arch/hexagon/include/asm/cacheflush.h             | 19 --
 arch/ia64/include/asm/cacheflush.h                | 30 ----
 arch/m68k/include/asm/cacheflush_mm.h             | 6
 arch/m68k/include/asm/cacheflush_no.h             | 19 --
 arch/m68k/mm/cache.c                              | 13 +
 arch/microblaze/include/asm/cacheflush.h          | 29 ---
 arch/nds32/include/asm/cacheflush.h               | 4
 arch/nds32/mm/cacheflush.c                        | 3
 arch/openrisc/include/asm/cacheflush.h            | 33 ----
 arch/powerpc/include/asm/cacheflush.h             | 46 +----
 arch/powerpc/kvm/book3s_64_mmu_hv.c               | 2
 arch/powerpc/kvm/book3s_64_mmu_radix.c            | 2
 arch/powerpc/mm/mem.c                             | 3
 arch/powerpc/perf/callchain_64.c                  | 4
 arch/riscv/include/asm/cacheflush.h               | 65 --------
 arch/sh/include/asm/cacheflush.h                  | 1
 arch/sparc/include/asm/cacheflush_32.h            | 2
 arch/sparc/include/asm/cacheflush_64.h            | 1
 arch/um/include/asm/tlb.h                         | 2
 arch/unicore32/include/asm/cacheflush.h           | 11 -
 arch/x86/include/asm/cacheflush.h                 | 2
 arch/xtensa/include/asm/cacheflush.h              | 2
 drivers/media/platform/omap3isp/ispvideo.c        | 2
 drivers/nvdimm/pmem.c                             | 3
 drivers/vhost/vhost.c                             | 5
 fs/binfmt_flat.c                                  | 2
 fs/exec.c                                         | 5
 fs/proc/proc_sysctl.c                             | 163 ++++++++++++++++++++--
 include/asm-generic/cacheflush.h                  | 25 +--
 include/linux/dev_printk.h                        | 6
 include/linux/dynamic_debug.h                     | 2
 include/linux/ipc_namespace.h                     | 2
 include/linux/kernel.h                            | 9 +
 include/linux/mm.h                                | 12 +
 include/linux/net.h                               | 3
 include/linux/netdevice.h                         | 6
 include/linux/printk.h                            | 9 -
 include/linux/sched/sysctl.h                      | 7
 include/linux/sysctl.h                            | 4
 include/linux/xarray.h                            | 4
 include/rdma/ib_verbs.h                           | 6
 init/main.c                                       | 2
 ipc/msg.c                                         | 2
 ipc/namespace.c                                   | 24 ++-
 kernel/events/core.c                              | 4
 kernel/events/uprobes.c                           | 2
 kernel/hung_task.c                                | 30 ++--
 kernel/module.c                                   | 8 -
 kernel/panic.c                                    | 45 ++++++
 kernel/sysctl.c                                   | 38 ++++-
 kernel/watchdog.c                                 | 37 +---
 lib/Kconfig.debug                                 | 12 +
 lib/Makefile                                      | 2
 lib/dynamic_debug.c                               | 9 -
 lib/test_sysctl.c                                 | 13 +
 mm/frame_vector.c                                 | 7
 mm/gup.c                                          | 74 +++++++--
 mm/mmap.c                                         | 28 ++-
 mm/nommu.c                                        | 4
 mm/page_alloc.c                                   | 9 -
 mm/page_idle.c                                    | 7
 tools/testing/selftests/sysctl/sysctl.sh          | 44 +++++
 virt/kvm/kvm_main.c                               | 8 -
 76 files changed, 732 insertions(+), 517 deletions(-)
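The "kernel: add panic_on_taint" patch above takes a bitmask of kernel taint flags on the command line. As a hedged illustration, assuming the bit assignments documented in Documentation/admin-guide/tainted-kernels.rst (TAINT_DIE is bit 7, TAINT_WARN is bit 9), a mask that panics on an oops or a WARN-induced taint can be computed like this:

```shell
# Build a panic_on_taint mask from individual taint bits.
# Bit positions are assumptions taken from
# Documentation/admin-guide/tainted-kernels.rst:
#   bit 7 = TAINT_DIE  ('D', the kernel died recently, i.e. an oops)
#   bit 9 = TAINT_WARN ('W', a WARN() fired)
die_bit=7
warn_bit=9
mask=$(( (1 << die_bit) | (1 << warn_bit) ))
printf 'panic_on_taint=0x%x\n' "$mask"
```

The resulting `panic_on_taint=0x280` string would then be appended to the kernel command line in the boot loader configuration; the companion "kernel/sysctl.c: ignore out-of-range taint bits" patch rejects bits beyond the defined set.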
* incoming
From: Andrew Morton @ 2020-06-04 23:45 UTC
To: Linus Torvalds; +Cc: linux-mm, mm-commits

- More MM work. 100ish more to go. Mike's "mm: remove __ARCH_HAS_5LEVEL_HACK" series should fix the current ppc issue.

- Various other little subsystems

127 patches, based on 6929f71e46bdddbf1c4d67c2728648176c67c555.

Subsystems affected by this patch series:

  kcov mm/pagemap mm/vmalloc mm/kmap mm/util mm/memory-hotplug mm/cleanups mm/zram procfs core-kernel get_maintainer lib bitops checkpatch binfmt init fat seq_file exec rapidio relay selftests ubsan

Subsystem: kcov

Andrey Konovalov <andreyknvl@google.com>:
Patch series "kcov: collect coverage from usb soft interrupts", v4:
  kcov: cleanup debug messages
  kcov: fix potential use-after-free in kcov_remote_start
  kcov: move t->kcov assignments into kcov_start/stop
  kcov: move t->kcov_sequence assignment
  kcov: use t->kcov_mode as enabled indicator
  kcov: collect coverage from interrupts
  usb: core: kcov: collect coverage from usb complete callback

Subsystem: mm/pagemap

Feng Tang <feng.tang@intel.com>:
  mm/util.c: remove the VM_WARN_ONCE for vm_committed_as underflow check

Mike Rapoport <rppt@linux.ibm.com>:
Patch series "mm: remove __ARCH_HAS_5LEVEL_HACK", v4:
  h8300: remove usage of __ARCH_USE_5LEVEL_HACK
  arm: add support for folded p4d page tables
  arm64: add support for folded p4d page tables
  hexagon: remove __ARCH_USE_5LEVEL_HACK
  ia64: add support for folded p4d page tables
  nios2: add support for folded p4d page tables
  openrisc: add support for folded p4d page tables
  powerpc: add support for folded p4d page tables

Geert Uytterhoeven <geert+renesas@glider.be>:
  sh: fault: modernize printing of kernel messages

Mike Rapoport <rppt@linux.ibm.com>:
  sh: drop __pXd_offset() macros that duplicate pXd_index() ones
  sh: add support for folded p4d page tables
  unicore32: remove __ARCH_USE_5LEVEL_HACK
  asm-generic: remove pgtable-nop4d-hack.h
  mm: remove __ARCH_HAS_5LEVEL_HACK and include/asm-generic/5level-fixup.h

Anshuman Khandual <anshuman.khandual@arm.com>:
Patch series "mm/debug: Add tests validating architecture page table:
  x86/mm: define mm_p4d_folded()
  mm/debug: add tests validating architecture page table helpers

Subsystem: mm/vmalloc

Jeongtae Park <jtp.park@samsung.com>:
  mm/vmalloc: fix a typo in comment

Subsystem: mm/kmap

Ira Weiny <ira.weiny@intel.com>:
Patch series "Remove duplicated kmap code", v3:
  arch/kmap: remove BUG_ON()
  arch/xtensa: move kmap build bug out of the way
  arch/kmap: remove redundant arch specific kmaps
  arch/kunmap: remove duplicate kunmap implementations
  {x86,powerpc,microblaze}/kmap: move preempt disable
  arch/kmap_atomic: consolidate duplicate code
  arch/kunmap_atomic: consolidate duplicate code
  arch/kmap: ensure kmap_prot visibility
  arch/kmap: don't hard code kmap_prot values
  arch/kmap: define kmap_atomic_prot() for all arch's
  drm: remove drm specific kmap_atomic code
  kmap: remove kmap_atomic_to_page()
  parisc/kmap: remove duplicate kmap code
  sparc: remove unnecessary includes
  kmap: consolidate kmap_prot definitions

Subsystem: mm/util

Waiman Long <longman@redhat.com>:
  mm: add kvfree_sensitive() for freeing sensitive data objects

Subsystem: mm/memory-hotplug

Vishal Verma <vishal.l.verma@intel.com>:
  mm/memory_hotplug: refrain from adding memory into an impossible node

David Hildenbrand <david@redhat.com>:
  powerpc/pseries/hotplug-memory: stop checking is_mem_section_removable()
  mm/memory_hotplug: remove is_mem_section_removable()
Patch series "mm/memory_hotplug: handle memblocks only with:
  mm/memory_hotplug: set node_start_pfn of hotadded pgdat to 0
  mm/memory_hotplug: handle memblocks only with CONFIG_ARCH_KEEP_MEMBLOCK
Patch series "mm/memory_hotplug: Interface to add driver-managed system:
  mm/memory_hotplug: introduce add_memory_driver_managed()
  kexec_file: don't place kexec images on IORESOURCE_MEM_DRIVER_MANAGED
  device-dax: add memory via add_memory_driver_managed()

Michal Hocko <mhocko@kernel.org>:
  mm/memory_hotplug: disable the functionality for 32b

Subsystem: mm/cleanups

chenqiwu <chenqiwu@xiaomi.com>:
  mm: replace zero-length array with flexible-array member

Ethon Paul <ethp@qq.com>:
  mm/memory_hotplug: fix a typo in comment "recoreded"->"recorded"
  mm: ksm: fix a typo in comment "alreaady"->"already"
  mm: mmap: fix a typo in comment "compatbility"->"compatibility"
  mm/hugetlb: fix a typos in comments
  mm/vmsan: fix some typos in comment
  mm/compaction: fix a typo in comment "pessemistic"->"pessimistic"
  mm/memblock: fix a typo in comment "implict"->"implicit"
  mm/list_lru: fix a typo in comment "numbesr"->"numbers"
  mm/filemap: fix a typo in comment "unneccssary"->"unnecessary"
  mm/frontswap: fix some typos in frontswap.c
  mm, memcg: fix some typos in memcontrol.c
  mm: fix a typo in comment "strucure"->"structure"
  mm/slub: fix a typo in comment "disambiguiation"->"disambiguation"
  mm/sparse: fix a typo in comment "convienence"->"convenience"
  mm/page-writeback: fix a typo in comment "effictive"->"effective"
  mm/memory: fix a typo in comment "attampt"->"attempt"

Zou Wei <zou_wei@huawei.com>:
  mm: use false for bool variable

Jason Yan <yanaijie@huawei.com>:
  include/linux/mm.h: return true in cpupid_pid_unset()

Subsystem: mm/zram

Andy Shevchenko <andriy.shevchenko@linux.intel.com>:
  zcomp: Use ARRAY_SIZE() for backends list

Subsystem: procfs

Alexey Dobriyan <adobriyan@gmail.com>:
  proc: rename "catch" function argument

Subsystem: core-kernel

Jason Yan <yanaijie@huawei.com>:
  user.c: make uidhash_table static

Subsystem: get_maintainer

Joe Perches <joe@perches.com>:
  get_maintainer: add email addresses from .yaml files
  get_maintainer: fix unexpected behavior for path/to//file (double slashes)

Subsystem: lib

Christophe JAILLET <christophe.jaillet@wanadoo.fr>:
  lib/math: avoid trailing newline hidden in pr_fmt()

KP Singh <kpsingh@chromium.org>:
  lib: Add might_fault() to strncpy_from_user.

Jason Yan <yanaijie@huawei.com>:
  lib/test_lockup.c: make test_inode static

Jann Horn <jannh@google.com>:
  lib/zlib: remove outdated and incorrect pre-increment optimization

Joe Perches <joe@perches.com>:
  lib/percpu-refcount.c: use a more common logging style

Tan Hu <tan.hu@zte.com.cn>:
  lib/flex_proportions.c: cleanup __fprop_inc_percpu_max

Jesse Brandeburg <jesse.brandeburg@intel.com>:
  lib: make a test module with set/clear bit

Subsystem: bitops

Arnd Bergmann <arnd@arndb.de>:
  include/linux/bitops.h: avoid clang shift-count-overflow warnings

Subsystem: checkpatch

Joe Perches <joe@perches.com>:
  checkpatch: additional MAINTAINER section entry ordering checks
  checkpatch: look for c99 comments in ctx_locate_comment
  checkpatch: disallow --git and --file/--fix

Geert Uytterhoeven <geert+renesas@glider.be>:
  checkpatch: use patch subject when reading from stdin

Subsystem: binfmt

Anthony Iliopoulos <ailiop@suse.com>:
  fs/binfmt_elf: remove redundant elf_map ifndef

Nick Desaulniers <ndesaulniers@google.com>:
  elfnote: mark all .note sections SHF_ALLOC

Subsystem: init

Chris Down <chris@chrisdown.name>:
  init: allow distribution configuration of default init

Subsystem: fat

OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>:
  fat: don't allow to mount if the FAT length == 0
  fat: improve the readahead for FAT entries

Subsystem: seq_file

Joe Perches <joe@perches.com>:
  fs/seq_file.c: seq_read: Update pr_info_ratelimited

Kefeng Wang <wangkefeng.wang@huawei.com>:
Patch series "seq_file: Introduce DEFINE_SEQ_ATTRIBUTE() helper macro":
  include/linux/seq_file.h: introduce DEFINE_SEQ_ATTRIBUTE() helper macro
  mm/vmstat.c: convert to use DEFINE_SEQ_ATTRIBUTE macro
  kernel/kprobes.c: convert to use DEFINE_SEQ_ATTRIBUTE macro

Subsystem: exec

Christoph Hellwig <hch@lst.de>:
  exec: simplify the copy_strings_kernel calling convention
  exec: open code copy_string_kernel

Subsystem: rapidio

Madhuparna Bhowmik <madhuparnabhowmik10@gmail.com>:
  rapidio: avoid data race between file operation callbacks and mport_cdev_add().

John Hubbard <jhubbard@nvidia.com>:
  rapidio: convert get_user_pages() --> pin_user_pages()

Subsystem: relay

Daniel Axtens <dja@axtens.net>:
  kernel/relay.c: handle alloc_percpu returning NULL in relay_open

Pengcheng Yang <yangpc@wangsu.com>:
  kernel/relay.c: fix read_pos error when multiple readers

Subsystem: selftests

Ram Pai <linuxram@us.ibm.com>:
Patch series "selftests, powerpc, x86: Memory Protection Keys", v19:
  selftests/x86/pkeys: move selftests to arch-neutral directory
  selftests/vm/pkeys: rename all references to pkru to a generic name
  selftests/vm/pkeys: move generic definitions to header file

Thiago Jung Bauermann <bauerman@linux.ibm.com>:
  selftests/vm/pkeys: move some definitions to arch-specific header
  selftests/vm/pkeys: make gcc check arguments of sigsafe_printf()

Sandipan Das <sandipan@linux.ibm.com>:
  selftests: vm: pkeys: Use sane types for pkey register
  selftests: vm: pkeys: add helpers for pkey bits

Ram Pai <linuxram@us.ibm.com>:
  selftests/vm/pkeys: fix pkey_disable_clear()
  selftests/vm/pkeys: fix assertion in pkey_disable_set/clear()
  selftests/vm/pkeys: fix alloc_random_pkey() to make it really random

Sandipan Das <sandipan@linux.ibm.com>:
  selftests: vm: pkeys: use the correct huge page size

Ram Pai <linuxram@us.ibm.com>:
  selftests/vm/pkeys: introduce generic pkey abstractions
  selftests/vm/pkeys: introduce powerpc support

"Desnes A.
Nunes do Rosario" <desnesn@linux.vnet.ibm.com>: selftests/vm/pkeys: fix number of reserved powerpc pkeys Ram Pai <linuxram@us.ibm.com>: selftests/vm/pkeys: fix assertion in test_pkey_alloc_exhaust() selftests/vm/pkeys: improve checks to determine pkey support selftests/vm/pkeys: associate key on a mapped page and detect access violation selftests/vm/pkeys: associate key on a mapped page and detect write violation selftests/vm/pkeys: detect write violation on a mapped access-denied-key page selftests/vm/pkeys: introduce a sub-page allocator selftests/vm/pkeys: test correct behaviour of pkey-0 selftests/vm/pkeys: override access right definitions on powerpc Sandipan Das <sandipan@linux.ibm.com>: selftests: vm: pkeys: use the correct page size on powerpc selftests: vm: pkeys: fix multilib builds for x86 Jagadeesh Pagadala <jagdsh.linux@gmail.com>: tools/testing/selftests/vm: remove duplicate headers Subsystem: ubsan Arnd Bergmann <arnd@arndb.de>: lib/ubsan.c: fix gcc-10 warnings Documentation/dev-tools/kcov.rst | 17 Documentation/features/debug/debug-vm-pgtable/arch-support.txt | 34 arch/arc/Kconfig | 1 arch/arc/include/asm/highmem.h | 20 arch/arc/mm/highmem.c | 34 arch/arm/include/asm/highmem.h | 9 arch/arm/include/asm/pgtable.h | 1 arch/arm/lib/uaccess_with_memcpy.c | 7 arch/arm/mach-sa1100/assabet.c | 2 arch/arm/mm/dump.c | 29 arch/arm/mm/fault-armv.c | 7 arch/arm/mm/fault.c | 22 arch/arm/mm/highmem.c | 41 arch/arm/mm/idmap.c | 3 arch/arm/mm/init.c | 2 arch/arm/mm/ioremap.c | 12 arch/arm/mm/mm.h | 2 arch/arm/mm/mmu.c | 35 arch/arm/mm/pgd.c | 40 arch/arm64/Kconfig | 1 arch/arm64/include/asm/kvm_mmu.h | 10 arch/arm64/include/asm/pgalloc.h | 10 arch/arm64/include/asm/pgtable-types.h | 5 arch/arm64/include/asm/pgtable.h | 37 arch/arm64/include/asm/stage2_pgtable.h | 48 arch/arm64/kernel/hibernate.c | 44 arch/arm64/kvm/mmu.c | 209 arch/arm64/mm/fault.c | 9 arch/arm64/mm/hugetlbpage.c | 15 arch/arm64/mm/kasan_init.c | 26 arch/arm64/mm/mmu.c | 52 arch/arm64/mm/pageattr.c 
| 7 arch/csky/include/asm/highmem.h | 12 arch/csky/mm/highmem.c | 64 arch/h8300/include/asm/pgtable.h | 1 arch/hexagon/include/asm/fixmap.h | 4 arch/hexagon/include/asm/pgtable.h | 1 arch/ia64/include/asm/pgalloc.h | 4 arch/ia64/include/asm/pgtable.h | 17 arch/ia64/mm/fault.c | 7 arch/ia64/mm/hugetlbpage.c | 18 arch/ia64/mm/init.c | 28 arch/microblaze/include/asm/highmem.h | 55 arch/microblaze/mm/highmem.c | 21 arch/microblaze/mm/init.c | 3 arch/mips/include/asm/highmem.h | 11 arch/mips/mm/cache.c | 6 arch/mips/mm/highmem.c | 62 arch/nds32/include/asm/highmem.h | 9 arch/nds32/mm/highmem.c | 49 arch/nios2/include/asm/pgtable.h | 3 arch/nios2/mm/fault.c | 9 arch/nios2/mm/ioremap.c | 6 arch/openrisc/include/asm/pgtable.h | 1 arch/openrisc/mm/fault.c | 10 arch/openrisc/mm/init.c | 4 arch/parisc/include/asm/cacheflush.h | 32 arch/powerpc/Kconfig | 1 arch/powerpc/include/asm/book3s/32/pgtable.h | 1 arch/powerpc/include/asm/book3s/64/hash.h | 4 arch/powerpc/include/asm/book3s/64/pgalloc.h | 4 arch/powerpc/include/asm/book3s/64/pgtable.h | 60 arch/powerpc/include/asm/book3s/64/radix.h | 6 arch/powerpc/include/asm/highmem.h | 56 arch/powerpc/include/asm/nohash/32/pgtable.h | 1 arch/powerpc/include/asm/nohash/64/pgalloc.h | 2 arch/powerpc/include/asm/nohash/64/pgtable-4k.h | 32 arch/powerpc/include/asm/nohash/64/pgtable.h | 6 arch/powerpc/include/asm/pgtable.h | 10 arch/powerpc/kvm/book3s_64_mmu_radix.c | 32 arch/powerpc/lib/code-patching.c | 7 arch/powerpc/mm/book3s64/hash_pgtable.c | 4 arch/powerpc/mm/book3s64/radix_pgtable.c | 26 arch/powerpc/mm/book3s64/subpage_prot.c | 6 arch/powerpc/mm/highmem.c | 26 arch/powerpc/mm/hugetlbpage.c | 28 arch/powerpc/mm/kasan/kasan_init_32.c | 2 arch/powerpc/mm/mem.c | 3 arch/powerpc/mm/nohash/book3e_pgtable.c | 15 arch/powerpc/mm/pgtable.c | 30 arch/powerpc/mm/pgtable_64.c | 10 arch/powerpc/mm/ptdump/hashpagetable.c | 20 arch/powerpc/mm/ptdump/ptdump.c | 12 arch/powerpc/platforms/pseries/hotplug-memory.c | 26 arch/powerpc/xmon/xmon.c | 
27 arch/s390/Kconfig | 1 arch/sh/include/asm/pgtable-2level.h | 1 arch/sh/include/asm/pgtable-3level.h | 1 arch/sh/include/asm/pgtable_32.h | 5 arch/sh/include/asm/pgtable_64.h | 5 arch/sh/kernel/io_trapped.c | 7 arch/sh/mm/cache-sh4.c | 4 arch/sh/mm/cache-sh5.c | 7 arch/sh/mm/fault.c | 64 arch/sh/mm/hugetlbpage.c | 28 arch/sh/mm/init.c | 15 arch/sh/mm/kmap.c | 2 arch/sh/mm/tlbex_32.c | 6 arch/sh/mm/tlbex_64.c | 7 arch/sparc/include/asm/highmem.h | 29 arch/sparc/mm/highmem.c | 31 arch/sparc/mm/io-unit.c | 1 arch/sparc/mm/iommu.c | 1 arch/unicore32/include/asm/pgtable.h | 1 arch/unicore32/kernel/hibernate.c | 4 arch/x86/Kconfig | 1 arch/x86/include/asm/fixmap.h | 1 arch/x86/include/asm/highmem.h | 37 arch/x86/include/asm/pgtable_64.h | 6 arch/x86/mm/highmem_32.c | 52 arch/xtensa/include/asm/highmem.h | 31 arch/xtensa/mm/highmem.c | 28 drivers/block/zram/zcomp.c | 7 drivers/dax/dax-private.h | 1 drivers/dax/kmem.c | 28 drivers/gpu/drm/ttm/ttm_bo_util.c | 56 drivers/gpu/drm/vmwgfx/vmwgfx_blit.c | 17 drivers/rapidio/devices/rio_mport_cdev.c | 27 drivers/usb/core/hcd.c | 3 fs/binfmt_elf.c | 4 fs/binfmt_em86.c | 6 fs/binfmt_misc.c | 4 fs/binfmt_script.c | 6 fs/exec.c | 58 fs/fat/fatent.c | 103 fs/fat/inode.c | 6 fs/proc/array.c | 8 fs/seq_file.c | 7 include/asm-generic/5level-fixup.h | 59 include/asm-generic/pgtable-nop4d-hack.h | 64 include/asm-generic/pgtable-nopud.h | 4 include/drm/ttm/ttm_bo_api.h | 4 include/linux/binfmts.h | 3 include/linux/bitops.h | 2 include/linux/elfnote.h | 2 include/linux/highmem.h | 89 include/linux/ioport.h | 1 include/linux/memory_hotplug.h | 9 include/linux/mm.h | 12 include/linux/sched.h | 3 include/linux/seq_file.h | 19 init/Kconfig | 10 init/main.c | 10 kernel/kcov.c | 282 - kernel/kexec_file.c | 5 kernel/kprobes.c | 34 kernel/relay.c | 22 kernel/user.c | 2 lib/Kconfig.debug | 44 lib/Makefile | 2 lib/flex_proportions.c | 7 lib/math/prime_numbers.c | 10 lib/percpu-refcount.c | 6 lib/strncpy_from_user.c | 1 lib/test_bitops.c | 60 
lib/test_lockup.c | 2 lib/ubsan.c | 33 lib/zlib_inflate/inffast.c | 91 mm/Kconfig | 4 mm/Makefile | 1 mm/compaction.c | 2 mm/debug_vm_pgtable.c | 382 + mm/filemap.c | 2 mm/frontswap.c | 6 mm/huge_memory.c | 2 mm/hugetlb.c | 16 mm/internal.h | 2 mm/kasan/init.c | 11 mm/ksm.c | 10 mm/list_lru.c | 2 mm/memblock.c | 2 mm/memcontrol.c | 4 mm/memory.c | 10 mm/memory_hotplug.c | 179 mm/mmap.c | 2 mm/mremap.c | 2 mm/page-writeback.c | 2 mm/slub.c | 2 mm/sparse.c | 2 mm/util.c | 22 mm/vmalloc.c | 2 mm/vmscan.c | 6 mm/vmstat.c | 32 mm/zbud.c | 2 scripts/checkpatch.pl | 62 scripts/get_maintainer.pl | 46 security/keys/internal.h | 11 security/keys/keyctl.c | 16 tools/testing/selftests/lib/config | 1 tools/testing/selftests/vm/.gitignore | 1 tools/testing/selftests/vm/Makefile | 75 tools/testing/selftests/vm/mremap_dontunmap.c | 1 tools/testing/selftests/vm/pkey-helpers.h | 557 +- tools/testing/selftests/vm/pkey-powerpc.h | 153 tools/testing/selftests/vm/pkey-x86.h | 191 tools/testing/selftests/vm/protection_keys.c | 2370 ++++++++-- tools/testing/selftests/x86/.gitignore | 1 tools/testing/selftests/x86/Makefile | 2 tools/testing/selftests/x86/pkey-helpers.h | 219 tools/testing/selftests/x86/protection_keys.c | 1506 ------ 200 files changed, 5182 insertions(+), 4033 deletions(-)
* incoming
@ 2020-06-02 20:09 Andrew Morton

From: Andrew Morton @ 2020-06-02 20:09 UTC
To: Linus Torvalds; +Cc: mm-commits, linux-mm

A few little subsystems and a start of a lot of MM patches.

128 patches, based on f359287765c04711ff54fbd11645271d8e5ff763:

Subsystems affected by this patch series: squashfs ocfs2 parisc vfs mm/slab-generic mm/slub mm/debug mm/pagecache mm/gup mm/swap mm/memcg mm/pagemap mm/memory-failure mm/vmalloc mm/kasan

Subsystem: squashfs

Philippe Liard <pliard@google.com>:
      squashfs: migrate from ll_rw_block usage to BIO

Subsystem: ocfs2

Jules Irenge <jbi.octave@gmail.com>:
      ocfs2: add missing annotation for dlm_empty_lockres()

Gang He <ghe@suse.com>:
      ocfs2: mount shared volume without ha stack

Subsystem: parisc

Andrew Morton <akpm@linux-foundation.org>:
      arch/parisc/include/asm/pgtable.h: remove unused `old_pte'

Subsystem: vfs

Jeff Layton <jlayton@redhat.com>:
  Patch series "vfs: have syncfs() return error when there are writeback:
      vfs: track per-sb writeback errors and report them to syncfs
      fs/buffer.c: record blockdev write errors in super_block that it backs

Subsystem: mm/slab-generic

Vlastimil Babka <vbabka@suse.cz>:
      usercopy: mark dma-kmalloc caches as usercopy caches

Subsystem: mm/slub

Dongli Zhang <dongli.zhang@oracle.com>:
      mm/slub.c: fix corrupted freechain in deactivate_slab()

Christoph Lameter <cl@linux.com>:
      slub: Remove userspace notifier for cache add/remove

Christopher Lameter <cl@linux.com>:
      slub: remove kmalloc under list_lock from list_slab_objects() V2

Qian Cai <cai@lca.pw>:
      mm/slub: fix stack overruns with SLUB_STATS

Andrew Morton <akpm@linux-foundation.org>:
      Documentation/vm/slub.rst: s/Toggle/Enable/

Subsystem: mm/debug

Vlastimil Babka <vbabka@suse.cz>:
      mm, dump_page(): do not crash with invalid mapping pointer

Subsystem: mm/pagecache

"Matthew Wilcox (Oracle)" <willy@infradead.org>:
  Patch series "Change readahead API", v11:
      mm: move readahead prototypes from mm.h
      mm: return void from various readahead functions
      mm: ignore return value of ->readpages
      mm: move readahead nr_pages check into read_pages
      mm: add new readahead_control API
      mm: use readahead_control to pass arguments
      mm: rename various 'offset' parameters to 'index'
      mm: rename readahead loop variable to 'i'
      mm: remove 'page_offset' from readahead loop
      mm: put readahead pages in cache earlier
      mm: add readahead address space operation
      mm: move end_index check out of readahead loop
      mm: add page_cache_readahead_unbounded
      mm: document why we don't set PageReadahead
      mm: use memalloc_nofs_save in readahead path
      fs: convert mpage_readpages to mpage_readahead
      btrfs: convert from readpages to readahead
      erofs: convert uncompressed files from readpages to readahead
      erofs: convert compressed files from readpages to readahead
      ext4: convert from readpages to readahead
      ext4: pass the inode to ext4_mpage_readpages
      f2fs: convert from readpages to readahead
      f2fs: pass the inode to f2fs_mpage_readpages
      fuse: convert from readpages to readahead
      iomap: convert from readpages to readahead

Guoqing Jiang <guoqing.jiang@cloud.ionos.com>:
  Patch series "Introduce attach/detach_page_private to cleanup code":
      include/linux/pagemap.h: introduce attach/detach_page_private
      md: remove __clear_page_buffers and use attach/detach_page_private
      btrfs: use attach/detach_page_private
      fs/buffer.c: use attach/detach_page_private
      f2fs: use attach/detach_page_private
      iomap: use attach/detach_page_private
      ntfs: replace attach_page_buffers with attach_page_private
      orangefs: use attach/detach_page_private
      buffer_head.h: remove attach_page_buffers
      mm/migrate.c: call detach_page_private to cleanup code
      mm_types.h: change set_page_private to inline function

"Matthew Wilcox (Oracle)" <willy@infradead.org>:
      mm/filemap.c: remove misleading comment

Chao Yu <yuchao0@huawei.com>:
      mm/page-writeback.c: remove unused variable

NeilBrown <neilb@suse.de>:
      mm/writeback: replace PF_LESS_THROTTLE with PF_LOCAL_THROTTLE
      mm/writeback: discard NR_UNSTABLE_NFS, use NR_WRITEBACK instead

Subsystem: mm/gup

Souptick Joarder <jrdr.linux@gmail.com>:
      mm/gup.c: update the documentation

John Hubbard <jhubbard@nvidia.com>:
      mm/gup: introduce pin_user_pages_unlocked
      ivtv: convert get_user_pages() --> pin_user_pages()

Miles Chen <miles.chen@mediatek.com>:
      mm/gup.c: further document vma_permits_fault()

Subsystem: mm/swap

chenqiwu <chenqiwu@xiaomi.com>:
      mm/swapfile: use list_{prev,next}_entry() instead of open-coding

Qian Cai <cai@lca.pw>:
      mm/swap_state: fix a data race in swapin_nr_pages

Andrea Righi <andrea.righi@canonical.com>:
      mm: swap: properly update readahead statistics in unuse_pte_range()

Wei Yang <richard.weiyang@gmail.com>:
      mm/swapfile.c: offset is only used when there is more slots
      mm/swapfile.c: explicitly show ssd/non-ssd is handled mutually exclusive
      mm/swapfile.c: remove the unnecessary goto for SSD case
      mm/swapfile.c: simplify the calculation of n_goal
      mm/swapfile.c: remove the extra check in scan_swap_map_slots()
      mm/swapfile.c: found_free could be represented by (tmp < max)
      mm/swapfile.c: tmp is always smaller than max
      mm/swapfile.c: omit a duplicate code by compare tmp and max first

Huang Ying <ying.huang@intel.com>:
      swap: try to scan more free slots even when fragmented

Wei Yang <richard.weiyang@gmail.com>:
      mm/swapfile.c: classify SWAP_MAP_XXX to make it more readable
      mm/swapfile.c: __swap_entry_free() always free 1 entry

Huang Ying <ying.huang@intel.com>:
      mm/swapfile.c: use prandom_u32_max()
      swap: reduce lock contention on swap cache from swap slots allocation

Randy Dunlap <rdunlap@infradead.org>:
      mm: swapfile: fix /proc/swaps heading and Size/Used/Priority alignment

Miaohe Lin <linmiaohe@huawei.com>:
      include/linux/swap.h: delete meaningless __add_to_swap_cache() declaration

Subsystem: mm/memcg

Yafang Shao <laoar.shao@gmail.com>:
      mm, memcg: add workingset_restore in memory.stat

Kaixu Xia <kaixuxia@tencent.com>:
      mm: memcontrol: simplify value comparison between count and limit

Shakeel Butt <shakeelb@google.com>:
      memcg: expose root cgroup's memory.stat

Jakub Kicinski <kuba@kernel.org>:
  Patch series "memcg: Slow down swap allocation as the available space gets:
      mm/memcg: prepare for swap over-high accounting and penalty calculation
      mm/memcg: move penalty delay clamping out of calculate_high_delay()
      mm/memcg: move cgroup high memory limit setting into struct page_counter
      mm/memcg: automatically penalize tasks with high swap use

Zefan Li <lizefan@huawei.com>:
      memcg: fix memcg_kmem_bypass() for remote memcg charging

Subsystem: mm/pagemap

Steven Price <steven.price@arm.com>:
  Patch series "Fix W+X debug feature on x86":
      x86: mm: ptdump: calculate effective permissions correctly
      mm: ptdump: expand type of 'val' in note_page()

Huang Ying <ying.huang@intel.com>:
      /proc/PID/smaps: Add PMD migration entry parsing

chenqiwu <chenqiwu@xiaomi.com>:
      mm/memory: remove unnecessary pte_devmap case in copy_one_pte()

Subsystem: mm/memory-failure

Wetp Zhang <wetp.zy@linux.alibaba.com>:
      mm, memory_failure: don't send BUS_MCEERR_AO for action required error

Subsystem: mm/vmalloc

Christoph Hellwig <hch@lst.de>:
  Patch series "decruft the vmalloc API", v2:
      x86/hyperv: use vmalloc_exec for the hypercall page
      x86: fix vmap arguments in map_irq_stack
      staging: android: ion: use vmap instead of vm_map_ram
      staging: media: ipu3: use vmap instead of reimplementing it
      dma-mapping: use vmap insted of reimplementing it
      powerpc: add an ioremap_phb helper
      powerpc: remove __ioremap_at and __iounmap_at
      mm: remove __get_vm_area
      mm: unexport unmap_kernel_range_noflush
      mm: rename CONFIG_PGTABLE_MAPPING to CONFIG_ZSMALLOC_PGTABLE_MAPPING
      mm: only allow page table mappings for built-in zsmalloc
      mm: pass addr as unsigned long to vb_free
      mm: remove vmap_page_range_noflush and vunmap_page_range
      mm: rename vmap_page_range to map_kernel_range
      mm: don't return the number of pages from map_kernel_range{,_noflush}
      mm: remove map_vm_range
      mm: remove unmap_vmap_area
      mm: remove the prot argument from vm_map_ram
      mm: enforce that vmap can't map pages executable
      gpu/drm: remove the powerpc hack in drm_legacy_sg_alloc
      mm: remove the pgprot argument to __vmalloc
      mm: remove the prot argument to __vmalloc_node
      mm: remove both instances of __vmalloc_node_flags
      mm: remove __vmalloc_node_flags_caller
      mm: switch the test_vmalloc module to use __vmalloc_node
      mm: remove vmalloc_user_node_flags
      arm64: use __vmalloc_node in arch_alloc_vmap_stack
      powerpc: use __vmalloc_node in alloc_vm_stack
      s390: use __vmalloc_node in stack_alloc

Joerg Roedel <jroedel@suse.de>:
  Patch series "mm: Get rid of vmalloc_sync_(un)mappings()", v3:
      mm: add functions to track page directory modifications
      mm/vmalloc: track which page-table levels were modified
      mm/ioremap: track which page-table levels were modified
      x86/mm/64: implement arch_sync_kernel_mappings()
      x86/mm/32: implement arch_sync_kernel_mappings()
      mm: remove vmalloc_sync_(un)mappings()
      x86/mm: remove vmalloc faulting

Subsystem: mm/kasan

Andrey Konovalov <andreyknvl@google.com>:
      kasan: fix clang compilation warning due to stack protector

Kees Cook <keescook@chromium.org>:
      ubsan: entirely disable alignment checks under UBSAN_TRAP

Jing Xia <jing.xia@unisoc.com>:
      mm/mm_init.c: report kasan-tag information stored in page->flags

Andrey Konovalov <andreyknvl@google.com>:
      kasan: move kasan_report() into report.c

 Documentation/admin-guide/cgroup-v2.rst | 24 + Documentation/core-api/cachetlb.rst | 2 Documentation/filesystems/locking.rst | 6 Documentation/filesystems/proc.rst | 4 Documentation/filesystems/vfs.rst | 15 Documentation/vm/slub.rst | 2 arch/arm/configs/omap2plus_defconfig | 2 arch/arm64/include/asm/pgtable.h | 3 arch/arm64/include/asm/vmap_stack.h | 6 arch/arm64/mm/dump.c | 2 arch/parisc/include/asm/pgtable.h | 2 arch/powerpc/include/asm/io.h | 10 arch/powerpc/include/asm/pci-bridge.h | 2 arch/powerpc/kernel/irq.c | 5 arch/powerpc/kernel/isa-bridge.c | 28 + arch/powerpc/kernel/pci_64.c |
arch/riscv/include/asm/pgtable.h | 4 arch/riscv/mm/ptdump.c | 2 arch/s390/kernel/setup.c | 9 arch/sh/kernel/cpu/sh4/sq.c | 3 arch/x86/hyperv/hv_init.c | 5 arch/x86/include/asm/kvm_host.h | 3 arch/x86/include/asm/pgtable-2level_types.h | 2 arch/x86/include/asm/pgtable-3level_types.h | 2 arch/x86/include/asm/pgtable_64_types.h | 2 arch/x86/include/asm/pgtable_types.h | 8 arch/x86/include/asm/switch_to.h | 23 - arch/x86/kernel/irq_64.c | 2 arch/x86/kernel/setup_percpu.c | 6 arch/x86/kvm/svm/sev.c | 3 arch/x86/mm/dump_pagetables.c | 35 + arch/x86/mm/fault.c | 196 ---------- arch/x86/mm/init_64.c | 5 arch/x86/mm/pti.c | 8 arch/x86/mm/tlb.c | 37 - block/blk-core.c | 1 drivers/acpi/apei/ghes.c | 6 drivers/base/node.c | 2 drivers/block/drbd/drbd_bitmap.c | 4 drivers/block/loop.c | 2 drivers/dax/device.c | 1 drivers/gpu/drm/drm_scatter.c | 11 drivers/gpu/drm/etnaviv/etnaviv_dump.c | 4 drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c | 2 drivers/lightnvm/pblk-init.c | 5 drivers/md/dm-bufio.c | 4 drivers/md/md-bitmap.c | 12 drivers/media/common/videobuf2/videobuf2-dma-sg.c | 3 drivers/media/common/videobuf2/videobuf2-vmalloc.c | 3 drivers/media/pci/ivtv/ivtv-udma.c | 19 - drivers/media/pci/ivtv/ivtv-yuv.c | 17 drivers/media/pci/ivtv/ivtvfb.c | 4 drivers/mtd/ubi/io.c | 4 drivers/pcmcia/electra_cf.c | 45 -- drivers/scsi/sd_zbc.c | 3 drivers/staging/android/ion/ion_heap.c | 4 drivers/staging/media/ipu3/ipu3-css-pool.h | 4 drivers/staging/media/ipu3/ipu3-dmamap.c | 30 - fs/block_dev.c | 7 fs/btrfs/disk-io.c | 4 fs/btrfs/extent_io.c | 64 --- fs/btrfs/extent_io.h | 3 fs/btrfs/inode.c | 39 -- fs/buffer.c | 23 - fs/erofs/data.c | 41 -- fs/erofs/decompressor.c | 2 fs/erofs/zdata.c | 31 - fs/exfat/inode.c | 7 fs/ext2/inode.c | 10 fs/ext4/ext4.h | 5 fs/ext4/inode.c | 25 - fs/ext4/readpage.c | 25 - fs/ext4/verity.c | 35 - fs/f2fs/data.c | 56 +- fs/f2fs/f2fs.h | 14 fs/f2fs/verity.c | 35 - fs/fat/inode.c | 7 fs/file_table.c | 1 fs/fs-writeback.c | 1 fs/fuse/file.c | 100 +---- 
fs/gfs2/aops.c | 23 - fs/gfs2/dir.c | 9 fs/gfs2/quota.c | 2 fs/hpfs/file.c | 7 fs/iomap/buffered-io.c | 113 +---- fs/iomap/trace.h | 2 fs/isofs/inode.c | 7 fs/jfs/inode.c | 7 fs/mpage.c | 38 -- fs/nfs/blocklayout/extent_tree.c | 2 fs/nfs/internal.h | 10 fs/nfs/write.c | 4 fs/nfsd/vfs.c | 9 fs/nilfs2/inode.c | 15 fs/ntfs/aops.c | 2 fs/ntfs/malloc.h | 2 fs/ntfs/mft.c | 2 fs/ocfs2/aops.c | 34 - fs/ocfs2/dlm/dlmmaster.c | 1 fs/ocfs2/ocfs2.h | 4 fs/ocfs2/slot_map.c | 46 +- fs/ocfs2/super.c | 21 + fs/omfs/file.c | 7 fs/open.c | 3 fs/orangefs/inode.c | 32 - fs/proc/meminfo.c | 3 fs/proc/task_mmu.c | 16 fs/qnx6/inode.c | 7 fs/reiserfs/inode.c | 8 fs/squashfs/block.c | 273 +++++++------- fs/squashfs/decompressor.h | 5 fs/squashfs/decompressor_multi.c | 9 fs/squashfs/decompressor_multi_percpu.c | 17 fs/squashfs/decompressor_single.c | 9 fs/squashfs/lz4_wrapper.c | 17 fs/squashfs/lzo_wrapper.c | 17 fs/squashfs/squashfs.h | 4 fs/squashfs/xz_wrapper.c | 51 +- fs/squashfs/zlib_wrapper.c | 63 +-- fs/squashfs/zstd_wrapper.c | 62 +-- fs/sync.c | 6 fs/ubifs/debug.c | 2 fs/ubifs/lprops.c | 2 fs/ubifs/lpt_commit.c | 4 fs/ubifs/orphan.c | 2 fs/udf/inode.c | 7 fs/xfs/kmem.c | 2 fs/xfs/xfs_aops.c | 13 fs/xfs/xfs_buf.c | 2 fs/zonefs/super.c | 7 include/asm-generic/5level-fixup.h | 5 include/asm-generic/pgtable.h | 27 + include/linux/buffer_head.h | 8 include/linux/fs.h | 18 include/linux/iomap.h | 3 include/linux/memcontrol.h | 4 include/linux/mm.h | 67 ++- include/linux/mm_types.h | 6 include/linux/mmzone.h | 1 include/linux/mpage.h | 4 include/linux/page_counter.h | 8 include/linux/pagemap.h | 193 ++++++++++ include/linux/ptdump.h | 3 include/linux/sched.h | 3 include/linux/swap.h | 17 include/linux/vmalloc.h | 49 +- include/linux/zsmalloc.h | 2 include/trace/events/erofs.h | 6 include/trace/events/f2fs.h | 6 include/trace/events/writeback.h | 5 kernel/bpf/core.c | 6 kernel/bpf/syscall.c | 29 - kernel/dma/remap.c | 48 -- kernel/groups.c | 2 kernel/module.c | 3 kernel/notifier.c | 1 
kernel/sys.c | 2 kernel/trace/trace.c | 12 lib/Kconfig.ubsan | 2 lib/ioremap.c | 46 +- lib/test_vmalloc.c | 26 - mm/Kconfig | 4 mm/debug.c | 56 ++ mm/fadvise.c | 6 mm/filemap.c | 1 mm/gup.c | 77 +++- mm/internal.h | 14 mm/kasan/Makefile | 21 - mm/kasan/common.c | 19 - mm/kasan/report.c | 22 + mm/memcontrol.c | 198 +++++++--- mm/memory-failure.c | 15 mm/memory.c | 2 mm/migrate.c | 9 mm/mm_init.c | 16 mm/nommu.c | 52 +- mm/page-writeback.c | 62 ++- mm/page_alloc.c | 7 mm/percpu.c | 2 mm/ptdump.c | 17 mm/readahead.c | 349 ++++++++-------- mm/slab_common.c | 3 mm/slub.c | 67 ++- mm/swap_state.c | 5 mm/swapfile.c | 194 ++++++---- mm/util.c | 2 mm/vmalloc.c | 399 ++++++++------------- mm/vmscan.c | 4 mm/vmstat.c | 11 mm/zsmalloc.c | 12 net/bridge/netfilter/ebtables.c | 6 net/ceph/ceph_common.c | 3 sound/core/memalloc.c | 2 sound/core/pcm_memory.c | 2 195 files changed, 2292 insertions(+), 2288 deletions(-)
* incoming
@ 2020-06-02 4:44 Andrew Morton

From: Andrew Morton @ 2020-06-02 4:44 UTC
To: Linus Torvalds; +Cc: mm-commits, linux-mm

A few little subsystems and a start of a lot of MM patches.

128 patches, based on 9bf9511e3d9f328c03f6f79bfb741c3d18f2f2c0:

Subsystems affected by this patch series: squashfs ocfs2 parisc vfs mm/slab-generic mm/slub mm/debug mm/pagecache mm/gup mm/swap mm/memcg mm/pagemap mm/memory-failure mm/vmalloc mm/kasan
1 fs/fuse/file.c | 100 +---- fs/gfs2/aops.c | 23 - fs/gfs2/dir.c | 9 fs/gfs2/quota.c | 2 fs/hpfs/file.c | 7 fs/iomap/buffered-io.c | 113 +---- fs/iomap/trace.h | 2 fs/isofs/inode.c | 7 fs/jfs/inode.c | 7 fs/mpage.c | 38 -- fs/nfs/blocklayout/extent_tree.c | 2 fs/nfs/internal.h | 10 fs/nfs/write.c | 4 fs/nfsd/vfs.c | 9 fs/nilfs2/inode.c | 15 fs/ntfs/aops.c | 2 fs/ntfs/malloc.h | 2 fs/ntfs/mft.c | 2 fs/ocfs2/aops.c | 34 - fs/ocfs2/dlm/dlmmaster.c | 1 fs/ocfs2/ocfs2.h | 4 fs/ocfs2/slot_map.c | 46 +- fs/ocfs2/super.c | 21 + fs/omfs/file.c | 7 fs/open.c | 3 fs/orangefs/inode.c | 32 - fs/proc/meminfo.c | 3 fs/proc/task_mmu.c | 16 fs/qnx6/inode.c | 7 fs/reiserfs/inode.c | 8 fs/squashfs/block.c | 273 +++++++------- fs/squashfs/decompressor.h | 5 fs/squashfs/decompressor_multi.c | 9 fs/squashfs/decompressor_multi_percpu.c | 17 fs/squashfs/decompressor_single.c | 9 fs/squashfs/lz4_wrapper.c | 17 fs/squashfs/lzo_wrapper.c | 17 fs/squashfs/squashfs.h | 4 fs/squashfs/xz_wrapper.c | 51 +- fs/squashfs/zlib_wrapper.c | 63 +-- fs/squashfs/zstd_wrapper.c | 62 +-- fs/sync.c | 6 fs/ubifs/debug.c | 2 fs/ubifs/lprops.c | 2 fs/ubifs/lpt_commit.c | 4 fs/ubifs/orphan.c | 2 fs/udf/inode.c | 7 fs/xfs/kmem.c | 2 fs/xfs/xfs_aops.c | 13 fs/xfs/xfs_buf.c | 2 fs/zonefs/super.c | 7 include/asm-generic/5level-fixup.h | 5 include/asm-generic/pgtable.h | 27 + include/linux/buffer_head.h | 8 include/linux/fs.h | 18 include/linux/iomap.h | 3 include/linux/memcontrol.h | 4 include/linux/mm.h | 67 ++- include/linux/mm_types.h | 6 include/linux/mmzone.h | 1 include/linux/mpage.h | 4 include/linux/page_counter.h | 8 include/linux/pagemap.h | 193 ++++++++++ include/linux/ptdump.h | 3 include/linux/sched.h | 3 include/linux/swap.h | 17 include/linux/vmalloc.h | 49 +- include/linux/zsmalloc.h | 2 include/trace/events/erofs.h | 6 include/trace/events/f2fs.h | 6 include/trace/events/writeback.h | 5 kernel/bpf/core.c | 6 kernel/bpf/syscall.c | 29 - kernel/dma/remap.c | 48 -- kernel/groups.c | 2 kernel/module.c | 
3 kernel/notifier.c | 1 kernel/sys.c | 2 kernel/trace/trace.c | 12 lib/Kconfig.ubsan | 2 lib/ioremap.c | 46 +- lib/test_vmalloc.c | 26 - mm/Kconfig | 4 mm/debug.c | 56 ++ mm/fadvise.c | 6 mm/filemap.c | 1 mm/gup.c | 77 +++- mm/internal.h | 14 mm/kasan/Makefile | 21 - mm/kasan/common.c | 19 - mm/kasan/report.c | 22 + mm/memcontrol.c | 198 +++++++--- mm/memory-failure.c | 15 mm/memory.c | 2 mm/migrate.c | 9 mm/mm_init.c | 16 mm/nommu.c | 52 +- mm/page-writeback.c | 62 ++- mm/page_alloc.c | 7 mm/percpu.c | 2 mm/ptdump.c | 17 mm/readahead.c | 349 ++++++++++-------- mm/slab_common.c | 3 mm/slub.c | 67 ++- mm/swap_state.c | 5 mm/swapfile.c | 194 ++++++---- mm/util.c | 2 mm/vmalloc.c | 399 ++++++++------------- mm/vmscan.c | 4 mm/vmstat.c | 11 mm/zsmalloc.c | 12 net/bridge/netfilter/ebtables.c | 6 net/ceph/ceph_common.c | 3 sound/core/memalloc.c | 2 sound/core/pcm_memory.c | 2 195 files changed, 2292 insertions(+), 2288 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming
  2020-06-02  4:44 incoming Andrew Morton
@ 2020-06-02 20:08 ` Andrew Morton
  2020-06-02 20:45 ` incoming Linus Torvalds

From: Andrew Morton @ 2020-06-02 20:08 UTC
To: Linus Torvalds, mm-commits, linux-mm

The local_lock merge made rather a mess of all of this.  I'm
cooking up a full resend of the same material.
* Re: incoming
  2020-06-02 20:08 ` incoming Andrew Morton
@ 2020-06-02 20:45 ` Linus Torvalds
  2020-06-02 21:38 ` incoming Andrew Morton

From: Linus Torvalds @ 2020-06-02 20:45 UTC
To: Andrew Morton; +Cc: mm-commits, Linux-MM

On Tue, Jun 2, 2020 at 1:08 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> The local_lock merge made rather a mess of all of this.  I'm
> cooking up a full resend of the same material.

Hmm. I have no issues with conflicts, and already took your previous series.

I've pushed it out now - does my tree match what you expect?

            Linus
* Re: incoming
  2020-06-02 20:45 ` incoming Linus Torvalds
@ 2020-06-02 21:38 ` Andrew Morton
  2020-06-02 22:18 ` incoming Linus Torvalds

From: Andrew Morton @ 2020-06-02 21:38 UTC
To: Linus Torvalds; +Cc: mm-commits, Linux-MM

On Tue, 2 Jun 2020 13:45:49 -0700 Linus Torvalds <torvalds@linux-foundation.org> wrote:

> On Tue, Jun 2, 2020 at 1:08 PM Andrew Morton <akpm@linux-foundation.org> wrote:
> >
> > The local_lock merge made rather a mess of all of this.  I'm
> > cooking up a full resend of the same material.
>
> Hmm. I have no issues with conflicts, and already took your previous series.

Well that's odd.

> I've pushed it out now - does my tree match what you expect?

Yup, thanks.
* Re: incoming
  2020-06-02 21:38 ` incoming Andrew Morton
@ 2020-06-02 22:18 ` Linus Torvalds

From: Linus Torvalds @ 2020-06-02 22:18 UTC
To: Andrew Morton; +Cc: mm-commits, Linux-MM

On Tue, Jun 2, 2020 at 2:38 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Tue, 2 Jun 2020 13:45:49 -0700 Linus Torvalds <torvalds@linux-foundation.org> wrote:
> >
> > Hmm. I have no issues with conflicts, and already took your previous series.
>
> Well that's odd.

I meant "I saw the conflicts and had no issue with them". Nothing odd.

And I actually much prefer seeing conflicts from your series (against
other pulls I've done) over having you delay your patch bombs because
of any fear for them.

            Linus
* incoming
@ 2020-05-28  5:20 Andrew Morton
  2020-05-28 20:10 ` incoming Linus Torvalds

From: Andrew Morton @ 2020-05-28 5:20 UTC
To: Linus Torvalds; +Cc: mm-commits, linux-mm

5 fixes, based on 444fc5cde64330661bf59944c43844e7d4c2ccd8:

Qian Cai <cai@lca.pw>:
  mm/z3fold: silence kmemleak false positives of slots

Hugh Dickins <hughd@google.com>:
  mm,thp: stop leaking unreleased file pages

Konstantin Khlebnikov <khlebnikov@yandex-team.ru>:
  mm: remove VM_BUG_ON(PageSlab()) from page_mapcount()

Alexander Potapenko <glider@google.com>:
  fs/binfmt_elf.c: allocate initialized memory in fill_thread_core_info()

Arnd Bergmann <arnd@arndb.de>:
  include/asm-generic/topology.h: guard cpumask_of_node() macro argument

 fs/binfmt_elf.c                |  2 +-
 include/asm-generic/topology.h |  2 +-
 include/linux/mm.h             | 19 +++++++++++++++----
 mm/khugepaged.c                |  1 +
 mm/z3fold.c                    |  3 +++
 5 files changed, 21 insertions(+), 6 deletions(-)
* Re: incoming
  2020-05-28  5:20 incoming Andrew Morton
@ 2020-05-28 20:10 ` Linus Torvalds
  2020-05-29 20:31 ` incoming Andrew Morton

From: Linus Torvalds @ 2020-05-28 20:10 UTC
To: Andrew Morton; +Cc: mm-commits, Linux-MM

Hmm..

On Wed, May 27, 2020 at 10:20 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
>  fs/binfmt_elf.c                |  2 +-
>  include/asm-generic/topology.h |  2 +-
>  include/linux/mm.h             | 19 +++++++++++++++----
>  mm/khugepaged.c                |  1 +
>  mm/z3fold.c                    |  3 +++
>  5 files changed, 21 insertions(+), 6 deletions(-)

I wonder how you generate that diffstat.

The change to <linux/mm.h> simply doesn't match what you sent me. The
patch you sent me that changed mm.h had this:

  include/linux/mm.h | 15 +++++++++++++--
  1 file changed, 13 insertions(+), 2 deletions(-)

(note 15 lines changed: it's +13 and -2) but now suddenly in your
overall diffstat you have that

  include/linux/mm.h | 19 +++++++++++++++----

with +15/-4.

So your diffstat simply doesn't match what you are sending. What's going on?

            Linus
* Re: incoming
  2020-05-28 20:10 ` incoming Linus Torvalds
@ 2020-05-29 20:31 ` Andrew Morton
  2020-05-29 20:38 ` incoming Linus Torvalds

From: Andrew Morton @ 2020-05-29 20:31 UTC
To: Linus Torvalds; +Cc: mm-commits, Linux-MM

On Thu, 28 May 2020 13:10:18 -0700 Linus Torvalds <torvalds@linux-foundation.org> wrote:

> Hmm..
>
> On Wed, May 27, 2020 at 10:20 PM Andrew Morton
> <akpm@linux-foundation.org> wrote:
> >
> >  fs/binfmt_elf.c                |  2 +-
> >  include/asm-generic/topology.h |  2 +-
> >  include/linux/mm.h             | 19 +++++++++++++++----
> >  mm/khugepaged.c                |  1 +
> >  mm/z3fold.c                    |  3 +++
> >  5 files changed, 21 insertions(+), 6 deletions(-)
>
> I wonder how you generate that diffstat.
>
> The change to <linux/mm.h> simply doesn't match what you sent me. The
> patch you sent me that changed mm.h had this:
>
>   include/linux/mm.h | 15 +++++++++++++--
>   1 file changed, 13 insertions(+), 2 deletions(-)
>
> (note 15 lines changed: it's +13 and -2) but now suddenly in your
> overall diffstat you have that
>
>   include/linux/mm.h | 19 +++++++++++++++----
>
> with +15/-4.
>
> So your diffstat simply doesn't match what you are sending. What's going on?

Bah.  I got lazy (didn't want to interrupt an ongoing build) so I
generated the diffstat prior to folding two patches into a single one.
Evidently diffstat isn't as smart as I had assumed!
* Re: incoming
  2020-05-29 20:31 ` incoming Andrew Morton
@ 2020-05-29 20:38 ` Linus Torvalds
  2020-05-29 21:12 ` incoming Andrew Morton

From: Linus Torvalds @ 2020-05-29 20:38 UTC
To: Andrew Morton; +Cc: mm-commits, Linux-MM

On Fri, May 29, 2020 at 1:31 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> Bah.  I got lazy (didn't want to interrupt an ongoing build) so I
> generated the diffstat prior to folding two patches into a single one.
> Evidently diffstat isn't as smart as I had assumed!

Ahh. Yes - given two patches, diffstat just adds up the line number
counts for the individual diffs, it doesn't count some kind of
"combined diff result" line counts.

            Linus
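[Archive note] The behaviour Linus describes here can be reproduced outside of git or diffstat. A minimal sketch (Python's difflib standing in for the diff tooling, with made-up file contents) shows why summing per-patch counts overstates the folded result whenever a later patch touches lines an earlier one added:

```python
import difflib

def stat(old, new):
    """Count insertions/deletions the way a unified diff (or diffstat) does."""
    added = removed = 0
    for line in difflib.unified_diff(old, new, lineterm=""):
        if line.startswith("+") and not line.startswith("+++"):
            added += 1
        elif line.startswith("-") and not line.startswith("---"):
            removed += 1
    return added, removed

# Three hypothetical versions of a file: v0 --patch 1--> v1 --patch 2--> v2.
# Patch 2 rewrites a line that patch 1 introduced.
v0 = ["int a;"]
v1 = ["int a;", "int b;", "int c;"]   # patch 1: +2/-0
v2 = ["int a;", "long b;", "int c;"]  # patch 2: +1/-1

p1, p2, folded = stat(v0, v1), stat(v1, v2), stat(v0, v2)
print("per-patch sum:", (p1[0] + p2[0], p1[1] + p2[1]))  # (3, 1)
print("folded patch: ", folded)                          # (2, 0)
```

The per-patch sum counts the rewritten line twice (once as an insertion in patch 1, again as +1/-1 in patch 2), while the diff of the folded patch sees it only once - the same effect that made the summed mm.h stat larger than the single folded patch Andrew actually sent.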
* Re: incoming
  2020-05-29 20:38 ` incoming Linus Torvalds
@ 2020-05-29 21:12 ` Andrew Morton
  2020-05-29 21:20 ` incoming Linus Torvalds

From: Andrew Morton @ 2020-05-29 21:12 UTC
To: Linus Torvalds; +Cc: mm-commits, Linux-MM

On Fri, 29 May 2020 13:38:35 -0700 Linus Torvalds <torvalds@linux-foundation.org> wrote:

> On Fri, May 29, 2020 at 1:31 PM Andrew Morton <akpm@linux-foundation.org> wrote:
> >
> > Bah.  I got lazy (didn't want to interrupt an ongoing build) so I
> > generated the diffstat prior to folding two patches into a single one.
> > Evidently diffstat isn't as smart as I had assumed!
>
> Ahh. Yes - given two patches, diffstat just adds up the line number
> counts for the individual diffs, it doesn't count some kind of
> "combined diff result" line counts.

Stupid diffstat.  Means that basically all my diffstats are very wrong.

Thanks for spotting it.

I can fix that...
* Re: incoming
  2020-05-29 21:12 ` incoming Andrew Morton
@ 2020-05-29 21:20 ` Linus Torvalds

From: Linus Torvalds @ 2020-05-29 21:20 UTC
To: Andrew Morton; +Cc: mm-commits, Linux-MM

On Fri, May 29, 2020 at 2:12 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> Stupid diffstat.  Means that basically all my diffstats are very wrong.

I'm actually used to diffstats not matching 100%. Usually it's not due
to this issue - a "git diff --stat" *will* give the stat from the
actual combined diff result - but with git diffstats the issue is that
I might have gotten a patch from another source. So the diffstat I see
after-the-merge is possibly different from the pre-merge diffstat
simply due to merge issues.

So then I usually take a look at "ok, why did that diffstat differ"
and go "Ahh".

In your case, when I looked at the diffstat, I couldn't for the life
of me see how you would have gotten the diffstat you did, since I only
saw a single patch with no merge issues.

> Thanks for spotting it.
>
> I can fix that...

I can also just live with it, knowing what your workflow is.

The diffstat matching exactly just isn't that important - in fact,
different versions of "diff" can give slightly different output anyway
depending on diff algorithms even when they are looking at the exact
same before/after state. There's not necessarily always only one way
to generate a valid diff.

So to me, the diffstat is more of a guide than a hard thing, and I
want to see the rough outline. In fact, one reason I want to see it in
pull requests is actually just that I want to get a feel for what
changes even before I do the pull or merge, so it's not just a "match
against what I get" thing.

            Linus
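[Archive note] Linus's aside that there is "not necessarily always only one way to generate a valid diff" is easy to make concrete: any edit script that turns the old file into the new one is a valid diff, and different algorithms simply pick different scripts. A small sketch with invented file contents (the scripts below are hand-written, not produced by a real diff tool):

```python
# Two "valid diffs" for the same before/after pair of hypothetical files.
old = ["A", "B", "C", "D"]
new = ["A", "X", "C", "D"]

# The kind of minimal script an LCS-style algorithm (e.g. Myers) would emit:
minimal = [("keep", "A"), ("del", "B"), ("add", "X"), ("keep", "C"), ("keep", "D")]

# A wasteful but equally valid script: delete every line, re-add every line.
naive = [("del", line) for line in old] + [("add", line) for line in new]

def apply_script(script):
    """Replay an edit script; the result is the 'after' file."""
    return [line for op, line in script if op in ("keep", "add")]

def stat(script):
    """(insertions, deletions) -- what a diffstat of this script would report."""
    return (sum(op == "add" for op, _ in script),
            sum(op == "del" for op, _ in script))

# Both scripts reproduce exactly the same file...
assert apply_script(minimal) == new
assert apply_script(naive) == new

# ...but their diffstats differ: +1/-1 versus +4/-4.
print(stat(minimal), stat(naive))
```

Both transformations are correct, which is why the diffstat is best treated as a rough guide rather than something that must match bit-for-bit.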
* incoming
@ 2020-05-23  5:22 Andrew Morton

From: Andrew Morton @ 2020-05-23 5:22 UTC
To: Linus Torvalds; +Cc: mm-commits, linux-mm

11 fixes, based on 444565650a5fe9c63ddf153e6198e31705dedeb2:

David Hildenbrand <david@redhat.com>:
  device-dax: don't leak kernel memory to user space after unloading kmem

Nick Desaulniers <ndesaulniers@google.com>:
  x86: bitops: fix build regression

John Hubbard <jhubbard@nvidia.com>:
  rapidio: fix an error in get_user_pages_fast() error handling
  selftests/vm/.gitignore: add mremap_dontunmap
  selftests/vm/write_to_hugetlbfs.c: fix unused variable warning

Marco Elver <elver@google.com>:
  kasan: disable branch tracing for core runtime

Arnd Bergmann <arnd@arndb.de>:
  sh: include linux/time_types.h for sockios

Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>:
  MAINTAINERS: update email address for Naoya Horiguchi

Mike Rapoport <rppt@linux.ibm.com>:
  sparc32: use PUD rather than PGD to get PMD in srmmu_nocache_init()

Uladzislau Rezki <uladzislau.rezki@sony.com>:
  z3fold: fix use-after-free when freeing handles

Baoquan He <bhe@redhat.com>:
  MAINTAINERS: add files related to kdump

 MAINTAINERS                                     |  7 ++++++-
 arch/sh/include/uapi/asm/sockios.h              |  2 ++
 arch/sparc/mm/srmmu.c                           |  2 +-
 arch/x86/include/asm/bitops.h                   | 12 ++++++------
 drivers/dax/kmem.c                              | 14 +++++++++++---
 drivers/rapidio/devices/rio_mport_cdev.c        |  5 +++++
 mm/kasan/Makefile                               | 16 ++++++++--------
 mm/kasan/generic.c                              |  1 -
 mm/kasan/tags.c                                 |  1 -
 mm/z3fold.c                                     | 11 ++++++-----
 tools/testing/selftests/vm/.gitignore           |  1 +
 tools/testing/selftests/vm/write_to_hugetlbfs.c |  2 --
 12 files changed, 46 insertions(+), 28 deletions(-)
* incoming
@ 2020-05-14  0:50 Andrew Morton

From: Andrew Morton @ 2020-05-14 0:50 UTC
To: Linus Torvalds; +Cc: mm-commits, linux-mm

7 fixes, based on 24085f70a6e1b0cb647ec92623284641d8270637:

Yafang Shao <laoar.shao@gmail.com>:
  mm, memcg: fix inconsistent oom event behavior

Roman Penyaev <rpenyaev@suse.de>:
  epoll: call final ep_events_available() check under the lock

Peter Xu <peterx@redhat.com>:
  mm/gup: fix fixup_user_fault() on multiple retries

Brian Geffon <bgeffon@google.com>:
  userfaultfd: fix remap event with MREMAP_DONTUNMAP

Vasily Averin <vvs@virtuozzo.com>:
  ipc/util.c: sysvipc_find_ipc() incorrectly updates position index

Andrey Konovalov <andreyknvl@google.com>:
  kasan: consistently disable debugging features
  kasan: add missing functions declarations to kasan.h

 fs/eventpoll.c             | 48 ++++++++++++++++++++++++++---------------------
 include/linux/memcontrol.h |  2 +
 ipc/util.c                 | 12 +++++------
 mm/gup.c                   | 12 ++++++-----
 mm/kasan/Makefile          | 15 +++++++++-----
 mm/kasan/kasan.h           | 34 ++++++++++++++++++++++++++++++-
 mm/mremap.c                |  2 -
 7 files changed, 86 insertions(+), 39 deletions(-)
* incoming
@ 2020-05-08  1:35 Andrew Morton

From: Andrew Morton @ 2020-05-08 1:35 UTC
To: Linus Torvalds; +Cc: mm-commits, linux-mm

14 fixes and one selftest to verify the ipc fixes herein.

15 patches, based on a811c1fa0a02c062555b54651065899437bacdbe:

Oleg Nesterov <oleg@redhat.com>:
  ipc/mqueue.c: change __do_notify() to bypass check_kill_permission()

Yafang Shao <laoar.shao@gmail.com>:
  mm, memcg: fix error return value of mem_cgroup_css_alloc()

David Hildenbrand <david@redhat.com>:
  mm/page_alloc: fix watchdog soft lockups during set_zone_contiguous()

Maciej Grochowski <maciej.grochowski@pm.me>:
  kernel/kcov.c: fix typos in kcov_remote_start documentation

Ivan Delalande <colona@arista.com>:
  scripts/decodecode: fix trapping instruction formatting

Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>:
  arch/x86/kvm/svm/sev.c: change flag passed to GUP fast in sev_pin_memory()

Khazhismel Kumykov <khazhy@google.com>:
  eventpoll: fix missing wakeup for ovflist in ep_poll_callback

Aymeric Agon-Rambosson <aymeric.agon@yandex.com>:
  scripts/gdb: repair rb_first() and rb_last()

Waiman Long <longman@redhat.com>:
  mm/slub: fix incorrect interpretation of s->offset

Filipe Manana <fdmanana@suse.com>:
  percpu: make pcpu_alloc() aware of current gfp context

Roman Penyaev <rpenyaev@suse.de>:
  kselftests: introduce new epoll60 testcase for catching lost wakeups
  epoll: atomically remove wait entry on wake up

Qiwu Chen <qiwuchen55@gmail.com>:
  mm/vmscan: remove unnecessary argument description of isolate_lru_pages()

Kees Cook <keescook@chromium.org>:
  ubsan: disable UBSAN_ALIGNMENT under COMPILE_TEST

Henry Willard <henry.willard@oracle.com>:
  mm: limit boost_watermark on small zones

 arch/x86/kvm/svm/sev.c                                        |   2
 fs/eventpoll.c                                                |  61 ++--
 ipc/mqueue.c                                                  |  34 +-
 kernel/kcov.c                                                 |   4
 lib/Kconfig.ubsan                                             |  15 -
 mm/memcontrol.c                                               |  15 -
 mm/page_alloc.c                                               |   9
 mm/percpu.c                                                   |  14
 mm/slub.c                                                     |  45 ++-
 mm/vmscan.c                                                   |   1
 scripts/decodecode                                            |   2
 scripts/gdb/linux/rbtree.py                                   |   4
 tools/testing/selftests/filesystems/epoll/epoll_wakeup_test.c | 146 ++++++++++
 tools/testing/selftests/wireguard/qemu/debug.config           |   1
 14 files changed, 275 insertions(+), 78 deletions(-)
* incoming
@ 2020-04-21  1:13 Andrew Morton

From: Andrew Morton @ 2020-04-21 1:13 UTC
To: Linus Torvalds; +Cc: mm-commits, linux-mm

15 fixes, based on ae83d0b416db002fe95601e7f97f64b59514d936:

Masahiro Yamada <masahiroy@kernel.org>:
  sh: fix build error in mm/init.c

Kees Cook <keescook@chromium.org>:
  slub: avoid redzone when choosing freepointer location

Peter Xu <peterx@redhat.com>:
  mm/userfaultfd: disable userfaultfd-wp on x86_32

Bartosz Golaszewski <bgolaszewski@baylibre.com>:
  MAINTAINERS: add an entry for kfifo

Longpeng <longpeng2@huawei.com>:
  mm/hugetlb: fix a addressing exception caused by huge_pte_offset

Michal Hocko <mhocko@suse.com>:
  mm, gup: return EINTR when gup is interrupted by fatal signals

Christophe JAILLET <christophe.jaillet@wanadoo.fr>:
  checkpatch: fix a typo in the regex for $allocFunctions

George Burgess IV <gbiv@google.com>:
  tools/build: tweak unused value workaround

Muchun Song <songmuchun@bytedance.com>:
  mm/ksm: fix NULL pointer dereference when KSM zero page is enabled

Hugh Dickins <hughd@google.com>:
  mm/shmem: fix build without THP

Jann Horn <jannh@google.com>:
  vmalloc: fix remap_vmalloc_range() bounds checks

Hugh Dickins <hughd@google.com>:
  shmem: fix possible deadlocks on shmlock_user_lock

Yang Shi <yang.shi@linux.alibaba.com>:
  mm: shmem: disable interrupt when acquiring info->lock in userfaultfd_copy path

Sudip Mukherjee <sudipm.mukherjee@gmail.com>:
  coredump: fix null pointer dereference on coredump

Lucas Stach <l.stach@pengutronix.de>:
  tools/vm: fix cross-compile build

 MAINTAINERS                                      |  7 +++++++
 arch/sh/mm/init.c                                |  2 +-
 arch/x86/Kconfig                                 |  2 +-
 fs/coredump.c                                    |  2 ++
 fs/proc/vmcore.c                                 |  5 +++--
 include/linux/vmalloc.h                          |  2 +-
 mm/gup.c                                         |  2 +-
 mm/hugetlb.c                                     | 14 ++++++++------
 mm/ksm.c                                         | 12 ++++++++++--
 mm/shmem.c                                       | 13 ++++++++-----
 mm/slub.c                                        | 12 ++++++++++--
 mm/vmalloc.c                                     | 16 +++++++++++++---
 samples/vfio-mdev/mdpy.c                         |  2 +-
 scripts/checkpatch.pl                            |  2 +-
 tools/build/feature/test-sync-compare-and-swap.c |  2 +-
 tools/vm/Makefile                                |  2 ++
 16 files changed, 70 insertions(+), 27 deletions(-)
* incoming
@ 2020-04-12  7:41 Andrew Morton

From: Andrew Morton @ 2020-04-12 7:41 UTC
To: Linus Torvalds; +Cc: mm-commits, linux-mm

A straggler.  This patch caused a lot of build errors on a lot of
architectures for a long time, but Anshuman believes it's all fixed up
now.

1 patch, based on GIT b032227c62939b5481bcd45442b36dfa263f4a7c.

Anshuman Khandual <anshuman.khandual@arm.com>:
  mm/debug: add tests validating architecture page table helpers

 Documentation/features/debug/debug-vm-pgtable/arch-support.txt |  34
 arch/arc/Kconfig                                               |   1
 arch/arm64/Kconfig                                             |   1
 arch/powerpc/Kconfig                                           |   1
 arch/s390/Kconfig                                              |   1
 arch/x86/Kconfig                                               |   1
 arch/x86/include/asm/pgtable_64.h                              |   6
 include/linux/mmdebug.h                                        |   5
 init/main.c                                                    |   2
 lib/Kconfig.debug                                              |  26
 mm/Makefile                                                    |   1
 mm/debug_vm_pgtable.c                                          | 392 ++++++++++
 12 files changed, 471 insertions(+)
* incoming @ 2020-04-10 21:30 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-04-10 21:30 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm Almost all of the rest of MM. Various other things. 35 patches, based on c0cc271173b2e1c2d8d0ceaef14e4dfa79eefc0d. Subsystems affected by this patch series: hfs mm/memcg mm/slab-generic mm/slab mm/pagealloc mm/gup ocfs2 mm/hugetlb mm/pagemap mm/memremap kmod misc seqfile Subsystem: hfs Simon Gander <simon@tuxera.com>: hfsplus: fix crash and filesystem corruption when deleting files Subsystem: mm/memcg Jakub Kicinski <kuba@kernel.org>: mm, memcg: do not high throttle allocators based on wraparound Subsystem: mm/slab-generic Qiujun Huang <hqjagain@gmail.com>: mm, slab_common: fix a typo in comment "eariler"->"earlier" Subsystem: mm/slab Mauro Carvalho Chehab <mchehab+huawei@kernel.org>: docs: mm: slab.h: fix a broken cross-reference Subsystem: mm/pagealloc Randy Dunlap <rdunlap@infradead.org>: mm/page_alloc.c: fix kernel-doc warning Jason Yan <yanaijie@huawei.com>: mm/page_alloc: make pcpu_drain_mutex and pcpu_drain static Subsystem: mm/gup Miles Chen <miles.chen@mediatek.com>: mm/gup: fix null pointer dereference detected by coverity Subsystem: ocfs2 Changwei Ge <chge@linux.alibaba.com>: ocfs2: no need try to truncate file beyond i_size Subsystem: mm/hugetlb Aslan Bakirov <aslan@fb.com>: mm: cma: NUMA node interface Roman Gushchin <guro@fb.com>: mm: hugetlb: optionally allocate gigantic hugepages using cma Subsystem: mm/pagemap Jaewon Kim <jaewon31.kim@samsung.com>: mm/mmap.c: initialize align_offset explicitly for vm_unmapped_area Arjun Roy <arjunroy@google.com>: mm/memory.c: refactor insert_page to prepare for batched-lock insert mm: bring sparc pte_index() semantics inline with other platforms mm: define pte_index as macro for x86 mm/memory.c: add vm_insert_pages() Anshuman Khandual <anshuman.khandual@arm.com>: mm/vma: define a default value for VM_DATA_DEFAULT_FLAGS 
mm/vma: introduce VM_ACCESS_FLAGS mm/special: create generic fallbacks for pte_special() and pte_mkspecial() Subsystem: mm/memremap Logan Gunthorpe <logang@deltatee.com>: Patch series "Allow setting caching mode in arch_add_memory() for P2PDMA", v4: mm/memory_hotplug: drop the flags field from struct mhp_restrictions mm/memory_hotplug: rename mhp_restrictions to mhp_params x86/mm: thread pgprot_t through init_memory_mapping() x86/mm: introduce __set_memory_prot() powerpc/mm: thread pgprot_t through create_section_mapping() mm/memory_hotplug: add pgprot_t to mhp_params mm/memremap: set caching mode for PCI P2PDMA memory to WC Subsystem: kmod Eric Biggers <ebiggers@google.com>: Patch series "module autoloading fixes and cleanups", v5: kmod: make request_module() return an error when autoloading is disabled fs/filesystems.c: downgrade user-reachable WARN_ONCE() to pr_warn_once() docs: admin-guide: document the kernel.modprobe sysctl selftests: kmod: fix handling test numbers above 9 selftests: kmod: test disabling module autoloading Subsystem: misc Pali Rohár <pali@kernel.org>: change email address for Pali Rohár kbuild test robot <lkp@intel.com>: drivers/dma/tegra20-apb-dma.c: fix platform_get_irq.cocci warnings Subsystem: seqfile Vasily Averin <vvs@virtuozzo.com>: Patch series "seq_file .next functions should increase position index": fs/seq_file.c: seq_read(): add info message about buggy .next functions kernel/gcov/fs.c: gcov_seq_next() should increase position index ipc/util.c: sysvipc_find_ipc() should increase position index Documentation/ABI/testing/sysfs-platform-dell-laptop | 8 Documentation/admin-guide/kernel-parameters.txt | 8 Documentation/admin-guide/sysctl/kernel.rst | 21 ++ MAINTAINERS | 16 - arch/alpha/include/asm/page.h | 3 arch/alpha/include/asm/pgtable.h | 2 arch/arc/include/asm/page.h | 2 arch/arm/include/asm/page.h | 4 arch/arm/include/asm/pgtable-2level.h | 2 arch/arm/include/asm/pgtable.h | 15 - arch/arm/mach-omap2/omap-secure.c | 2 
arch/arm/mach-omap2/omap-secure.h | 2 arch/arm/mach-omap2/omap-smc.S | 2 arch/arm/mm/fault.c | 2 arch/arm/mm/mmu.c | 14 + arch/arm64/include/asm/page.h | 4 arch/arm64/mm/fault.c | 2 arch/arm64/mm/init.c | 6 arch/arm64/mm/mmu.c | 7 arch/c6x/include/asm/page.h | 5 arch/csky/include/asm/page.h | 3 arch/csky/include/asm/pgtable.h | 3 arch/h8300/include/asm/page.h | 2 arch/hexagon/include/asm/page.h | 3 arch/hexagon/include/asm/pgtable.h | 2 arch/ia64/include/asm/page.h | 5 arch/ia64/include/asm/pgtable.h | 2 arch/ia64/mm/init.c | 7 arch/m68k/include/asm/mcf_pgtable.h | 10 - arch/m68k/include/asm/motorola_pgtable.h | 2 arch/m68k/include/asm/page.h | 3 arch/m68k/include/asm/sun3_pgtable.h | 2 arch/microblaze/include/asm/page.h | 2 arch/microblaze/include/asm/pgtable.h | 4 arch/mips/include/asm/page.h | 5 arch/mips/include/asm/pgtable.h | 44 +++- arch/nds32/include/asm/page.h | 3 arch/nds32/include/asm/pgtable.h | 9 - arch/nds32/mm/fault.c | 2 arch/nios2/include/asm/page.h | 3 arch/nios2/include/asm/pgtable.h | 3 arch/openrisc/include/asm/page.h | 5 arch/openrisc/include/asm/pgtable.h | 2 arch/parisc/include/asm/page.h | 3 arch/parisc/include/asm/pgtable.h | 2 arch/powerpc/include/asm/book3s/64/hash.h | 3 arch/powerpc/include/asm/book3s/64/radix.h | 3 arch/powerpc/include/asm/page.h | 9 - arch/powerpc/include/asm/page_64.h | 7 arch/powerpc/include/asm/sparsemem.h | 3 arch/powerpc/mm/book3s64/hash_utils.c | 5 arch/powerpc/mm/book3s64/pgtable.c | 7 arch/powerpc/mm/book3s64/pkeys.c | 2 arch/powerpc/mm/book3s64/radix_pgtable.c | 18 +- arch/powerpc/mm/mem.c | 12 - arch/riscv/include/asm/page.h | 3 arch/s390/include/asm/page.h | 3 arch/s390/mm/fault.c | 2 arch/s390/mm/init.c | 9 - arch/sh/include/asm/page.h | 3 arch/sh/mm/init.c | 7 arch/sparc/include/asm/page_32.h | 3 arch/sparc/include/asm/page_64.h | 3 arch/sparc/include/asm/pgtable_32.h | 7 arch/sparc/include/asm/pgtable_64.h | 10 - arch/um/include/asm/pgtable.h | 10 - arch/unicore32/include/asm/page.h | 3 
arch/unicore32/include/asm/pgtable.h | 3 arch/unicore32/mm/fault.c | 2 arch/x86/include/asm/page_types.h | 7 arch/x86/include/asm/pgtable.h | 6 arch/x86/include/asm/set_memory.h | 1 arch/x86/kernel/amd_gart_64.c | 3 arch/x86/kernel/setup.c | 4 arch/x86/mm/init.c | 9 - arch/x86/mm/init_32.c | 19 +- arch/x86/mm/init_64.c | 42 ++-- arch/x86/mm/mm_internal.h | 3 arch/x86/mm/pat/set_memory.c | 13 + arch/x86/mm/pkeys.c | 2 arch/x86/platform/uv/bios_uv.c | 3 arch/x86/um/asm/vm-flags.h | 10 - arch/xtensa/include/asm/page.h | 3 arch/xtensa/include/asm/pgtable.h | 3 drivers/char/hw_random/omap3-rom-rng.c | 4 drivers/dma/tegra20-apb-dma.c | 1 drivers/hwmon/dell-smm-hwmon.c | 4 drivers/platform/x86/dell-laptop.c | 4 drivers/platform/x86/dell-rbtn.c | 4 drivers/platform/x86/dell-rbtn.h | 2 drivers/platform/x86/dell-smbios-base.c | 4 drivers/platform/x86/dell-smbios-smm.c | 2 drivers/platform/x86/dell-smbios.h | 2 drivers/platform/x86/dell-smo8800.c | 2 drivers/platform/x86/dell-wmi.c | 4 drivers/power/supply/bq2415x_charger.c | 4 drivers/power/supply/bq27xxx_battery.c | 2 drivers/power/supply/isp1704_charger.c | 2 drivers/power/supply/rx51_battery.c | 4 drivers/staging/gasket/gasket_core.c | 2 fs/filesystems.c | 4 fs/hfsplus/attributes.c | 4 fs/ocfs2/alloc.c | 4 fs/seq_file.c | 7 fs/udf/ecma_167.h | 2 fs/udf/osta_udf.h | 2 include/linux/cma.h | 14 + include/linux/hugetlb.h | 12 + include/linux/memblock.h | 3 include/linux/memory_hotplug.h | 21 +- include/linux/mm.h | 34 +++ include/linux/power/bq2415x_charger.h | 2 include/linux/slab.h | 2 ipc/util.c | 2 kernel/gcov/fs.c | 2 kernel/kmod.c | 4 mm/cma.c | 16 + mm/gup.c | 3 mm/hugetlb.c | 109 ++++++++++++ mm/memblock.c | 2 mm/memcontrol.c | 3 mm/memory.c | 168 +++++++++++++++++-- mm/memory_hotplug.c | 13 - mm/memremap.c | 17 + mm/mmap.c | 4 mm/mprotect.c | 4 mm/page_alloc.c | 5 mm/slab_common.c | 2 tools/laptop/freefall/freefall.c | 2 tools/testing/selftests/kmod/kmod.sh | 43 ++++ 130 files changed, 710 insertions(+), 370 
deletions(-)
* incoming @ 2020-04-07 3:02 Andrew Morton

From: Andrew Morton @ 2020-04-07 3:02 UTC
To: Linus Torvalds; +Cc: linux-mm, mm-commits

- a lot more of MM, quite a bit more yet to come.
- various other subsystems

166 patches based on 7e63420847ae5f1036e4f7c42f0b3282e73efbc2.

Subsystems affected by this patch series: mm/memcg mm/pagemap mm/vmalloc
mm/pagealloc mm/migration mm/thp mm/ksm mm/madvise mm/virtio
mm/userfaultfd mm/memory-hotplug mm/shmem mm/rmap mm/zswap mm/zsmalloc
mm/cleanups procfs misc MAINTAINERS bitops lib checkpatch epoll binfmt
kallsyms reiserfs kmod gcov kconfig kcov ubsan fault-injection ipc

Subsystem: mm/memcg

Chris Down <chris@chrisdown.name>:
  mm, memcg: bypass high reclaim iteration for cgroup hierarchy root

Subsystem: mm/pagemap

Li Xinhai <lixinhai.lxh@gmail.com>:
Patch series "mm: Fix misuse of parent anon_vma in dup_mmap path":
  mm: don't prepare anon_vma if vma has VM_WIPEONFORK
  Revert "mm/rmap.c: reuse mergeable anon_vma as parent when fork"
  mm: set vm_next and vm_prev to NULL in vm_area_dup()

Anshuman Khandual <anshuman.khandual@arm.com>:
Patch series "mm/vma: Use all available wrappers when possible", v2:
  mm/vma: add missing VMA flag readable name for VM_SYNC
  mm/vma: make vma_is_accessible() available for general use
  mm/vma: replace all remaining open encodings with is_vm_hugetlb_page()
  mm/vma: replace all remaining open encodings with vma_is_anonymous()
  mm/vma: append unlikely() while testing VMA access permissions

Subsystem: mm/vmalloc

Qiujun Huang <hqjagain@gmail.com>:
  mm/vmalloc: fix a typo in comment

Subsystem: mm/pagealloc

Michal Hocko <mhocko@suse.com>:
  mm: make it clear that gfp reclaim modifiers are valid only for sleepable allocations

Subsystem: mm/migration

Wei Yang <richardw.yang@linux.intel.com>:
Patch series "cleanup on do_pages_move()", v5:
  mm/migrate.c: no need to check for i > start in do_pages_move()
  mm/migrate.c: wrap do_move_pages_to_node() and
store_status() mm/migrate.c: check pagelist in move_pages_and_store_status() mm/migrate.c: unify "not queued for migration" handling in do_pages_move() Yang Shi <yang.shi@linux.alibaba.com>: mm/migrate.c: migrate PG_readahead flag Subsystem: mm/thp David Rientjes <rientjes@google.com>: mm, shmem: add vmstat for hugepage fallback mm, thp: track fallbacks due to failed memcg charges separately "Matthew Wilcox (Oracle)" <willy@infradead.org>: include/linux/pagemap.h: optimise find_subpage for !THP mm: remove CONFIG_TRANSPARENT_HUGE_PAGECACHE Subsystem: mm/ksm Li Chen <chenli@uniontech.com>: mm/ksm.c: update get_user_pages() argument in comment Subsystem: mm/madvise Huang Ying <ying.huang@intel.com>: mm: code cleanup for MADV_FREE Subsystem: mm/virtio Alexander Duyck <alexander.h.duyck@linux.intel.com>: Patch series "mm / virtio: Provide support for free page reporting", v17: mm: adjust shuffle code to allow for future coalescing mm: use zone and order instead of free area in free_list manipulators mm: add function __putback_isolated_page mm: introduce Reported pages virtio-balloon: pull page poisoning config out of free page hinting virtio-balloon: add support for providing free page reports to host mm/page_reporting: rotate reported pages to the tail of the list mm/page_reporting: add budget limit on how many pages can be reported per pass mm/page_reporting: add free page reporting documentation David Hildenbrand <david@redhat.com>: virtio-balloon: switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM Subsystem: mm/userfaultfd Shaohua Li <shli@fb.com>: Patch series "userfaultfd: write protection support", v6: userfaultfd: wp: add helper for writeprotect check Andrea Arcangeli <aarcange@redhat.com>: userfaultfd: wp: hook userfault handler to write protection fault userfaultfd: wp: add WP pagetable tracking to x86 userfaultfd: wp: userfaultfd_pte/huge_pmd_wp() helpers userfaultfd: wp: add UFFDIO_COPY_MODE_WP Peter Xu <peterx@redhat.com>: mm: merge parameters 
for change_protection() userfaultfd: wp: apply _PAGE_UFFD_WP bit userfaultfd: wp: drop _PAGE_UFFD_WP properly when fork userfaultfd: wp: add pmd_swp_*uffd_wp() helpers userfaultfd: wp: support swap and page migration khugepaged: skip collapse if uffd-wp detected Shaohua Li <shli@fb.com>: userfaultfd: wp: support write protection for userfault vma range Andrea Arcangeli <aarcange@redhat.com>: userfaultfd: wp: add the writeprotect API to userfaultfd ioctl Shaohua Li <shli@fb.com>: userfaultfd: wp: enabled write protection in userfaultfd API Peter Xu <peterx@redhat.com>: userfaultfd: wp: don't wake up when doing write protect Martin Cracauer <cracauer@cons.org>: userfaultfd: wp: UFFDIO_REGISTER_MODE_WP documentation update Peter Xu <peterx@redhat.com>: userfaultfd: wp: declare _UFFDIO_WRITEPROTECT conditionally userfaultfd: selftests: refactor statistics userfaultfd: selftests: add write-protect test Subsystem: mm/memory-hotplug David Hildenbrand <david@redhat.com>: Patch series "mm: drop superfluous section checks when onlining/offlining": drivers/base/memory.c: drop section_count drivers/base/memory.c: drop pages_correctly_probed() mm/page_ext.c: drop pfn_present() check when onlining Baoquan He <bhe@redhat.com>: mm/memory_hotplug.c: only respect mem= parameter during boot stage David Hildenbrand <david@redhat.com>: mm/memory_hotplug.c: simplify calculation of number of pages in __remove_pages() mm/memory_hotplug.c: cleanup __add_pages() Baoquan He <bhe@redhat.com>: Patch series "mm/hotplug: Only use subsection map for VMEMMAP", v4: mm/sparse.c: introduce new function fill_subsection_map() mm/sparse.c: introduce a new function clear_subsection_map() mm/sparse.c: only use subsection map in VMEMMAP case mm/sparse.c: add note about only VMEMMAP supporting sub-section hotplug mm/sparse.c: move subsection_map related functions together David Hildenbrand <david@redhat.com>: Patch series "mm/memory_hotplug: allow to specify a default online_type", v3: drivers/base/memory: 
rename MMOP_ONLINE_KEEP to MMOP_ONLINE drivers/base/memory: map MMOP_OFFLINE to 0 drivers/base/memory: store mapping between MMOP_* and string in an array powernv/memtrace: always online added memory blocks hv_balloon: don't check for memhp_auto_online manually mm/memory_hotplug: unexport memhp_auto_online mm/memory_hotplug: convert memhp_auto_online to store an online_type mm/memory_hotplug: allow to specify a default online_type chenqiwu <chenqiwu@xiaomi.com>: mm/memory_hotplug.c: use __pfn_to_section() instead of open-coding Subsystem: mm/shmem Kees Cook <keescook@chromium.org>: mm/shmem.c: distribute switch variables for initialization Mateusz Nosek <mateusznosek0@gmail.com>: mm/shmem.c: clean code by removing unnecessary assignment Hugh Dickins <hughd@google.com>: mm: huge tmpfs: try to split_huge_page() when punching hole Subsystem: mm/rmap Palmer Dabbelt <palmerdabbelt@google.com>: mm: prevent a warning when casting void* -> enum Subsystem: mm/zswap "Maciej S. Szmigiero" <mail@maciej.szmigiero.name>: mm/zswap: allow setting default status, compressor and allocator in Kconfig Subsystem: mm/zsmalloc Subsystem: mm/cleanups Jules Irenge <jbi.octave@gmail.com>: mm/compaction: add missing annotation for compact_lock_irqsave mm/hugetlb: add missing annotation for gather_surplus_pages() mm/mempolicy: add missing annotation for queue_pages_pmd() mm/slub: add missing annotation for get_map() mm/slub: add missing annotation for put_map() mm/zsmalloc: add missing annotation for migrate_read_lock() mm/zsmalloc: add missing annotation for migrate_read_unlock() mm/zsmalloc: add missing annotation for pin_tag() mm/zsmalloc: add missing annotation for unpin_tag() chenqiwu <chenqiwu@xiaomi.com>: mm: fix ambiguous comments for better code readability Mateusz Nosek <mateusznosek0@gmail.com>: mm/mm_init.c: clean code. 
Use BUILD_BUG_ON when comparing compile time constant Joe Perches <joe@perches.com>: mm: use fallthrough; Steven Price <steven.price@arm.com>: include/linux/swapops.h: correct guards for non_swap_entry() Ira Weiny <ira.weiny@intel.com>: include/linux/memremap.h: remove stale comments Mateusz Nosek <mateusznosek0@gmail.com>: mm/dmapool.c: micro-optimisation remove unnecessary branch Waiman Long <longman@redhat.com>: mm: remove dummy struct bootmem_data/bootmem_data_t Subsystem: procfs Jules Irenge <jbi.octave@gmail.com>: fs/proc/inode.c: annotate close_pdeo() for sparse Alexey Dobriyan <adobriyan@gmail.com>: proc: faster open/read/close with "permanent" files proc: speed up /proc/*/statm "Matthew Wilcox (Oracle)" <willy@infradead.org>: proc: inline vma_stop into m_stop proc: remove m_cache_vma proc: use ppos instead of m->version seq_file: remove m->version proc: inline m_next_vma into m_next Subsystem: misc Michal Simek <michal.simek@xilinx.com>: asm-generic: fix unistd_32.h generation format Nathan Chancellor <natechancellor@gmail.com>: kernel/extable.c: use address-of operator on section symbols Masahiro Yamada <masahiroy@kernel.org>: sparc,x86: vdso: remove meaningless undefining CONFIG_OPTIMIZE_INLINING compiler: remove CONFIG_OPTIMIZE_INLINING entirely Vegard Nossum <vegard.nossum@oracle.com>: compiler.h: fix error in BUILD_BUG_ON() reporting Subsystem: MAINTAINERS Joe Perches <joe@perches.com>: MAINTAINERS: list the section entries in the preferred order Subsystem: bitops Josh Poimboeuf <jpoimboe@redhat.com>: bitops: always inline sign extension helpers Subsystem: lib Konstantin Khlebnikov <khlebnikov@yandex-team.ru>: lib/test_lockup: test module to generate lockups Colin Ian King <colin.king@canonical.com>: lib/test_lockup.c: fix spelling mistake "iteraions" -> "iterations" Konstantin Khlebnikov <khlebnikov@yandex-team.ru>: lib/test_lockup.c: add parameters for locking generic vfs locks "Gustavo A. R. 
Silva" <gustavo@embeddedor.com>: lib/bch.c: replace zero-length array with flexible-array member lib/ts_bm.c: replace zero-length array with flexible-array member lib/ts_fsm.c: replace zero-length array with flexible-array member lib/ts_kmp.c: replace zero-length array with flexible-array member Geert Uytterhoeven <geert+renesas@glider.be>: lib/scatterlist: fix sg_copy_buffer() kerneldoc Kees Cook <keescook@chromium.org>: lib: test_stackinit.c: XFAIL switch variable init tests Alexander Potapenko <glider@google.com>: lib/stackdepot.c: check depot_index before accessing the stack slab lib/stackdepot.c: fix a condition in stack_depot_fetch() lib/stackdepot.c: build with -fno-builtin kasan: stackdepot: move filter_irq_stacks() to stackdepot.c Qian Cai <cai@lca.pw>: percpu_counter: fix a data race at vm_committed_as Andy Shevchenko <andriy.shevchenko@linux.intel.com>: lib/test_bitmap.c: make use of EXP2_IN_BITS chenqiwu <chenqiwu@xiaomi.com>: lib/rbtree: fix coding style of assignments Dan Carpenter <dan.carpenter@oracle.com>: lib/test_kmod.c: remove a NULL test Rikard Falkeborn <rikard.falkeborn@gmail.com>: linux/bits.h: add compile time sanity check of GENMASK inputs Chris Wilson <chris@chris-wilson.co.uk>: lib/list: prevent compiler reloads inside 'safe' list iteration Nathan Chancellor <natechancellor@gmail.com>: lib/dynamic_debug.c: use address-of operator on section symbols Subsystem: checkpatch Joe Perches <joe@perches.com>: checkpatch: remove email address comment from email address comparisons Lubomir Rintel <lkundrak@v3.sk>: checkpatch: check SPDX tags in YAML files John Hubbard <jhubbard@nvidia.com>: checkpatch: support "base-commit:" format Joe Perches <joe@perches.com>: checkpatch: prefer fallthrough; over fallthrough comments Antonio Borneo <borneo.antonio@gmail.com>: checkpatch: fix minor typo and mixed space+tab in indentation checkpatch: fix multiple const * types checkpatch: add command-line option for TAB size Joe Perches <joe@perches.com>: 
checkpatch: improve Gerrit Change-Id: test Lubomir Rintel <lkundrak@v3.sk>: checkpatch: check proper licensing of Devicetree bindings Joe Perches <joe@perches.com>: checkpatch: avoid warning about uninitialized_var() Subsystem: epoll Roman Penyaev <rpenyaev@suse.de>: kselftest: introduce new epoll test case Jason Baron <jbaron@akamai.com>: fs/epoll: make nesting accounting safe for -rt kernel Subsystem: binfmt Alexey Dobriyan <adobriyan@gmail.com>: fs/binfmt_elf.c: delete "loc" variable fs/binfmt_elf.c: allocate less for static executable fs/binfmt_elf.c: don't free interpreter's ELF pheaders on common path Subsystem: kallsyms Will Deacon <will@kernel.org>: Patch series "Unexport kallsyms_lookup_name() and kallsyms_on_each_symbol()": samples/hw_breakpoint: drop HW_BREAKPOINT_R when reporting writes samples/hw_breakpoint: drop use of kallsyms_lookup_name() kallsyms: unexport kallsyms_lookup_name() and kallsyms_on_each_symbol() Subsystem: reiserfs Colin Ian King <colin.king@canonical.com>: reiserfs: clean up several indentation issues Subsystem: kmod Qiujun Huang <hqjagain@gmail.com>: kernel/kmod.c: fix a typo "assuems" -> "assumes" Subsystem: gcov "Gustavo A. R. 
Silva" <gustavo@embeddedor.com>: gcov: gcc_4_7: replace zero-length array with flexible-array member gcov: gcc_3_4: replace zero-length array with flexible-array member kernel/gcov/fs.c: replace zero-length array with flexible-array member Subsystem: kconfig Krzysztof Kozlowski <krzk@kernel.org>: init/Kconfig: clean up ANON_INODES and old IO schedulers options Subsystem: kcov Andrey Konovalov <andreyknvl@google.com>: Patch series "kcov: collect coverage from usb soft interrupts", v4: kcov: cleanup debug messages kcov: fix potential use-after-free in kcov_remote_start kcov: move t->kcov assignments into kcov_start/stop kcov: move t->kcov_sequence assignment kcov: use t->kcov_mode as enabled indicator kcov: collect coverage from interrupts usb: core: kcov: collect coverage from usb complete callback Subsystem: ubsan Kees Cook <keescook@chromium.org>: Patch series "ubsan: Split out bounds checker", v5: ubsan: add trap instrumentation option ubsan: split "bounds" checker from other options drivers/misc/lkdtm/bugs.c: add arithmetic overflow and array bounds checks ubsan: check panic_on_warn kasan: unset panic_on_warn before calling panic() ubsan: include bug type in report header Subsystem: fault-injection Qiujun Huang <hqjagain@gmail.com>: lib/Kconfig.debug: fix a typo "capabilitiy" -> "capability" Subsystem: ipc Somala Swaraj <somalaswaraj@gmail.com>: ipc/mqueue.c: fix a brace coding style issue Jason Yan <yanaijie@huawei.com>: ipc/shm.c: make compat_ksys_shmctl() static Documentation/admin-guide/kernel-parameters.txt | 13 Documentation/admin-guide/mm/transhuge.rst | 14 Documentation/admin-guide/mm/userfaultfd.rst | 51 Documentation/dev-tools/kcov.rst | 17 Documentation/vm/free_page_reporting.rst | 41 Documentation/vm/zswap.rst | 20 MAINTAINERS | 35 arch/alpha/include/asm/mmzone.h | 2 arch/alpha/kernel/syscalls/syscallhdr.sh | 2 arch/csky/mm/fault.c | 4 arch/ia64/kernel/syscalls/syscallhdr.sh | 2 arch/ia64/kernel/vmlinux.lds.S | 2 arch/m68k/mm/fault.c | 4 
arch/microblaze/kernel/syscalls/syscallhdr.sh | 2 arch/mips/kernel/syscalls/syscallhdr.sh | 3 arch/mips/mm/fault.c | 4 arch/nds32/kernel/vmlinux.lds.S | 1 arch/parisc/kernel/syscalls/syscallhdr.sh | 2 arch/powerpc/kernel/syscalls/syscallhdr.sh | 3 arch/powerpc/kvm/e500_mmu_host.c | 2 arch/powerpc/mm/fault.c | 2 arch/powerpc/platforms/powernv/memtrace.c | 14 arch/sh/kernel/syscalls/syscallhdr.sh | 2 arch/sh/mm/fault.c | 2 arch/sparc/kernel/syscalls/syscallhdr.sh | 2 arch/sparc/vdso/vdso32/vclock_gettime.c | 4 arch/x86/Kconfig | 1 arch/x86/configs/i386_defconfig | 1 arch/x86/configs/x86_64_defconfig | 1 arch/x86/entry/vdso/vdso32/vclock_gettime.c | 4 arch/x86/include/asm/pgtable.h | 67 + arch/x86/include/asm/pgtable_64.h | 8 arch/x86/include/asm/pgtable_types.h | 12 arch/x86/mm/fault.c | 2 arch/xtensa/kernel/syscalls/syscallhdr.sh | 2 drivers/base/memory.c | 138 -- drivers/hv/hv_balloon.c | 25 drivers/misc/lkdtm/bugs.c | 75 + drivers/misc/lkdtm/core.c | 3 drivers/misc/lkdtm/lkdtm.h | 3 drivers/usb/core/hcd.c | 3 drivers/virtio/Kconfig | 1 drivers/virtio/virtio_balloon.c | 190 ++- fs/binfmt_elf.c | 56 fs/eventpoll.c | 64 - fs/proc/array.c | 39 fs/proc/cpuinfo.c | 1 fs/proc/generic.c | 31 fs/proc/inode.c | 188 ++- fs/proc/internal.h | 6 fs/proc/kmsg.c | 1 fs/proc/stat.c | 1 fs/proc/task_mmu.c | 97 - fs/reiserfs/do_balan.c | 2 fs/reiserfs/ioctl.c | 11 fs/reiserfs/namei.c | 10 fs/seq_file.c | 28 fs/userfaultfd.c | 116 + include/asm-generic/pgtable.h | 1 include/asm-generic/pgtable_uffd.h | 66 + include/asm-generic/tlb.h | 3 include/linux/bitops.h | 4 include/linux/bits.h | 22 include/linux/compiler.h | 2 include/linux/compiler_types.h | 11 include/linux/gfp.h | 2 include/linux/huge_mm.h | 2 include/linux/list.h | 50 include/linux/memory.h | 1 include/linux/memory_hotplug.h | 13 include/linux/memremap.h | 2 include/linux/mm.h | 25 include/linux/mm_inline.h | 15 include/linux/mm_types.h | 4 include/linux/mmzone.h | 47 include/linux/page-flags.h | 16 
include/linux/page_reporting.h | 26 include/linux/pagemap.h | 4 include/linux/percpu_counter.h | 4 include/linux/proc_fs.h | 17 include/linux/sched.h | 3 include/linux/seq_file.h | 1 include/linux/shmem_fs.h | 10 include/linux/stackdepot.h | 2 include/linux/swapops.h | 5 include/linux/userfaultfd_k.h | 42 include/linux/vm_event_item.h | 5 include/trace/events/huge_memory.h | 1 include/trace/events/mmflags.h | 1 include/trace/events/vmscan.h | 2 include/uapi/linux/userfaultfd.h | 40 include/uapi/linux/virtio_balloon.h | 1 init/Kconfig | 8 ipc/mqueue.c | 5 ipc/shm.c | 2 ipc/util.c | 1 kernel/configs/tiny.config | 1 kernel/events/core.c | 3 kernel/extable.c | 3 kernel/fork.c | 10 kernel/gcov/fs.c | 2 kernel/gcov/gcc_3_4.c | 6 kernel/gcov/gcc_4_7.c | 2 kernel/kallsyms.c | 2 kernel/kcov.c | 282 +++- kernel/kmod.c | 2 kernel/module.c | 1 kernel/sched/fair.c | 2 lib/Kconfig.debug | 35 lib/Kconfig.ubsan | 51 lib/Makefile | 8 lib/bch.c | 2 lib/dynamic_debug.c | 2 lib/rbtree.c | 4 lib/scatterlist.c | 2 lib/stackdepot.c | 39 lib/test_bitmap.c | 2 lib/test_kmod.c | 2 lib/test_lockup.c | 601 +++++++++- lib/test_stackinit.c | 28 lib/ts_bm.c | 2 lib/ts_fsm.c | 2 lib/ts_kmp.c | 2 lib/ubsan.c | 47 mm/Kconfig | 135 ++ mm/Makefile | 1 mm/compaction.c | 3 mm/dmapool.c | 4 mm/filemap.c | 14 mm/gup.c | 9 mm/huge_memory.c | 36 mm/hugetlb.c | 1 mm/hugetlb_cgroup.c | 6 mm/internal.h | 2 mm/kasan/common.c | 23 mm/kasan/report.c | 10 mm/khugepaged.c | 39 mm/ksm.c | 5 mm/list_lru.c | 2 mm/memcontrol.c | 5 mm/memory-failure.c | 2 mm/memory.c | 42 mm/memory_hotplug.c | 53 mm/mempolicy.c | 11 mm/migrate.c | 122 +- mm/mm_init.c | 2 mm/mmap.c | 10 mm/mprotect.c | 76 - mm/page_alloc.c | 174 ++ mm/page_ext.c | 5 mm/page_isolation.c | 6 mm/page_reporting.c | 384 ++++++ mm/page_reporting.h | 54 mm/rmap.c | 23 mm/shmem.c | 168 +- mm/shuffle.c | 12 mm/shuffle.h | 6 mm/slab_common.c | 1 mm/slub.c | 3 mm/sparse.c | 236 ++- mm/swap.c | 20 mm/swapfile.c | 1 mm/userfaultfd.c | 98 + mm/vmalloc.c | 2 
mm/vmscan.c | 12 mm/vmstat.c | 3 mm/zsmalloc.c | 10 mm/zswap.c | 24 samples/hw_breakpoint/data_breakpoint.c | 11 scripts/Makefile.ubsan | 16 scripts/checkpatch.pl | 155 +- tools/lib/rbtree.c | 4 tools/testing/selftests/filesystems/epoll/epoll_wakeup_test.c | 67 + tools/testing/selftests/vm/userfaultfd.c | 233 +++ 174 files changed, 3990 insertions(+), 1399 deletions(-)
* incoming @ 2020-04-02 4:01 Andrew Morton

From: Andrew Morton @ 2020-04-02 4:01 UTC
To: Linus Torvalds; +Cc: linux-mm, mm-commits

A large amount of MM, plenty more to come.

155 patches, based on GIT 1a323ea5356edbb3073dc59d51b9e6b86908857d

Subsystems affected by this patch series: tools kthread kbuild scripts
ocfs2 vfs mm/slub mm/kmemleak mm/pagecache mm/gup mm/swap mm/memcg
mm/pagemap mm/mremap mm/sparsemem mm/kasan mm/pagealloc mm/vmscan
mm/compaction mm/mempolicy mm/hugetlbfs mm/hugetlb

Subsystem: tools

David Ahern <dsahern@kernel.org>:
  tools/accounting/getdelays.c: fix netlink attribute length

Subsystem: kthread

Petr Mladek <pmladek@suse.com>:
  kthread: mark timer used by delayed kthread works as IRQ safe

Subsystem: kbuild

Masahiro Yamada <masahiroy@kernel.org>:
  asm-generic: make more kernel-space headers mandatory

Subsystem: scripts

Jonathan Neuschäfer <j.neuschaefer@gmx.net>:
  scripts/spelling.txt: add syfs/sysfs pattern

Colin Ian King <colin.king@canonical.com>:
  scripts/spelling.txt: add more spellings to spelling.txt

Subsystem: ocfs2

Alex Shi <alex.shi@linux.alibaba.com>:
  ocfs2: remove FS_OCFS2_NM
  ocfs2: remove unused macros
  ocfs2: use OCFS2_SEC_BITS in macro
  ocfs2: remove dlm_lock_is_remote

wangyan <wangyan122@huawei.com>:
  ocfs2: there is no need to log twice in several functions
  ocfs2: correct annotation from "l_next_rec" to "l_next_free_rec"

Alex Shi <alex.shi@linux.alibaba.com>:
  ocfs2: remove useless err

Jules Irenge <jbi.octave@gmail.com>:
  ocfs2: Add missing annotations for ocfs2_refcount_cache_lock() and ocfs2_refcount_cache_unlock()

"Gustavo A. R.
Silva" <gustavo@embeddedor.com>: ocfs2: replace zero-length array with flexible-array member ocfs2: cluster: replace zero-length array with flexible-array member ocfs2: dlm: replace zero-length array with flexible-array member ocfs2: ocfs2_fs.h: replace zero-length array with flexible-array member wangjian <wangjian161@huawei.com>: ocfs2: roll back the reference count modification of the parent directory if an error occurs Takashi Iwai <tiwai@suse.de>: ocfs2: use scnprintf() for avoiding potential buffer overflow "Matthew Wilcox (Oracle)" <willy@infradead.org>: ocfs2: use memalloc_nofs_save instead of memalloc_noio_save Subsystem: vfs Kees Cook <keescook@chromium.org>: fs_parse: Remove pr_notice() about each validation Subsystem: mm/slub chenqiwu <chenqiwu@xiaomi.com>: mm/slub.c: replace cpu_slab->partial with wrapped APIs mm/slub.c: replace kmem_cache->cpu_partial with wrapped APIs Kees Cook <keescook@chromium.org>: slub: improve bit diffusion for freelist ptr obfuscation slub: relocate freelist pointer to middle of object Vlastimil Babka <vbabka@suse.cz>: Revert "topology: add support for node_to_mem_node() to determine the fallback node" Subsystem: mm/kmemleak Nathan Chancellor <natechancellor@gmail.com>: mm/kmemleak.c: use address-of operator on section symbols Qian Cai <cai@lca.pw>: mm/Makefile: disable KCSAN for kmemleak Subsystem: mm/pagecache Jan Kara <jack@suse.cz>: mm/filemap.c: don't bother dropping mmap_sem for zero size readahead Mauricio Faria de Oliveira <mfo@canonical.com>: mm/page-writeback.c: write_cache_pages(): deduplicate identical checks Xianting Tian <xianting_tian@126.com>: mm/filemap.c: clear page error before actual read Souptick Joarder <jrdr.linux@gmail.com>: mm/filemap.c: remove unused argument from shrink_readahead_size_eio() "Matthew Wilcox (Oracle)" <willy@infradead.org>: mm/filemap.c: use vm_fault error code directly include/linux/pagemap.h: rename arguments to find_subpage mm/page-writeback.c: use VM_BUG_ON_PAGE in 
clear_page_dirty_for_io mm/filemap.c: unexport find_get_entry mm/filemap.c: rewrite pagecache_get_page documentation Subsystem: mm/gup John Hubbard <jhubbard@nvidia.com>: Patch series "mm/gup: track FOLL_PIN pages", v6: mm/gup: split get_user_pages_remote() into two routines mm/gup: pass a flags arg to __gup_device_* functions mm: introduce page_ref_sub_return() mm/gup: pass gup flags to two more routines mm/gup: require FOLL_GET for get_user_pages_fast() mm/gup: track FOLL_PIN pages mm/gup: page->hpage_pinned_refcount: exact pin counts for huge pages mm/gup: /proc/vmstat: pin_user_pages (FOLL_PIN) reporting mm/gup_benchmark: support pin_user_pages() and related calls selftests/vm: run_vmtests: invoke gup_benchmark with basic FOLL_PIN coverage "Matthew Wilcox (Oracle)" <willy@infradead.org>: mm: improve dump_page() for compound pages John Hubbard <jhubbard@nvidia.com>: mm: dump_page(): additional diagnostics for huge pinned pages Claudio Imbrenda <imbrenda@linux.ibm.com>: mm/gup/writeback: add callbacks for inaccessible pages Pingfan Liu <kernelfans@gmail.com>: mm/gup: rename nr as nr_pinned in get_user_pages_fast() mm/gup: fix omission of check on FOLL_LONGTERM in gup fast path Subsystem: mm/swap Chen Wandun <chenwandun@huawei.com>: mm/swapfile.c: fix comments for swapcache_prepare Wei Yang <richardw.yang@linux.intel.com>: mm/swap.c: not necessary to export __pagevec_lru_add() Qian Cai <cai@lca.pw>: mm/swapfile: fix data races in try_to_unuse() Wei Yang <richard.weiyang@linux.alibaba.com>: mm/swap_slots.c: assign|reset cache slot by value directly Yang Shi <yang.shi@linux.alibaba.com>: mm: swap: make page_evictable() inline mm: swap: use smp_mb__after_atomic() to order LRU bit set Wei Yang <richard.weiyang@gmail.com>: mm/swap_state.c: use the same way to count page in [add_to|delete_from]_swap_cache Subsystem: mm/memcg Yafang Shao <laoar.shao@gmail.com>: mm, memcg: fix build error around the usage of kmem_caches Kirill Tkhai <ktkhai@virtuozzo.com>: 
mm/memcontrol.c: allocate shrinker_map on appropriate NUMA node Roman Gushchin <guro@fb.com>: mm: memcg/slab: use mem_cgroup_from_obj() Patch series "mm: memcg: kmem API cleanup", v2: mm: kmem: cleanup (__)memcg_kmem_charge_memcg() arguments mm: kmem: cleanup memcg_kmem_uncharge_memcg() arguments mm: kmem: rename memcg_kmem_(un)charge() into memcg_kmem_(un)charge_page() mm: kmem: switch to nr_pages in (__)memcg_kmem_charge_memcg() mm: memcg/slab: cache page number in memcg_(un)charge_slab() mm: kmem: rename (__)memcg_kmem_(un)charge_memcg() to __memcg_kmem_(un)charge() Johannes Weiner <hannes@cmpxchg.org>: Patch series "mm: memcontrol: recursive memory.low protection", v3: mm: memcontrol: fix memory.low proportional distribution mm: memcontrol: clean up and document effective low/min calculations mm: memcontrol: recursive memory.low protection Shakeel Butt <shakeelb@google.com>: memcg: css_tryget_online cleanups Vincenzo Frascino <vincenzo.frascino@arm.com>: mm/memcontrol.c: make mem_cgroup_id_get_many() __maybe_unused Chris Down <chris@chrisdown.name>: mm, memcg: prevent memory.high load/store tearing mm, memcg: prevent memory.max load tearing mm, memcg: prevent memory.low load/store tearing mm, memcg: prevent memory.min load/store tearing mm, memcg: prevent memory.swap.max load tearing mm, memcg: prevent mem_cgroup_protected store tearing Roman Gushchin <guro@fb.com>: mm: memcg: make memory.oom.group tolerable to task migration Subsystem: mm/pagemap Thomas Hellstrom <thellstrom@vmware.com>: mm/mapping_dirty_helpers: Update huge page-table entry callbacks Anshuman Khandual <anshuman.khandual@arm.com>: Patch series "mm/vma: some more minor changes", v2: mm/vma: move VM_NO_KHUGEPAGED into generic header mm/vma: make vma_is_foreign() available for general use mm/vma: make is_vma_temporary_stack() available for general use "Matthew Wilcox (Oracle)" <willy@infradead.org>: mm: add pagemap.h to the fine documentation Peter Xu <peterx@redhat.com>: Patch series "mm: Page 
fault enhancements", v6: mm/gup: rename "nonblocking" to "locked" where proper mm/gup: fix __get_user_pages() on fault retry of hugetlb mm: introduce fault_signal_pending() x86/mm: use helper fault_signal_pending() arc/mm: use helper fault_signal_pending() arm64/mm: use helper fault_signal_pending() powerpc/mm: use helper fault_signal_pending() sh/mm: use helper fault_signal_pending() mm: return faster for non-fatal signals in user mode faults userfaultfd: don't retake mmap_sem to emulate NOPAGE mm: introduce FAULT_FLAG_DEFAULT mm: introduce FAULT_FLAG_INTERRUPTIBLE mm: allow VM_FAULT_RETRY for multiple times mm/gup: allow VM_FAULT_RETRY for multiple times mm/gup: allow to react to fatal signals mm/userfaultfd: honor FAULT_FLAG_KILLABLE in fault path WANG Wenhu <wenhu.wang@vivo.com>: mm: clarify a confusing comment for remap_pfn_range() Wang Wenhu <wenhu.wang@vivo.com>: mm/memory.c: clarify a confusing comment for vm_iomap_memory Jaewon Kim <jaewon31.kim@samsung.com>: Patch series "mm: mmap: add mmap trace point", v3: mmap: remove inline of vm_unmapped_area mm: mmap: add trace point of vm_unmapped_area Subsystem: mm/mremap Brian Geffon <bgeffon@google.com>: mm/mremap: add MREMAP_DONTUNMAP to mremap() selftests: add MREMAP_DONTUNMAP selftest Subsystem: mm/sparsemem Wei Yang <richardw.yang@linux.intel.com>: mm/sparsemem: get address to page struct instead of address to pfn Pingfan Liu <kernelfans@gmail.com>: mm/sparse: rename pfn_present() to pfn_in_present_section() Baoquan He <bhe@redhat.com>: mm/sparse.c: use kvmalloc/kvfree to alloc/free memmap for the classic sparse mm/sparse.c: allocate memmap preferring the given node Subsystem: mm/kasan Walter Wu <walter-zh.wu@mediatek.com>: Patch series "fix the missing underflow in memory operation function", v4: kasan: detect negative size in memory operation function kasan: add test for invalid size in memmove Subsystem: mm/pagealloc Joel Savitz <jsavitz@redhat.com>: mm/page_alloc: increase default min_free_kbytes bound 
Mateusz Nosek <mateusznosek0@gmail.com>: mm, pagealloc: micro-optimisation: save two branches on hot page allocation path chenqiwu <chenqiwu@xiaomi.com>: mm/page_alloc.c: use free_area_empty() instead of open-coding Mateusz Nosek <mateusznosek0@gmail.com>: mm/page_alloc.c: micro-optimisation Remove unnecessary branch chenqiwu <chenqiwu@xiaomi.com>: mm/page_alloc: simplify page_is_buddy() for better code readability Subsystem: mm/vmscan Yang Shi <yang.shi@linux.alibaba.com>: mm: vmpressure: don't need call kfree if kstrndup fails mm: vmpressure: use mem_cgroup_is_root API mm: vmscan: replace open codings to NUMA_NO_NODE Wei Yang <richardw.yang@linux.intel.com>: mm/vmscan.c: remove cpu online notification for now Qian Cai <cai@lca.pw>: mm/vmscan.c: fix data races using kswapd_classzone_idx Mateusz Nosek <mateusznosek0@gmail.com>: mm/vmscan.c: Clean code by removing unnecessary assignment Kirill Tkhai <ktkhai@virtuozzo.com>: mm/vmscan.c: make may_enter_fs bool in shrink_page_list() Mateusz Nosek <mateusznosek0@gmail.com>: mm/vmscan.c: do_try_to_free_pages(): clean code by removing unnecessary assignment Michal Hocko <mhocko@suse.com>: selftests: vm: drop dependencies on page flags from mlock2 tests Subsystem: mm/compaction Rik van Riel <riel@surriel.com>: Patch series "fix THP migration for CMA allocations", v2: mm,compaction,cma: add alloc_contig flag to compact_control mm,thp,compaction,cma: allow THP migration for CMA allocations Vlastimil Babka <vbabka@suse.cz>: mm, compaction: fully assume capture is not NULL in compact_zone_order() Sebastian Andrzej Siewior <bigeasy@linutronix.de>: mm/compaction: really limit compact_unevictable_allowed to 0 and 1 mm/compaction: Disable compact_unevictable_allowed on RT Mateusz Nosek <mateusznosek0@gmail.com>: mm/compaction.c: clean code by removing unnecessary assignment Subsystem: mm/mempolicy Li Xinhai <lixinhai.lxh@gmail.com>: mm/mempolicy: support MPOL_MF_STRICT for huge page mapping mm/mempolicy: check hugepage migration 
is supported by arch in vma_migratable() Yang Shi <yang.shi@linux.alibaba.com>: mm: mempolicy: use VM_BUG_ON_VMA in queue_pages_test_walk() Randy Dunlap <rdunlap@infradead.org>: mm: mempolicy: require at least one nodeid for MPOL_PREFERRED Colin Ian King <colin.king@canonical.com>: mm/memblock.c: remove redundant assignment to variable max_addr Subsystem: mm/hugetlbfs Mike Kravetz <mike.kravetz@oracle.com>: Patch series "hugetlbfs: use i_mmap_rwsem for more synchronization", v2: hugetlbfs: use i_mmap_rwsem for more pmd sharing synchronization hugetlbfs: Use i_mmap_rwsem to address page fault/truncate race Subsystem: mm/hugetlb Mina Almasry <almasrymina@google.com>: hugetlb_cgroup: add hugetlb_cgroup reservation counter hugetlb_cgroup: add interface for charge/uncharge hugetlb reservations mm/hugetlb_cgroup: fix hugetlb_cgroup migration hugetlb_cgroup: add reservation accounting for private mappings hugetlb: disable region_add file_region coalescing hugetlb_cgroup: add accounting for shared mappings hugetlb_cgroup: support noreserve mappings hugetlb: support file_region coalescing again hugetlb_cgroup: add hugetlb_cgroup reservation tests hugetlb_cgroup: add hugetlb_cgroup reservation docs Mateusz Nosek <mateusznosek0@gmail.com>: mm/hugetlb.c: clean code by removing unnecessary initialization Vlastimil Babka <vbabka@suse.cz>: mm/hugetlb: remove unnecessary memory fetch in PageHeadHuge() Christophe Leroy <christophe.leroy@c-s.fr>: selftests/vm: fix map_hugetlb length used for testing read and write mm/hugetlb: fix build failure with HUGETLB_PAGE but not HUGEBTLBFS "Matthew Wilcox (Oracle)" <willy@infradead.org>: include/linux/huge_mm.h: check PageTail in hpage_nr_pages even when !THP Documentation/admin-guide/cgroup-v1/hugetlb.rst | 103 +- Documentation/admin-guide/cgroup-v2.rst | 11 Documentation/admin-guide/sysctl/vm.rst | 3 Documentation/core-api/mm-api.rst | 3 Documentation/core-api/pin_user_pages.rst | 86 + arch/alpha/include/asm/Kbuild | 11 
arch/alpha/mm/fault.c | 6 arch/arc/include/asm/Kbuild | 21 arch/arc/mm/fault.c | 37 arch/arm/include/asm/Kbuild | 12 arch/arm/mm/fault.c | 7 arch/arm64/include/asm/Kbuild | 18 arch/arm64/mm/fault.c | 26 arch/c6x/include/asm/Kbuild | 37 arch/csky/include/asm/Kbuild | 36 arch/h8300/include/asm/Kbuild | 46 arch/hexagon/include/asm/Kbuild | 33 arch/hexagon/mm/vm_fault.c | 5 arch/ia64/include/asm/Kbuild | 7 arch/ia64/mm/fault.c | 5 arch/m68k/include/asm/Kbuild | 24 arch/m68k/mm/fault.c | 7 arch/microblaze/include/asm/Kbuild | 29 arch/microblaze/mm/fault.c | 5 arch/mips/include/asm/Kbuild | 13 arch/mips/mm/fault.c | 5 arch/nds32/include/asm/Kbuild | 37 arch/nds32/mm/fault.c | 5 arch/nios2/include/asm/Kbuild | 38 arch/nios2/mm/fault.c | 7 arch/openrisc/include/asm/Kbuild | 36 arch/openrisc/mm/fault.c | 5 arch/parisc/include/asm/Kbuild | 18 arch/parisc/mm/fault.c | 8 arch/powerpc/include/asm/Kbuild | 4 arch/powerpc/mm/book3s64/pkeys.c | 12 arch/powerpc/mm/fault.c | 20 arch/powerpc/platforms/pseries/hotplug-memory.c | 2 arch/riscv/include/asm/Kbuild | 28 arch/riscv/mm/fault.c | 9 arch/s390/include/asm/Kbuild | 15 arch/s390/mm/fault.c | 10 arch/sh/include/asm/Kbuild | 16 arch/sh/mm/fault.c | 13 arch/sparc/include/asm/Kbuild | 14 arch/sparc/mm/fault_32.c | 5 arch/sparc/mm/fault_64.c | 5 arch/um/kernel/trap.c | 3 arch/unicore32/include/asm/Kbuild | 34 arch/unicore32/mm/fault.c | 8 arch/x86/include/asm/Kbuild | 2 arch/x86/include/asm/mmu_context.h | 15 arch/x86/mm/fault.c | 32 arch/xtensa/include/asm/Kbuild | 26 arch/xtensa/mm/fault.c | 5 drivers/base/node.c | 2 drivers/gpu/drm/ttm/ttm_bo_vm.c | 12 fs/fs_parser.c | 2 fs/hugetlbfs/inode.c | 30 fs/ocfs2/alloc.c | 3 fs/ocfs2/cluster/heartbeat.c | 12 fs/ocfs2/cluster/netdebug.c | 4 fs/ocfs2/cluster/tcp.c | 27 fs/ocfs2/cluster/tcp.h | 2 fs/ocfs2/dir.c | 4 fs/ocfs2/dlm/dlmcommon.h | 8 fs/ocfs2/dlm/dlmdebug.c | 100 - fs/ocfs2/dlm/dlmmaster.c | 2 fs/ocfs2/dlm/dlmthread.c | 3 fs/ocfs2/dlmglue.c | 2 fs/ocfs2/journal.c | 2 
fs/ocfs2/namei.c | 15 fs/ocfs2/ocfs2_fs.h | 18 fs/ocfs2/refcounttree.c | 2 fs/ocfs2/reservations.c | 3 fs/ocfs2/stackglue.c | 2 fs/ocfs2/suballoc.c | 5 fs/ocfs2/super.c | 46 fs/pipe.c | 2 fs/userfaultfd.c | 64 - include/asm-generic/Kbuild | 52 + include/linux/cgroup-defs.h | 5 include/linux/fs.h | 5 include/linux/gfp.h | 6 include/linux/huge_mm.h | 10 include/linux/hugetlb.h | 76 + include/linux/hugetlb_cgroup.h | 175 +++ include/linux/kasan.h | 2 include/linux/kthread.h | 3 include/linux/memcontrol.h | 66 - include/linux/mempolicy.h | 29 include/linux/mm.h | 243 +++- include/linux/mm_types.h | 7 include/linux/mmzone.h | 6 include/linux/page_ref.h | 9 include/linux/pagemap.h | 29 include/linux/sched/signal.h | 18 include/linux/swap.h | 1 include/linux/topology.h | 17 include/trace/events/mmap.h | 48 include/uapi/linux/mman.h | 5 kernel/cgroup/cgroup.c | 17 kernel/fork.c | 9 kernel/sysctl.c | 31 lib/test_kasan.c | 19 mm/Makefile | 1 mm/compaction.c | 31 mm/debug.c | 54 - mm/filemap.c | 77 - mm/gup.c | 682 ++++++++++--- mm/gup_benchmark.c | 71 + mm/huge_memory.c | 29 mm/hugetlb.c | 866 ++++++++++++----- mm/hugetlb_cgroup.c | 347 +++++- mm/internal.h | 32 mm/kasan/common.c | 26 mm/kasan/generic.c | 9 mm/kasan/generic_report.c | 11 mm/kasan/kasan.h | 2 mm/kasan/report.c | 5 mm/kasan/tags.c | 9 mm/kasan/tags_report.c | 11 mm/khugepaged.c | 4 mm/kmemleak.c | 2 mm/list_lru.c | 12 mm/mapping_dirty_helpers.c | 42 mm/memblock.c | 2 mm/memcontrol.c | 378 ++++--- mm/memory-failure.c | 29 mm/memory.c | 4 mm/mempolicy.c | 73 + mm/migrate.c | 25 mm/mmap.c | 32 mm/mremap.c | 92 + mm/page-writeback.c | 19 mm/page_alloc.c | 82 - mm/page_counter.c | 29 mm/page_ext.c | 2 mm/rmap.c | 39 mm/shuffle.c | 2 mm/slab.h | 32 mm/slab_common.c | 2 mm/slub.c | 27 mm/sparse.c | 33 mm/swap.c | 5 mm/swap_slots.c | 12 mm/swap_state.c | 2 mm/swapfile.c | 10 mm/userfaultfd.c | 11 mm/vmpressure.c | 8 mm/vmscan.c | 111 -- mm/vmstat.c | 2 scripts/spelling.txt | 21 tools/accounting/getdelays.c | 2 
tools/testing/selftests/vm/.gitignore | 1 tools/testing/selftests/vm/Makefile | 2 tools/testing/selftests/vm/charge_reserved_hugetlb.sh | 575 +++++++++++ tools/testing/selftests/vm/gup_benchmark.c | 15 tools/testing/selftests/vm/hugetlb_reparenting_test.sh | 244 ++++ tools/testing/selftests/vm/map_hugetlb.c | 14 tools/testing/selftests/vm/mlock2-tests.c | 233 ---- tools/testing/selftests/vm/mremap_dontunmap.c | 313 ++++++ tools/testing/selftests/vm/run_vmtests | 37 tools/testing/selftests/vm/write_hugetlb_memory.sh | 23 tools/testing/selftests/vm/write_to_hugetlbfs.c | 242 ++++ 165 files changed, 5020 insertions(+), 2376 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming
@ 2020-03-29  2:14 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2020-03-29 2:14 UTC (permalink / raw)
To: Linus Torvalds; +Cc: mm-commits, linux-mm

5 fixes, based on 83fd69c93340177dcd66fd26ce6441fb581c1dbf:

Naohiro Aota <naohiro.aota@wdc.com>:
  mm/swapfile.c: move inode_lock out of claim_swapfile

David Hildenbrand <david@redhat.com>:
  drivers/base/memory.c: indicate all memory blocks as removable

Mina Almasry <almasrymina@google.com>:
  hugetlb_cgroup: fix illegal access to memory

Roman Gushchin <guro@fb.com>:
  mm: fork: fix kernel_stack memcg stats for various stack implementations

"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>:
  mm/sparse: fix kernel crash with pfn_section_valid check

 drivers/base/memory.c      | 23 +++--------------------
 include/linux/memcontrol.h | 12 ++++++++++++
 kernel/fork.c              |  4 ++--
 mm/hugetlb_cgroup.c        |  3 +--
 mm/memcontrol.c            | 38 ++++++++++++++++++++++++++++++++++++++
 mm/sparse.c                |  6 ++++++
 mm/swapfile.c              | 41 ++++++++++++++++++++---------------------
 7 files changed, 82 insertions(+), 45 deletions(-)
* incoming
@ 2020-03-22  1:19 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2020-03-22 1:19 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits

10 fixes, based on c63c50fc2ec9afc4de21ef9ead2eac64b178cce1:

Chunguang Xu <brookxu@tencent.com>:
  memcg: fix NULL pointer dereference in __mem_cgroup_usage_unregister_event

Baoquan He <bhe@redhat.com>:
  mm/hotplug: fix hot remove failure in SPARSEMEM|!VMEMMAP case

Qian Cai <cai@lca.pw>:
  page-flags: fix a crash at SetPageError(THP_SWAP)

Chris Down <chris@chrisdown.name>:
  mm, memcg: fix corruption on 64-bit divisor in memory.high throttling
  mm, memcg: throttle allocators based on ancestral memory.high

Michal Hocko <mhocko@suse.com>:
  mm: do not allow MADV_PAGEOUT for CoW pages

Roman Penyaev <rpenyaev@suse.de>:
  epoll: fix possible lost wakeup on epoll_ctl() path

Qian Cai <cai@lca.pw>:
  mm/mmu_notifier: silence PROVE_RCU_LIST warnings

Vlastimil Babka <vbabka@suse.cz>:
  mm, slub: prevent kmalloc_node crashes and memory leaks

Joerg Roedel <jroedel@suse.de>:
  x86/mm: split vmalloc_sync_all()

 arch/x86/mm/fault.c        |  26 ++++++++++-
 drivers/acpi/apei/ghes.c   |   2
 fs/eventpoll.c             |   8 +--
 include/linux/page-flags.h |   2
 include/linux/vmalloc.h    |   5 +-
 kernel/notifier.c          |   2
 mm/madvise.c               |  12 +++--
 mm/memcontrol.c            | 105 ++++++++++++++++++++++++++++-----------
 mm/mmu_notifier.c          |  27 +++++++---
 mm/nommu.c                 |  10 +++-
 mm/slub.c                  |  26 +++++++---
 mm/sparse.c                |   8 ++-
 mm/vmalloc.c               |  11 +++-
 13 files changed, 165 insertions(+), 79 deletions(-)
* incoming
@ 2020-03-06  6:27 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2020-03-06 6:27 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits

7 fixes, based on 9f65ed5fe41ce08ed1cb1f6a950f9ec694c142ad:

Mel Gorman <mgorman@techsingularity.net>:
  mm, numa: fix bad pmd by atomically check for pmd_trans_huge when marking page tables prot_numa

Huang Ying <ying.huang@intel.com>:
  mm: fix possible PMD dirty bit lost in set_pmd_migration_entry()

"Kirill A. Shutemov" <kirill@shutemov.name>:
  mm: avoid data corruption on CoW fault into PFN-mapped VMA

OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>:
  fat: fix uninit-memory access for partial initialized inode

Sebastian Andrzej Siewior <bigeasy@linutronix.de>:
  mm/z3fold.c: do not include rwlock.h directly

Vlastimil Babka <vbabka@suse.cz>:
  mm, hotplug: fix page online with DEBUG_PAGEALLOC compiled but not enabled

Miroslav Benes <mbenes@suse.cz>:
  arch/Kconfig: update HAVE_RELIABLE_STACKTRACE description

 arch/Kconfig        |  5 +++--
 fs/fat/inode.c      | 19 +++++++------------
 include/linux/mm.h  |  4 ++++
 mm/huge_memory.c    |  3 +--
 mm/memory.c         | 35 +++++++++++++++++++++++++++--------
 mm/memory_hotplug.c |  8 +++++++-
 mm/mprotect.c       | 38 ++++++++++++++++++++++++++++++++++++--
 mm/z3fold.c         |  1 -
 8 files changed, 85 insertions(+), 28 deletions(-)
* incoming
@ 2020-02-21  4:00 Andrew Morton
  2020-02-21  4:03 ` incoming Andrew Morton
  2020-02-21 18:21 ` incoming Linus Torvalds
  0 siblings, 2 replies; 349+ messages in thread

From: Andrew Morton @ 2020-02-21 4:00 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits

- A few y2038 fixes which missed the merge window while dependencies
  in NFS were being sorted out.

- A bunch of fixes. Some minor, some not.

Subsystems affected by this patch series:

Arnd Bergmann <arnd@arndb.de>:
  y2038: remove ktime to/from timespec/timeval conversion
  y2038: remove unused time32 interfaces
  y2038: hide timeval/timespec/itimerval/itimerspec types

Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>:
  Revert "ipc,sem: remove uneeded sem_undo_list lock usage in exit_sem()"

Christian Borntraeger <borntraeger@de.ibm.com>:
  include/uapi/linux/swab.h: fix userspace breakage, use __BITS_PER_LONG for swap

SeongJae Park <sjpark@amazon.de>:
  selftests/vm: add missed tests in run_vmtests

Joe Perches <joe@perches.com>:
  get_maintainer: remove uses of P: for maintainer name

Douglas Anderson <dianders@chromium.org>:
  scripts/get_maintainer.pl: deprioritize old Fixes: addresses

Christoph Hellwig <hch@lst.de>:
  mm/swapfile.c: fix a comment in sys_swapon()

Vasily Averin <vvs@virtuozzo.com>:
  mm/memcontrol.c: lost css_put in memcg_expand_shrinker_maps()

Alexandru Ardelean <alexandru.ardelean@analog.com>:
  lib/string.c: update match_string() doc-strings with correct behavior

Gavin Shan <gshan@redhat.com>:
  mm/vmscan.c: don't round up scan size for online memory cgroup

Wei Yang <richardw.yang@linux.intel.com>:
  mm/sparsemem: pfn_to_page is not valid yet on SPARSEMEM

Alexander Potapenko <glider@google.com>:
  lib/stackdepot.c: fix global out-of-bounds in stack_slabs

Randy Dunlap <rdunlap@infradead.org>:
  MAINTAINERS: use tabs for SAFESETID

 MAINTAINERS                            |   8 -
 include/linux/compat.h                 |  29 ------
 include/linux/ktime.h                  |  37 -------
 include/linux/time32.h                 | 154 ---------------------------------
 include/linux/timekeeping32.h          |  32 ------
 include/linux/types.h                  |   5 -
 include/uapi/asm-generic/posix_types.h |   2
 include/uapi/linux/swab.h              |   4
 include/uapi/linux/time.h              |  22 ++--
 ipc/sem.c                              |   6 -
 kernel/compat.c                        |  64 -------------
 kernel/time/time.c                     |  43 ---------
 lib/stackdepot.c                       |   8 +
 lib/string.c                           |  16 +++
 mm/memcontrol.c                        |   4
 mm/sparse.c                            |   2
 mm/swapfile.c                          |   2
 mm/vmscan.c                            |   9 +
 scripts/get_maintainer.pl              |  32 ------
 tools/testing/selftests/vm/run_vmtests |  33 +++++++
 20 files changed, 93 insertions(+), 419 deletions(-)
* Re: incoming
  2020-02-21  4:00 incoming Andrew Morton
@ 2020-02-21  4:03 ` Andrew Morton
  2020-02-21 18:21 ` incoming Linus Torvalds
  1 sibling, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2020-02-21 4:03 UTC (permalink / raw)
To: Linus Torvalds, linux-mm, mm-commits

On Thu, 20 Feb 2020 20:00:30 -0800 Andrew Morton <akpm@linux-foundation.org> wrote:

> - A few y2038 fixes which missed the merge window while dependencies
> in NFS were being sorted out.
>
> - A bunch of fixes. Some minor, some not.

15 patches, based on ca7e1fd1026c5af6a533b4b5447e1d2f153e28f2
* Re: incoming
  2020-02-21  4:00 incoming Andrew Morton
  2020-02-21  4:03 ` incoming Andrew Morton
@ 2020-02-21 18:21 ` Linus Torvalds
  2020-02-21 18:32 ` incoming Konstantin Ryabitsev
  2020-02-21 19:33 ` incoming Linus Torvalds
  1 sibling, 2 replies; 349+ messages in thread

From: Linus Torvalds @ 2020-02-21 18:21 UTC (permalink / raw)
To: Andrew Morton, Konstantin Ryabitsev; +Cc: Linux-MM, mm-commits

On Thu, Feb 20, 2020 at 8:00 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> - A few y2038 fixes which missed the merge window while dependencies
> in NFS were being sorted out.
>
> - A bunch of fixes. Some minor, some not.

Hmm. Konstantin's nice lore script _used_ to pick up your patches, but
now they don't.

I'm not sure what changed. It worked with your big series of 118 patches.

It doesn't work with this smaller series of fixes.

I think the difference is that you've done something bad to your patch
sending. That big series was properly threaded with each of the
patches being a reply to the 'incoming' message.

This series is not.

Please, Andrew, can you make your email flow more consistent so that I
can actually use the nice new tool to download a patch series?

            Linus
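The threading Linus relies on here is plain RFC 5322 header linkage: each patch mail names the cover letter's Message-ID in its In-Reply-To/References headers, and public-inbox groups messages on exactly that. A minimal sketch of what the missing linkage looks like, using Python's standard library (the message-id below is made up for illustration):

```python
from email.message import EmailMessage

# Hypothetical Message-ID of the "incoming" cover letter -- not a real one.
cover_id = "<20200221040030.cover@example.org>"

patch = EmailMessage()
patch["Subject"] = "[patch 01/15] lib/test_bitmap: correct test data offsets for 32-bit"
patch["In-Reply-To"] = cover_id   # reply linkage: this header was absent
patch["References"] = cover_id    # archivers also thread on References
patch.set_content("patch body goes here\n")

# Without these two headers each patch stands alone, and the thread
# grouping in public-inbox/lore has nothing to hang the series on.
```

This is only a sketch of the header mechanics; a real sender would of course also set From/To/Date and hand the message to an SMTP transport.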
* Re: incoming
  2020-02-21 18:21 ` incoming Linus Torvalds
@ 2020-02-21 18:32 ` Konstantin Ryabitsev
  2020-02-27  9:59 ` incoming Vlastimil Babka
  2020-02-21 19:33 ` incoming Linus Torvalds
  1 sibling, 1 reply; 349+ messages in thread

From: Konstantin Ryabitsev @ 2020-02-21 18:32 UTC (permalink / raw)
To: Linus Torvalds; +Cc: Andrew Morton, Linux-MM, mm-commits

On Fri, Feb 21, 2020 at 10:21:19AM -0800, Linus Torvalds wrote:
> On Thu, Feb 20, 2020 at 8:00 PM Andrew Morton <akpm@linux-foundation.org> wrote:
> >
> > - A few y2038 fixes which missed the merge window while dependencies
> > in NFS were being sorted out.
> >
> > - A bunch of fixes. Some minor, some not.
>
> Hmm. Konstantin's nice lore script _used_ to pick up your patches, but
> now they don't.
>
> I'm not sure what changed. It worked with your big series of 118 patches.
>
> It doesn't work with this smaller series of fixes.
>
> I think the difference is that you've done something bad to your patch
> sending. That big series was properly threaded with each of the
> patches being a reply to the 'incoming' message.
>
> This series is not.

This is correct -- each patch is posted without an in-reply-to, so
public-inbox doesn't group them into a thread.

E.g.:
https://lore.kernel.org/linux-mm/20200221040350.84HaG%25akpm@linux-foundation.org/

> Please, Andrew, can you make your email flow more consistent so that I
> can actually use the nice new tool to download a patch series?

Andrew, I'll be happy to provide you with a helper tool if you can
describe me your workflow. E.g. if you have a quilt directory of patches
plus a series file, it could easily be a tiny wrapper like:

  send-patches --base-commit 1234abcd --cover cover.txt patchdir/series

-K
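For illustration only, the core of the wrapper Konstantin describes could be sketched like this; the function name, the dry-run design (assembling the command rather than running it), and the reliance on `git send-email --in-reply-to` for threading are assumptions, not his actual tool:

```python
from pathlib import Path

def send_patches_cmd(cover_msgid, series_path):
    """Build (but do not run) a 'git send-email' invocation that threads
    a quilt series under an already-sent cover letter's Message-ID."""
    series = Path(series_path)
    patches = [
        line.strip()
        for line in series.read_text().splitlines()
        # skip blank lines and quilt comment lines
        if line.strip() and not line.lstrip().startswith("#")
    ]
    cmd = [
        "git", "send-email",
        "--to=torvalds@linux-foundation.org",
        # Threading every patch under the cover letter is exactly the
        # linkage that was missing from the series Linus complained about.
        f"--in-reply-to={cover_msgid}",
    ]
    # Patch files live next to the series file in a quilt layout.
    cmd += [str(series.parent / p) for p in patches]
    return cmd
```

A real tool would also record the base commit (e.g. as a `base-commit:` trailer via `git format-patch --base`) and actually invoke the command; this sketch only assembles it.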
* Re: incoming
  2020-02-21 18:32 ` incoming Konstantin Ryabitsev
@ 2020-02-27  9:59 ` Vlastimil Babka
  0 siblings, 0 replies; 349+ messages in thread

From: Vlastimil Babka @ 2020-02-27 9:59 UTC (permalink / raw)
To: Konstantin Ryabitsev, Linus Torvalds; +Cc: Andrew Morton, Linux-MM, mm-commits

On 2/21/20 7:32 PM, Konstantin Ryabitsev wrote:
> On Fri, Feb 21, 2020 at 10:21:19AM -0800, Linus Torvalds wrote:
>> On Thu, Feb 20, 2020 at 8:00 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>> >
>> > - A few y2038 fixes which missed the merge window while dependencies
>> > in NFS were being sorted out.
>> >
>> > - A bunch of fixes. Some minor, some not.
>>
>> Hmm. Konstantin's nice lore script _used_ to pick up your patches, but
>> now they don't.
>>
>> I'm not sure what changed. It worked with your big series of 118 patches.
>>
>> It doesn't work with this smaller series of fixes.
>>
>> I think the difference is that you've done something bad to your patch
>> sending. That big series was properly threaded with each of the
>> patches being a reply to the 'incoming' message.
>>
>> This series is not.
>
> This is correct -- each patch is posted without an in-reply-to, so
> public-inbox doesn't group them into a thread.
>
> E.g.:
> https://lore.kernel.org/linux-mm/20200221040350.84HaG%25akpm@linux-foundation.org/
>
>> Please, Andrew, can you make your email flow more consistent so that I
>> can actually use the nice new tool to download a patch series?
>
> Andrew, I'll be happy to provide you with a helper tool if you can
> describe me your workflow. E.g. if you have a quilt directory of patches
> plus a series file, it could easily be a tiny wrapper like:
>
> send-patches --base-commit 1234abcd --cover cover.txt patchdir/series

Once/if there is such tool, could it perhaps instead of mass e-mailing
create git commits, push them to korg repo and send a pull request?

Thanks, Vlastimil

> -K
* Re: incoming
  2020-02-21 18:21 ` incoming Linus Torvalds
  2020-02-21 18:32 ` incoming Konstantin Ryabitsev
@ 2020-02-21 19:33 ` Linus Torvalds
  1 sibling, 0 replies; 349+ messages in thread

From: Linus Torvalds @ 2020-02-21 19:33 UTC (permalink / raw)
To: Andrew Morton, Konstantin Ryabitsev; +Cc: Linux-MM, mm-commits

Side note: I've obviously picked it up the old-fashioned way, but I had
been looking forward to seeing if I could just automate this more.

            Linus

On Fri, Feb 21, 2020 at 10:21 AM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> Please, Andrew, can you make your email flow more consistent so that I
> can actually use the nice new tool to download a patch series?
>
> Linus
* incoming
@ 2020-02-04  1:33 Andrew Morton
  2020-02-04  2:27 ` incoming Linus Torvalds
  0 siblings, 1 reply; 349+ messages in thread

From: Andrew Morton @ 2020-02-04 1:33 UTC (permalink / raw)
To: Linus Torvalds; +Cc: mm-commits, linux-mm

The rest of MM and the rest of everything else.

Subsystems affected by this patch series:
  hotfixes mm/pagealloc mm/memory-hotplug ipc misc mm/cleanups
  mm/pagemap procfs lib cleanups arm

Subsystem: hotfixes

Gang He <GHe@suse.com>:
  ocfs2: fix oops when writing cloned file

David Hildenbrand <david@redhat.com>:
  Patch series "mm: fix max_pfn not falling on section boundary", v2:
    mm/page_alloc.c: fix uninitialized memmaps on a partially populated last section
    fs/proc/page.c: allow inspection of last section and fix end detection
    mm/page_alloc.c: initialize memmap of unavailable memory directly

Subsystem: mm/pagealloc

David Hildenbrand <david@redhat.com>:
  mm/page_alloc: fix and rework pfn handling in memmap_init_zone()
  mm: factor out next_present_section_nr()

Subsystem: mm/memory-hotplug

"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>:
  Patch series "mm/memory_hotplug: Shrink zones before removing memory", v6:
    mm/memmap_init: update variable name in memmap_init_zone

David Hildenbrand <david@redhat.com>:
  mm/memory_hotplug: poison memmap in remove_pfn_range_from_zone()
  mm/memory_hotplug: we always have a zone in find_(smallest|biggest)_section_pfn
  mm/memory_hotplug: don't check for "all holes" in shrink_zone_span()
  mm/memory_hotplug: drop local variables in shrink_zone_span()
  mm/memory_hotplug: cleanup __remove_pages()
  mm/memory_hotplug: drop valid_start/valid_end from test_pages_in_a_zone()

Subsystem: ipc

Manfred Spraul <manfred@colorfullife.com>:
  smp_mb__{before,after}_atomic(): update Documentation

Davidlohr Bueso <dave@stgolabs.net>:
  ipc/mqueue.c: remove duplicated code

Manfred Spraul <manfred@colorfullife.com>:
  ipc/mqueue.c: update/document memory barriers
  ipc/msg.c: update and document memory barriers
  ipc/sem.c: document and update memory barriers

Lu Shuaibing <shuaibinglu@126.com>:
  ipc/msg.c: consolidate all xxxctl_down() functions
  drivers/block/null_blk_main.c: fix layout

Subsystem: misc

Andrew Morton <akpm@linux-foundation.org>:
  drivers/block/null_blk_main.c: fix layout
  drivers/block/null_blk_main.c: fix uninitialized var warnings

Randy Dunlap <rdunlap@infradead.org>:
  pinctrl: fix pxa2xx.c build warnings

Subsystem: mm/cleanups

Florian Westphal <fw@strlen.de>:
  mm: remove __krealloc

Subsystem: mm/pagemap

Steven Price <steven.price@arm.com>:
  Patch series "Generic page walk and ptdump", v17:
    mm: add generic p?d_leaf() macros
    arc: mm: add p?d_leaf() definitions
    arm: mm: add p?d_leaf() definitions
    arm64: mm: add p?d_leaf() definitions
    mips: mm: add p?d_leaf() definitions
    powerpc: mm: add p?d_leaf() definitions
    riscv: mm: add p?d_leaf() definitions
    s390: mm: add p?d_leaf() definitions
    sparc: mm: add p?d_leaf() definitions
    x86: mm: add p?d_leaf() definitions
    mm: pagewalk: add p4d_entry() and pgd_entry()
    mm: pagewalk: allow walking without vma
    mm: pagewalk: don't lock PTEs for walk_page_range_novma()
    mm: pagewalk: fix termination condition in walk_pte_range()
    mm: pagewalk: add 'depth' parameter to pte_hole
    x86: mm: point to struct seq_file from struct pg_state
    x86: mm+efi: convert ptdump_walk_pgd_level() to take a mm_struct
    x86: mm: convert ptdump_walk_pgd_level_debugfs() to take an mm_struct
    mm: add generic ptdump
    x86: mm: convert dump_pagetables to use walk_page_range
    arm64: mm: convert mm/dump.c to use walk_page_range()
    arm64: mm: display non-present entries in ptdump
    mm: ptdump: reduce level numbers by 1 in note_page()
    x86: mm: avoid allocating struct mm_struct on the stack

"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>:
  Patch series "Fixup page directory freeing", v4:
    powerpc/mmu_gather: enable RCU_TABLE_FREE even for !SMP case

Peter Zijlstra <peterz@infradead.org>:
  mm/mmu_gather: invalidate TLB correctly on batch allocation failure and flush
  asm-generic/tlb: avoid potential double flush
asm-gemeric/tlb: remove stray function declarations asm-generic/tlb: add missing CONFIG symbol asm-generic/tlb: rename HAVE_RCU_TABLE_FREE asm-generic/tlb: rename HAVE_MMU_GATHER_PAGE_SIZE asm-generic/tlb: rename HAVE_MMU_GATHER_NO_GATHER asm-generic/tlb: provide MMU_GATHER_TABLE_FREE Subsystem: procfs Alexey Dobriyan <adobriyan@gmail.com>: proc: decouple proc from VFS with "struct proc_ops" proc: convert everything to "struct proc_ops" Subsystem: lib Yury Norov <yury.norov@gmail.com>: Patch series "lib: rework bitmap_parse", v5: lib/string: add strnchrnul() bitops: more BITS_TO_* macros lib: add test for bitmap_parse() lib: make bitmap_parse_user a wrapper on bitmap_parse lib: rework bitmap_parse() lib: new testcases for bitmap_parse{_user} include/linux/cpumask.h: don't calculate length of the input string Subsystem: cleanups Masahiro Yamada <masahiroy@kernel.org>: treewide: remove redundant IS_ERR() before error code check Subsystem: arm Chen-Yu Tsai <wens@csie.org>: ARM: dma-api: fix max_pfn off-by-one error in __dma_supported() Documentation/memory-barriers.txt | 14 arch/Kconfig | 17 arch/alpha/kernel/srm_env.c | 17 arch/arc/include/asm/pgtable.h | 1 arch/arm/Kconfig | 2 arch/arm/include/asm/pgtable-2level.h | 1 arch/arm/include/asm/pgtable-3level.h | 1 arch/arm/include/asm/tlb.h | 6 arch/arm/kernel/atags_proc.c | 8 arch/arm/mm/alignment.c | 14 arch/arm/mm/dma-mapping.c | 2 arch/arm64/Kconfig | 3 arch/arm64/Kconfig.debug | 19 arch/arm64/include/asm/pgtable.h | 2 arch/arm64/include/asm/ptdump.h | 8 arch/arm64/mm/Makefile | 4 arch/arm64/mm/dump.c | 152 ++---- arch/arm64/mm/mmu.c | 4 arch/arm64/mm/ptdump_debugfs.c | 2 arch/ia64/kernel/salinfo.c | 24 - arch/m68k/kernel/bootinfo_proc.c | 8 arch/mips/include/asm/pgtable.h | 5 arch/mips/lasat/picvue_proc.c | 31 - arch/powerpc/Kconfig | 7 arch/powerpc/include/asm/book3s/32/pgalloc.h | 8 arch/powerpc/include/asm/book3s/64/pgalloc.h | 2 arch/powerpc/include/asm/book3s/64/pgtable.h | 3 
arch/powerpc/include/asm/nohash/pgalloc.h | 8 arch/powerpc/include/asm/tlb.h | 11 arch/powerpc/kernel/proc_powerpc.c | 10 arch/powerpc/kernel/rtas-proc.c | 70 +-- arch/powerpc/kernel/rtas_flash.c | 34 - arch/powerpc/kernel/rtasd.c | 14 arch/powerpc/mm/book3s64/pgtable.c | 7 arch/powerpc/mm/numa.c | 12 arch/powerpc/platforms/pseries/lpar.c | 24 - arch/powerpc/platforms/pseries/lparcfg.c | 14 arch/powerpc/platforms/pseries/reconfig.c | 8 arch/powerpc/platforms/pseries/scanlog.c | 15 arch/riscv/include/asm/pgtable-64.h | 7 arch/riscv/include/asm/pgtable.h | 7 arch/s390/Kconfig | 4 arch/s390/include/asm/pgtable.h | 2 arch/sh/mm/alignment.c | 17 arch/sparc/Kconfig | 3 arch/sparc/include/asm/pgtable_64.h | 2 arch/sparc/include/asm/tlb_64.h | 11 arch/sparc/kernel/led.c | 15 arch/um/drivers/mconsole_kern.c | 9 arch/um/kernel/exitcode.c | 15 arch/um/kernel/process.c | 15 arch/x86/Kconfig | 3 arch/x86/Kconfig.debug | 20 arch/x86/include/asm/pgtable.h | 10 arch/x86/include/asm/tlb.h | 4 arch/x86/kernel/cpu/mtrr/if.c | 21 arch/x86/mm/Makefile | 4 arch/x86/mm/debug_pagetables.c | 18 arch/x86/mm/dump_pagetables.c | 418 +++++------------- arch/x86/platform/efi/efi_32.c | 2 arch/x86/platform/efi/efi_64.c | 4 arch/x86/platform/uv/tlb_uv.c | 14 arch/xtensa/platforms/iss/simdisk.c | 10 crypto/af_alg.c | 2 drivers/acpi/battery.c | 15 drivers/acpi/proc.c | 15 drivers/acpi/scan.c | 2 drivers/base/memory.c | 9 drivers/block/null_blk_main.c | 58 +- drivers/char/hw_random/bcm2835-rng.c | 2 drivers/char/hw_random/omap-rng.c | 4 drivers/clk/clk.c | 2 drivers/dma/mv_xor_v2.c | 2 drivers/firmware/efi/arm-runtime.c | 2 drivers/gpio/gpiolib-devres.c | 2 drivers/gpio/gpiolib-of.c | 8 drivers/gpio/gpiolib.c | 2 drivers/hwmon/dell-smm-hwmon.c | 15 drivers/i2c/busses/i2c-mv64xxx.c | 5 drivers/i2c/busses/i2c-synquacer.c | 2 drivers/ide/ide-proc.c | 19 drivers/input/input.c | 28 - drivers/isdn/capi/kcapi_proc.c | 6 drivers/macintosh/via-pmu.c | 17 drivers/md/md.c | 15 drivers/misc/sgi-gru/gruprocfs.c 
| 42 - drivers/mtd/ubi/build.c | 2 drivers/net/wireless/cisco/airo.c | 126 ++--- drivers/net/wireless/intel/ipw2x00/libipw_module.c | 15 drivers/net/wireless/intersil/hostap/hostap_hw.c | 4 drivers/net/wireless/intersil/hostap/hostap_proc.c | 14 drivers/net/wireless/intersil/hostap/hostap_wlan.h | 2 drivers/net/wireless/ray_cs.c | 20 drivers/of/device.c | 2 drivers/parisc/led.c | 17 drivers/pci/controller/pci-tegra.c | 2 drivers/pci/proc.c | 25 - drivers/phy/phy-core.c | 4 drivers/pinctrl/pxa/pinctrl-pxa2xx.c | 1 drivers/platform/x86/thinkpad_acpi.c | 15 drivers/platform/x86/toshiba_acpi.c | 60 +- drivers/pnp/isapnp/proc.c | 9 drivers/pnp/pnpbios/proc.c | 17 drivers/s390/block/dasd_proc.c | 15 drivers/s390/cio/blacklist.c | 14 drivers/s390/cio/css.c | 11 drivers/scsi/esas2r/esas2r_main.c | 9 drivers/scsi/scsi_devinfo.c | 15 drivers/scsi/scsi_proc.c | 29 - drivers/scsi/sg.c | 30 - drivers/spi/spi-orion.c | 3 drivers/staging/rtl8192u/ieee80211/ieee80211_module.c | 14 drivers/tty/sysrq.c | 8 drivers/usb/gadget/function/rndis.c | 17 drivers/video/fbdev/imxfb.c | 2 drivers/video/fbdev/via/viafbdev.c | 105 ++-- drivers/zorro/proc.c | 9 fs/cifs/cifs_debug.c | 108 ++-- fs/cifs/dfs_cache.c | 13 fs/cifs/dfs_cache.h | 2 fs/ext4/super.c | 2 fs/f2fs/node.c | 2 fs/fscache/internal.h | 2 fs/fscache/object-list.c | 11 fs/fscache/proc.c | 2 fs/jbd2/journal.c | 13 fs/jfs/jfs_debug.c | 14 fs/lockd/procfs.c | 12 fs/nfsd/nfsctl.c | 13 fs/nfsd/stats.c | 12 fs/ocfs2/file.c | 14 fs/ocfs2/suballoc.c | 2 fs/proc/cpuinfo.c | 12 fs/proc/generic.c | 38 - fs/proc/inode.c | 76 +-- fs/proc/internal.h | 5 fs/proc/kcore.c | 13 fs/proc/kmsg.c | 14 fs/proc/page.c | 54 +- fs/proc/proc_net.c | 32 - fs/proc/proc_sysctl.c | 2 fs/proc/root.c | 2 fs/proc/stat.c | 12 fs/proc/task_mmu.c | 4 fs/proc/vmcore.c | 10 fs/sysfs/group.c | 2 include/asm-generic/pgtable.h | 20 include/asm-generic/tlb.h | 138 +++-- include/linux/bitmap.h | 8 include/linux/bitops.h | 4 include/linux/cpumask.h | 4 
include/linux/memory_hotplug.h | 4 include/linux/mm.h | 6 include/linux/mmzone.h | 10 include/linux/pagewalk.h | 49 +- include/linux/proc_fs.h | 23 include/linux/ptdump.h | 24 - include/linux/seq_file.h | 13 include/linux/slab.h | 1 include/linux/string.h | 1 include/linux/sunrpc/stats.h | 4 ipc/mqueue.c | 123 ++++- ipc/msg.c | 62 +- ipc/sem.c | 66 +- ipc/util.c | 14 kernel/configs.c | 9 kernel/irq/proc.c | 42 - kernel/kallsyms.c | 12 kernel/latencytop.c | 14 kernel/locking/lockdep_proc.c | 15 kernel/module.c | 12 kernel/profile.c | 24 - kernel/sched/psi.c | 48 +- lib/bitmap.c | 195 ++++---- lib/string.c | 17 lib/test_bitmap.c | 105 ++++ mm/Kconfig.debug | 21 mm/Makefile | 1 mm/gup.c | 2 mm/hmm.c | 66 +- mm/memory_hotplug.c | 104 +--- mm/memremap.c | 2 mm/migrate.c | 5 mm/mincore.c | 1 mm/mmu_gather.c | 158 ++++-- mm/page_alloc.c | 75 +-- mm/pagewalk.c | 167 +++++-- mm/ptdump.c | 159 ++++++ mm/slab_common.c | 37 - mm/sparse.c | 10 mm/swapfile.c | 14 net/atm/mpoa_proc.c | 17 net/atm/proc.c | 8 net/core/dev.c | 2 net/core/filter.c | 2 net/core/pktgen.c | 44 - net/ipv4/ipconfig.c | 10 net/ipv4/netfilter/ipt_CLUSTERIP.c | 16 net/ipv4/route.c | 24 - net/netfilter/xt_recent.c | 17 net/sunrpc/auth_gss/svcauth_gss.c | 10 net/sunrpc/cache.c | 45 - net/sunrpc/stats.c | 21 net/xfrm/xfrm_policy.c | 2 samples/kfifo/bytestream-example.c | 11 samples/kfifo/inttype-example.c | 11 samples/kfifo/record-example.c | 11 scripts/coccinelle/free/devm_free.cocci | 4 sound/core/info.c | 34 - sound/soc/codecs/ak4104.c | 3 sound/soc/codecs/cs4270.c | 3 sound/soc/codecs/tlv320aic32x4.c | 6 sound/soc/sunxi/sun4i-spdif.c | 2 tools/include/linux/bitops.h | 9 214 files changed, 2589 insertions(+), 2227 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming
  2020-02-04  1:33 incoming Andrew Morton
@ 2020-02-04  2:27 ` Linus Torvalds
  2020-02-04  2:46 ` incoming Andrew Morton
  0 siblings, 1 reply; 349+ messages in thread

From: Linus Torvalds @ 2020-02-04 2:27 UTC (permalink / raw)
To: Andrew Morton; +Cc: mm-commits, Linux-MM

On Tue, Feb 4, 2020 at 1:33 AM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> The rest of MM and the rest of everything else.

What's the base? You've changed your scripts or something, and that
information is no longer in your cover letter..

            Linus
* Re: incoming
  2020-02-04  2:27 ` incoming Linus Torvalds
@ 2020-02-04  2:46 ` Andrew Morton
  2020-02-04  3:11 ` incoming Linus Torvalds
  0 siblings, 1 reply; 349+ messages in thread

From: Andrew Morton @ 2020-02-04 2:46 UTC (permalink / raw)
To: Linus Torvalds; +Cc: mm-commits, Linux-MM

On Tue, 4 Feb 2020 02:27:48 +0000 Linus Torvalds <torvalds@linux-foundation.org> wrote:

> On Tue, Feb 4, 2020 at 1:33 AM Andrew Morton <akpm@linux-foundation.org> wrote:
> >
> > The rest of MM and the rest of everything else.
>
> What's the base? You've changed your scripts or something, and that
> information is no longer in your cover letter..

Crap, sorry, geriatric.

d4e9056daedca3891414fe3c91de3449a5dad0f2
* Re: incoming
  2020-02-04  2:46 ` incoming Andrew Morton
@ 2020-02-04  3:11 ` Linus Torvalds
  0 siblings, 0 replies; 349+ messages in thread

From: Linus Torvalds @ 2020-02-04 3:11 UTC (permalink / raw)
To: Andrew Morton; +Cc: mm-commits, Linux-MM

On Tue, Feb 4, 2020 at 2:46 AM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Tue, 4 Feb 2020 02:27:48 +0000 Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
> > What's the base? You've changed your scripts or something, and that
> > information is no longer in your cover letter..
>
> Crap, sorry, geriatric.
>
> d4e9056daedca3891414fe3c91de3449a5dad0f2

Ok, I've tentatively applied it with the MIME decoding fixes I found,
and I guess I'll let it build and sit for a while before merging it
into my tree.

I didn't find anything else odd in there. But...

            Linus
* incoming
@ 2020-01-31  6:10 Andrew Morton
  0 siblings, 0 replies; 349+ messages in thread

From: Andrew Morton @ 2020-01-31 6:10 UTC (permalink / raw)
To: Linus Torvalds; +Cc: linux-mm, mm-commits

Most of -mm and quite a number of other subsystems. MM is fairly quiet
this time. Holidays, I assume.

119 patches, based on 39bed42de2e7d74686a2d5a45638d6a5d7e7d473:

Subsystems affected by this patch series:
  hotfixes scripts ocfs2 mm/slub mm/kmemleak mm/debug mm/pagecache
  mm/gup mm/swap mm/memcg mm/pagemap mm/tracing mm/kasan
  mm/initialization mm/pagealloc mm/vmscan mm/tools mm/memblock
  mm/oom-kill mm/hugetlb mm/migration mm/mmap mm/memory-hotplug
  mm/zswap mm/cleanups mm/zram misc lib binfmt init reiserfs exec
  dma-mapping kcov

Subsystem: hotfixes

Andy Shevchenko <andriy.shevchenko@linux.intel.com>:
  lib/test_bitmap: correct test data offsets for 32-bit

"Theodore Ts'o" <tytso@mit.edu>:
  memcg: fix a crash in wb_workfn when a device disappears

Dan Carpenter <dan.carpenter@oracle.com>:
  mm/mempolicy.c: fix out of bounds write in mpol_parse_str()

Pingfan Liu <kernelfans@gmail.com>:
  mm/sparse.c: reset section's mem_map when fully deactivated

Wei Yang <richardw.yang@linux.intel.com>:
  mm/migrate.c: also overwrite error when it is bigger than zero

Dan Williams <dan.j.williams@intel.com>:
  mm/memory_hotplug: fix remove_memory() lockdep splat

Wei Yang <richardw.yang@linux.intel.com>:
  mm: thp: don't need care deferred split queue in memcg charge move path

Yang Shi <yang.shi@linux.alibaba.com>:
  mm: move_pages: report the number of non-attempted pages

Subsystem: scripts

Xiong <xndchn@gmail.com>:
  scripts/spelling.txt: add more spellings to spelling.txt

Luca Ceresoli <luca@lucaceresoli.net>:
  scripts/spelling.txt: add "issus" typo

Subsystem: ocfs2

Aditya Pakki <pakki001@umn.edu>:
  fs: ocfs: remove unnecessary assertion in dlm_migrate_lockres

zhengbin <zhengbin13@huawei.com>:
  ocfs2: remove unneeded semicolons

Masahiro Yamada <masahiroy@kernel.org>:
  ocfs2: make local header paths relative to C files

Colin Ian King <colin.king@canonical.com>:
  ocfs2/dlm: remove redundant assignment to ret

Andy Shevchenko <andriy.shevchenko@linux.intel.com>:
  ocfs2/dlm: move BITS_TO_BYTES() to bitops.h for wider use

wangyan <wangyan122@huawei.com>:
  ocfs2: fix a NULL pointer dereference when call ocfs2_update_inode_fsync_trans()
  ocfs2: use ocfs2_update_inode_fsync_trans() to access t_tid in handle->h_transaction

Subsystem: mm/slub

Yu Zhao <yuzhao@google.com>:
  mm/slub.c: avoid slub allocation while holding list_lock

Subsystem: mm/kmemleak

He Zhe <zhe.he@windriver.com>:
  mm/kmemleak: turn kmemleak_lock and object->lock to raw_spinlock_t

Subsystem: mm/debug

Vlastimil Babka <vbabka@suse.cz>:
  mm/debug.c: always print flags in dump_page()

Subsystem: mm/pagecache

Ira Weiny <ira.weiny@intel.com>:
  mm/filemap.c: clean up filemap_write_and_wait()

Subsystem: mm/gup

Qiujun Huang <hqjagain@gmail.com>:
  mm: fix gup_pud_range

Wei Yang <richardw.yang@linux.intel.com>:
  mm/gup.c: use is_vm_hugetlb_page() to check whether to follow huge

John Hubbard <jhubbard@nvidia.com>:
  Patch series "mm/gup: prereqs to track dma-pinned pages: FOLL_PIN", v12:
    mm/gup: factor out duplicate code from four routines
    mm/gup: move try_get_compound_head() to top, fix minor issues

Dan Williams <dan.j.williams@intel.com>:
  mm: Cleanup __put_devmap_managed_page() vs ->page_free()

John Hubbard <jhubbard@nvidia.com>:
  mm: devmap: refactor 1-based refcounting for ZONE_DEVICE pages
  goldish_pipe: rename local pin_user_pages() routine
  mm: fix get_user_pages_remote()'s handling of FOLL_LONGTERM
  vfio: fix FOLL_LONGTERM use, simplify get_user_pages_remote() call
  mm/gup: allow FOLL_FORCE for get_user_pages_fast()
  IB/umem: use get_user_pages_fast() to pin DMA pages
  media/v4l2-core: set pages dirty upon releasing DMA buffers
  mm/gup: introduce pin_user_pages*() and FOLL_PIN
  goldish_pipe: convert to pin_user_pages() and put_user_page()
  IB/{core,hw,umem}: set FOLL_PIN via pin_user_pages*(), fix up ODP
  mm/process_vm_access: set FOLL_PIN via
pin_user_pages_remote() drm/via: set FOLL_PIN via pin_user_pages_fast() fs/io_uring: set FOLL_PIN via pin_user_pages() net/xdp: set FOLL_PIN via pin_user_pages() media/v4l2-core: pin_user_pages (FOLL_PIN) and put_user_page() conversion vfio, mm: pin_user_pages (FOLL_PIN) and put_user_page() conversion powerpc: book3s64: convert to pin_user_pages() and put_user_page() mm/gup_benchmark: use proper FOLL_WRITE flags instead of hard-coding "1" mm, tree-wide: rename put_user_page*() to unpin_user_page*() Subsystem: mm/swap Vasily Averin <vvs@virtuozzo.com>: mm/swapfile.c: swap_next should increase position index Subsystem: mm/memcg Kaitao Cheng <pilgrimtao@gmail.com>: mm/memcontrol.c: cleanup some useless code Subsystem: mm/pagemap Li Xinhai <lixinhai.lxh@gmail.com>: mm/page_vma_mapped.c: explicitly compare pfn for normal, hugetlbfs and THP page Subsystem: mm/tracing Junyong Sun <sunjy516@gmail.com>: mm, tracing: print symbol name for kmem_alloc_node call_site events Subsystem: mm/kasan "Gustavo A. R. Silva" <gustavo@embeddedor.com>: lib/test_kasan.c: fix memory leak in kmalloc_oob_krealloc_more() Subsystem: mm/initialization Andy Shevchenko <andriy.shevchenko@linux.intel.com>: mm/early_ioremap.c: use %pa to print resource_size_t variables Subsystem: mm/pagealloc "Kirill A. 
Shutemov" <kirill@shutemov.name>: mm/page_alloc: skip non present sections on zone initialization David Hildenbrand <david@redhat.com>: mm: remove the memory isolate notifier mm: remove "count" parameter from has_unmovable_pages() Subsystem: mm/vmscan Liu Song <liu.song11@zte.com.cn>: mm/vmscan.c: remove unused return value of shrink_node Alex Shi <alex.shi@linux.alibaba.com>: mm/vmscan: remove prefetch_prev_lru_page mm/vmscan: remove unused RECLAIM_OFF/RECLAIM_ZONE Subsystem: mm/tools Daniel Wagner <dwagner@suse.de>: tools/vm/slabinfo: fix sanity checks enabling Subsystem: mm/memblock Anshuman Khandual <anshuman.khandual@arm.com>: mm/memblock: define memblock_physmem_add() memblock: Use __func__ in remaining memblock_dbg() call sites Subsystem: mm/oom-kill David Rientjes <rientjes@google.com>: mm, oom: dump stack of victim when reaping failed Subsystem: mm/hugetlb Wei Yang <richardw.yang@linux.intel.com>: mm/huge_memory.c: use head to check huge zero page mm/huge_memory.c: use head to emphasize the purpose of page mm/huge_memory.c: reduce critical section protected by split_queue_lock Subsystem: mm/migration Ralph Campbell <rcampbell@nvidia.com>: mm/migrate: remove useless mask of start address mm/migrate: clean up some minor coding style mm/migrate: add stable check in migrate_vma_insert_page() David Rientjes <rientjes@google.com>: mm, thp: fix defrag setting if newline is not used Subsystem: mm/mmap Miaohe Lin <linmiaohe@huawei.com>: mm/mmap.c: get rid of odd jump labels in find_mergeable_anon_vma() Subsystem: mm/memory-hotplug David Hildenbrand <david@redhat.com>: Patch series "mm/memory_hotplug: pass in nid to online_pages()": mm/memory_hotplug: pass in nid to online_pages() Qian Cai <cai@lca.pw>: mm/hotplug: silence a lockdep splat with printk() mm/page_isolation: fix potential warning from user Subsystem: mm/zswap Vitaly Wool <vitaly.wool@konsulko.com>: mm/zswap.c: add allocation hysteresis if pool limit is hit Dan Carpenter <dan.carpenter@oracle.com>: 
zswap: potential NULL dereference on error in init_zswap() Subsystem: mm/cleanups Yu Zhao <yuzhao@google.com>: include/linux/mm.h: clean up obsolete check on space in page->flags Wei Yang <richardw.yang@linux.intel.com>: include/linux/mm.h: remove dead code totalram_pages_set() Anshuman Khandual <anshuman.khandual@arm.com>: include/linux/memory.h: drop fields 'hw' and 'phys_callback' from struct memory_block Hao Lee <haolee.swjtu@gmail.com>: mm: fix comments related to node reclaim Subsystem: mm/zram Taejoon Song <taejoon.song@lge.com>: zram: try to avoid worst-case scenario on same element pages Colin Ian King <colin.king@canonical.com>: drivers/block/zram/zram_drv.c: fix error return codes not being returned in writeback_store Subsystem: misc Akinobu Mita <akinobu.mita@gmail.com>: Patch series "add header file for kelvin to/from Celsius conversion: include/linux/units.h: add helpers for kelvin to/from Celsius conversion ACPI: thermal: switch to use <linux/units.h> helpers platform/x86: asus-wmi: switch to use <linux/units.h> helpers platform/x86: intel_menlow: switch to use <linux/units.h> helpers thermal: int340x: switch to use <linux/units.h> helpers thermal: intel_pch: switch to use <linux/units.h> helpers nvme: hwmon: switch to use <linux/units.h> helpers thermal: remove kelvin to/from Celsius conversion helpers from <linux/thermal.h> iwlegacy: use <linux/units.h> helpers iwlwifi: use <linux/units.h> helpers thermal: armada: remove unused TO_MCELSIUS macro iio: adc: qcom-vadc-common: use <linux/units.h> helpers Subsystem: lib Mikhail Zaslonko <zaslonko@linux.ibm.com>: Patch series "S390 hardware support for kernel zlib", v3: lib/zlib: add s390 hardware support for kernel zlib_deflate s390/boot: rename HEAP_SIZE due to name collision lib/zlib: add s390 hardware support for kernel zlib_inflate s390/boot: add dfltcc= kernel command line parameter lib/zlib: add zlib_deflate_dfltcc_enabled() function btrfs: use larger zlib buffer for s390 hardware compression 
Nathan Chancellor <natechancellor@gmail.com>: lib/scatterlist.c: adjust indentation in __sg_alloc_table Yury Norov <yury.norov@gmail.com>: uapi: rename ext2_swab() to swab() and share globally in swab.h lib/find_bit.c: join _find_next_bit{_le} lib/find_bit.c: uninline helper _find_next_bit() Subsystem: binfmt Alexey Dobriyan <adobriyan@gmail.com>: fs/binfmt_elf.c: smaller code generation around auxv vector fill fs/binfmt_elf.c: fix ->start_code calculation fs/binfmt_elf.c: don't copy ELF header around fs/binfmt_elf.c: better codegen around current->mm fs/binfmt_elf.c: make BAD_ADDR() unlikely fs/binfmt_elf.c: coredump: allocate core ELF header on stack fs/binfmt_elf.c: coredump: delete duplicated overflow check fs/binfmt_elf.c: coredump: allow process with empty address space to coredump Subsystem: init Arvind Sankar <nivedita@alum.mit.edu>: init/main.c: log arguments and environment passed to init init/main.c: remove unnecessary repair_env_string in do_initcall_level Patch series "init/main.c: minor cleanup/bugfix of envvar handling", v2: init/main.c: fix quoted value handling in unknown_bootoption Christophe Leroy <christophe.leroy@c-s.fr>: init/main.c: fix misleading "This architecture does not have kernel memory protection" message Subsystem: reiserfs Yunfeng Ye <yeyunfeng@huawei.com>: reiserfs: prevent NULL pointer dereference in reiserfs_insert_item() Subsystem: exec Alexey Dobriyan <adobriyan@gmail.com>: execve: warn if process starts with executable stack Subsystem: dma-mapping Andy Shevchenko <andriy.shevchenko@linux.intel.com>: include/linux/io-mapping.h-mapping: use PHYS_PFN() macro in io_mapping_map_atomic_wc() Subsystem: kcov Dmitry Vyukov <dvyukov@google.com>: kcov: ignore fault-inject and stacktrace Documentation/admin-guide/kernel-parameters.txt | 12 Documentation/core-api/index.rst | 1 Documentation/core-api/pin_user_pages.rst | 234 +++++ Documentation/vm/zswap.rst | 13 arch/powerpc/mm/book3s64/iommu_api.c | 14 
arch/s390/boot/compressed/decompressor.c | 8 arch/s390/boot/ipl_parm.c | 14 arch/s390/include/asm/setup.h | 7 arch/s390/kernel/setup.c | 14 drivers/acpi/thermal.c | 34 drivers/base/memory.c | 25 drivers/block/zram/zram_drv.c | 10 drivers/gpu/drm/via/via_dmablit.c | 6 drivers/iio/adc/qcom-vadc-common.c | 6 drivers/iio/adc/qcom-vadc-common.h | 1 drivers/infiniband/core/umem.c | 21 drivers/infiniband/core/umem_odp.c | 13 drivers/infiniband/hw/hfi1/user_pages.c | 4 drivers/infiniband/hw/mthca/mthca_memfree.c | 8 drivers/infiniband/hw/qib/qib_user_pages.c | 4 drivers/infiniband/hw/qib/qib_user_sdma.c | 8 drivers/infiniband/hw/usnic/usnic_uiom.c | 4 drivers/infiniband/sw/siw/siw_mem.c | 4 drivers/media/v4l2-core/videobuf-dma-sg.c | 20 drivers/net/ethernet/broadcom/bnx2x/bnx2x_init.h | 1 drivers/net/wireless/intel/iwlegacy/4965-mac.c | 3 drivers/net/wireless/intel/iwlegacy/4965.c | 17 drivers/net/wireless/intel/iwlegacy/common.h | 3 drivers/net/wireless/intel/iwlwifi/dvm/dev.h | 5 drivers/net/wireless/intel/iwlwifi/dvm/devices.c | 6 drivers/nvdimm/pmem.c | 6 drivers/nvme/host/hwmon.c | 13 drivers/platform/goldfish/goldfish_pipe.c | 39 drivers/platform/x86/asus-wmi.c | 7 drivers/platform/x86/intel_menlow.c | 9 drivers/thermal/armada_thermal.c | 2 drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.c | 7 drivers/thermal/intel/intel_pch_thermal.c | 3 drivers/vfio/vfio_iommu_type1.c | 39 fs/binfmt_elf.c | 154 +-- fs/btrfs/compression.c | 2 fs/btrfs/zlib.c | 135 ++ fs/exec.c | 5 fs/fs-writeback.c | 2 fs/io_uring.c | 6 fs/ocfs2/cluster/quorum.c | 2 fs/ocfs2/dlm/Makefile | 2 fs/ocfs2/dlm/dlmast.c | 8 fs/ocfs2/dlm/dlmcommon.h | 4 fs/ocfs2/dlm/dlmconvert.c | 8 fs/ocfs2/dlm/dlmdebug.c | 8 fs/ocfs2/dlm/dlmdomain.c | 8 fs/ocfs2/dlm/dlmlock.c | 8 fs/ocfs2/dlm/dlmmaster.c | 10 fs/ocfs2/dlm/dlmrecovery.c | 10 fs/ocfs2/dlm/dlmthread.c | 8 fs/ocfs2/dlm/dlmunlock.c | 8 fs/ocfs2/dlmfs/Makefile | 2 fs/ocfs2/dlmfs/dlmfs.c | 4 fs/ocfs2/dlmfs/userdlm.c | 6 fs/ocfs2/dlmglue.c | 2 
fs/ocfs2/journal.h | 8 fs/ocfs2/namei.c | 3 fs/reiserfs/stree.c | 3 include/linux/backing-dev.h | 10 include/linux/bitops.h | 1 include/linux/fs.h | 6 include/linux/io-mapping.h | 5 include/linux/memblock.h | 7 include/linux/memory.h | 29 include/linux/memory_hotplug.h | 3 include/linux/mm.h | 116 +- include/linux/mmzone.h | 2 include/linux/page-isolation.h | 8 include/linux/swab.h | 1 include/linux/thermal.h | 11 include/linux/units.h | 84 + include/linux/zlib.h | 6 include/trace/events/kmem.h | 4 include/trace/events/writeback.h | 37 include/uapi/linux/swab.h | 10 include/uapi/linux/sysctl.h | 2 init/main.c | 36 kernel/Makefile | 1 lib/Kconfig | 7 lib/Makefile | 2 lib/decompress_inflate.c | 13 lib/find_bit.c | 82 - lib/scatterlist.c | 2 lib/test_bitmap.c | 9 lib/test_kasan.c | 1 lib/zlib_deflate/deflate.c | 85 + lib/zlib_deflate/deflate_syms.c | 1 lib/zlib_deflate/deftree.c | 54 - lib/zlib_deflate/defutil.h | 134 ++ lib/zlib_dfltcc/Makefile | 13 lib/zlib_dfltcc/dfltcc.c | 57 + lib/zlib_dfltcc/dfltcc.h | 155 +++ lib/zlib_dfltcc/dfltcc_deflate.c | 280 ++++++ lib/zlib_dfltcc/dfltcc_inflate.c | 149 +++ lib/zlib_dfltcc/dfltcc_syms.c | 17 lib/zlib_dfltcc/dfltcc_util.h | 123 ++ lib/zlib_inflate/inflate.c | 32 lib/zlib_inflate/inflate.h | 8 lib/zlib_inflate/infutil.h | 18 mm/Makefile | 1 mm/backing-dev.c | 1 mm/debug.c | 18 mm/early_ioremap.c | 8 mm/filemap.c | 34 mm/gup.c | 503 ++++++----- mm/gup_benchmark.c | 9 mm/huge_memory.c | 44 mm/kmemleak.c | 112 +- mm/memblock.c | 22 mm/memcontrol.c | 25 mm/memory_hotplug.c | 24 mm/mempolicy.c | 6 mm/memremap.c | 95 -- mm/migrate.c | 77 + mm/mmap.c | 30 mm/oom_kill.c | 2 mm/page_alloc.c | 83 + mm/page_isolation.c | 69 - mm/page_vma_mapped.c | 12 mm/process_vm_access.c | 32 mm/slub.c | 88 + mm/sparse.c | 2 mm/swap.c | 27 mm/swapfile.c | 2 mm/vmscan.c | 24 mm/zswap.c | 88 + net/xdp/xdp_umem.c | 4 scripts/spelling.txt | 14 tools/testing/selftests/vm/gup_benchmark.c | 6 tools/vm/slabinfo.c | 4 136 files changed, 2790 insertions(+), 
1358 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2020-01-14 0:28 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-01-14 0:28 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits 11 MM fixes, based on b3a987b0264d3ddbb24293ebff10eddfc472f653: Vlastimil Babka <vbabka@suse.cz>: mm, thp: tweak reclaim/compaction effort of local-only and all-node allocations David Hildenbrand <david@redhat.com>: mm/memory_hotplug: don't free usage map when removing a re-added early section "Kirill A. Shutemov" <kirill@shutemov.name>: Patch series "Fix two above-47bit hint address vs. THP bugs": mm/huge_memory.c: thp: fix conflict of above-47bit hint address and PMD alignment mm/shmem.c: thp, shmem: fix conflict of above-47bit hint address and PMD alignment Roman Gushchin <guro@fb.com>: mm: memcg/slab: fix percpu slab vmstats flushing Vlastimil Babka <vbabka@suse.cz>: mm, debug_pagealloc: don't rely on static keys too early Wen Yang <wenyang@linux.alibaba.com>: Patch series "use div64_ul() instead of div_u64() if the divisor is: mm/page-writeback.c: avoid potential division by zero in wb_min_max_ratio() mm/page-writeback.c: use div64_ul() for u64-by-unsigned-long divide mm/page-writeback.c: improve arithmetic divisions Adrian Huang <ahuang12@lenovo.com>: mm: memcg/slab: call flush_memcg_workqueue() only if memcg workqueue is valid Yang Shi <yang.shi@linux.alibaba.com>: mm: khugepaged: add trace status description for SCAN_PAGE_HAS_PRIVATE include/linux/mm.h | 18 +++++++++- include/linux/mmzone.h | 5 +-- include/trace/events/huge_memory.h | 3 + init/main.c | 1 mm/huge_memory.c | 38 ++++++++++++++--------- mm/memcontrol.c | 37 +++++----------------- mm/mempolicy.c | 10 ++++-- mm/page-writeback.c | 10 +++--- mm/page_alloc.c | 61 ++++++++++--------------------------- mm/shmem.c | 7 ++-- mm/slab.c | 4 +- mm/slab_common.c | 3 + mm/slub.c | 2 - mm/sparse.c | 9 ++++- mm/vmalloc.c | 4 +- 15 files changed, 102 insertions(+), 110 deletions(-) ^ permalink raw reply [flat|nested] 
349+ messages in thread
* incoming @ 2020-01-04 20:55 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2020-01-04 20:55 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm 17 fixes, based on 5613970af3f5f8372c596b138bd64f3918513515: David Hildenbrand <david@redhat.com>: mm/memory_hotplug: shrink zones when offlining memory Chanho Min <chanho.min@lge.com>: mm/zsmalloc.c: fix the migrated zspage statistics. Andrey Konovalov <andreyknvl@google.com>: kcov: fix struct layout for kcov_remote_arg Shakeel Butt <shakeelb@google.com>: memcg: account security cred as well to kmemcg Yang Shi <yang.shi@linux.alibaba.com>: mm: move_pages: return valid node id in status if the page is already on the target node Eric Biggers <ebiggers@google.com>: fs/direct-io.c: include fs/internal.h for missing prototype fs/nsfs.c: include headers for missing declarations fs/namespace.c: make to_mnt_ns() static Nick Desaulniers <ndesaulniers@google.com>: hexagon: parenthesize registers in asm predicates hexagon: work around compiler crash Randy Dunlap <rdunlap@infradead.org>: fs/posix_acl.c: fix kernel-doc warnings Ilya Dryomov <idryomov@gmail.com>: mm/oom: fix pgtables units mismatch in Killed process message Navid Emamdoost <navid.emamdoost@gmail.com>: mm/gup: fix memory leak in __gup_benchmark_ioctl Waiman Long <longman@redhat.com>: mm/hugetlb: defer freeing of huge pages if in non-task context Kai Li <li.kai4@h3c.com>: ocfs2: call journal flush to mark journal as empty after journal recovery when mount Gang He <GHe@suse.com>: ocfs2: fix the crash due to call ocfs2_get_dlm_debug once less Nick Desaulniers <ndesaulniers@google.com>: hexagon: define ioremap_uc Documentation/dev-tools/kcov.rst | 10 +++---- arch/arm64/mm/mmu.c | 4 -- arch/hexagon/include/asm/atomic.h | 8 ++--- arch/hexagon/include/asm/bitops.h | 8 ++--- arch/hexagon/include/asm/cmpxchg.h | 2 - arch/hexagon/include/asm/futex.h | 6 ++-- arch/hexagon/include/asm/io.h | 1 arch/hexagon/include/asm/spinlock.h | 
20 +++++++------- arch/hexagon/kernel/stacktrace.c | 4 -- arch/hexagon/kernel/vm_entry.S | 2 - arch/ia64/mm/init.c | 4 -- arch/powerpc/mm/mem.c | 3 -- arch/s390/mm/init.c | 4 -- arch/sh/mm/init.c | 4 -- arch/x86/mm/init_32.c | 4 -- arch/x86/mm/init_64.c | 4 -- fs/direct-io.c | 2 + fs/namespace.c | 2 - fs/nsfs.c | 3 ++ fs/ocfs2/dlmglue.c | 1 fs/ocfs2/journal.c | 8 +++++ fs/posix_acl.c | 7 +++- include/linux/memory_hotplug.h | 7 +++- include/uapi/linux/kcov.h | 10 +++---- kernel/cred.c | 6 ++-- mm/gup_benchmark.c | 8 ++++- mm/hugetlb.c | 51 +++++++++++++++++++++++++++++++++++- mm/memory_hotplug.c | 31 +++++++++++---------- mm/memremap.c | 2 - mm/migrate.c | 23 ++++++++++++---- mm/oom_kill.c | 2 - mm/zsmalloc.c | 5 +++ 32 files changed, 166 insertions(+), 90 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2019-12-18 4:50 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2019-12-18 4:50 UTC (permalink / raw) To: Linus Torvalds; +Cc: linux-mm, mm-commits 6 fixes based on 2187f215ebaac73ddbd814696d7c7fa34f0c3de0: Andrey Ryabinin <aryabinin@virtuozzo.com>: kasan: fix crashes on access to memory mapped by vm_map_ram() Daniel Axtens <dja@axtens.net>: mm/memory.c: add apply_to_existing_page_range() helper kasan: use apply_to_existing_page_range() for releasing vmalloc shadow kasan: don't assume percpu shadow allocations will succeed Yang Shi <yang.shi@linux.alibaba.com>: mm: vmscan: protect shrinker idr replace with CONFIG_MEMCG Changbin Du <changbin.du@gmail.com>: lib/Kconfig.debug: fix some messed up configurations include/linux/kasan.h | 15 +++-- include/linux/mm.h | 3 + lib/Kconfig.debug | 100 ++++++++++++++++++------------------ mm/kasan/common.c | 36 ++++++++----- mm/memory.c | 136 ++++++++++++++++++++++++++++++++++---------------- mm/vmalloc.c | 133 ++++++++++++++++++++++++++++-------------------- mm/vmscan.c | 2 7 files changed, 260 insertions(+), 165 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2019-12-05 0:48 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2019-12-05 0:48 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm Most of the rest of MM and various other things. Some Kconfig rework still awaits merges of dependent trees from linux-next. 86 patches, based on 63de37476ebd1e9bab6a9e17186dc5aa1da9ea99. Subsystems affected by this patch series: mm/hotfixes mm/memcg mm/vmstat mm/thp procfs sysctl misc notifiers core-kernel bitops lib checkpatch epoll binfmt init rapidio uaccess kcov ubsan ipc bitmap mm/pagemap Subsystem: mm/hotfixes zhong jiang <zhongjiang@huawei.com>: mm/kasan/common.c: fix compile error Subsystem: mm/memcg Roman Gushchin <guro@fb.com>: mm: memcg/slab: wait for !root kmem_cache refcnt killing on root kmem_cache destruction Subsystem: mm/vmstat Konstantin Khlebnikov <khlebnikov@yandex-team.ru>: mm/vmstat: add helpers to get vmstat item names for each enum type mm/memcontrol: use vmstat names for printing statistics Subsystem: mm/thp Yu Zhao <yuzhao@google.com>: mm/memory.c: replace is_zero_pfn with is_huge_zero_pmd for thp Subsystem: procfs Alexey Dobriyan <adobriyan@gmail.com>: proc: change ->nlink under proc_subdir_lock fs/proc/generic.c: delete useless "len" variable fs/proc/internal.h: shuffle "struct pde_opener" Miaohe Lin <linmiaohe@huawei.com>: include/linux/proc_fs.h: fix confusing macro arg name Krzysztof Kozlowski <krzk@kernel.org>: fs/proc/Kconfig: fix indentation Subsystem: sysctl Alessio Balsini <balsini@android.com>: include/linux/sysctl.h: inline braces for ctl_table and ctl_table_header Subsystem: misc Stephen Boyd <swboyd@chromium.org>: .gitattributes: use 'dts' diff driver for dts files Rikard Falkeborn <rikard.falkeborn@gmail.com>: linux/build_bug.h: change type to int Masahiro Yamada <yamada.masahiro@socionext.com>: linux/scc.h: make uapi linux/scc.h self-contained Krzysztof Kozlowski <krzk@kernel.org>: arch/Kconfig: fix indentation Joe Perches 
<joe@perches.com>: scripts/get_maintainer.pl: add signatures from Fixes: <badcommit> lines in commit message Andy Shevchenko <andriy.shevchenko@linux.intel.com>: kernel.h: update comment about simple_strto<foo>() functions auxdisplay: charlcd: deduplicate simple_strtoul() Subsystem: notifiers Xiaoming Ni <nixiaoming@huawei.com>: kernel/notifier.c: intercept duplicate registrations to avoid infinite loops kernel/notifier.c: remove notifier_chain_cond_register() kernel/notifier.c: remove blocking_notifier_chain_cond_register() Subsystem: core-kernel Nathan Chancellor <natechancellor@gmail.com>: kernel/profile.c: use cpumask_available to check for NULL cpumask Joe Perches <joe@perches.com>: kernel/sys.c: avoid copying possible padding bytes in copy_to_user Subsystem: bitops William Breathitt Gray <vilhelm.gray@gmail.com>: bitops: introduce the for_each_set_clump8 macro lib/test_bitmap.c: add for_each_set_clump8 test cases gpio: 104-dio-48e: utilize for_each_set_clump8 macro gpio: 104-idi-48: utilize for_each_set_clump8 macro gpio: gpio-mm: utilize for_each_set_clump8 macro gpio: ws16c48: utilize for_each_set_clump8 macro gpio: pci-idio-16: utilize for_each_set_clump8 macro gpio: pcie-idio-24: utilize for_each_set_clump8 macro gpio: uniphier: utilize for_each_set_clump8 macro gpio: 74x164: utilize the for_each_set_clump8 macro thermal: intel: intel_soc_dts_iosf: Utilize for_each_set_clump8 macro gpio: pisosr: utilize the for_each_set_clump8 macro gpio: max3191x: utilize the for_each_set_clump8 macro gpio: pca953x: utilize the for_each_set_clump8 macro Subsystem: lib Wei Yang <richardw.yang@linux.intel.com>: lib/rbtree: set successor's parent unconditionally lib/rbtree: get successor's color directly Laura Abbott <labbott@redhat.com>: lib/test_meminit.c: add bulk alloc/free tests Trent Piepho <tpiepho@gmail.com>: lib/math/rational.c: fix possible incorrect result from rational fractions helper Huang Shijie <sjhuang@iluvatar.ai>: lib/genalloc.c: export symbol 
addr_in_gen_pool lib/genalloc.c: rename addr_in_gen_pool to gen_pool_has_addr Subsystem: checkpatch Joe Perches <joe@perches.com>: checkpatch: improve ignoring CamelCase SI style variants like mA checkpatch: reduce is_maintained_obsolete lookup runtime Subsystem: epoll Jason Baron <jbaron@akamai.com>: epoll: simplify ep_poll_safewake() for CONFIG_DEBUG_LOCK_ALLOC Heiher <r@hev.cc>: fs/epoll: remove unnecessary wakeups of nested epoll selftests: add epoll selftests Subsystem: binfmt Alexey Dobriyan <adobriyan@gmail.com>: fs/binfmt_elf.c: delete unused "interp_map_addr" argument fs/binfmt_elf.c: extract elf_read() function Subsystem: init Krzysztof Kozlowski <krzk@kernel.org>: init/Kconfig: fix indentation Subsystem: rapidio "Ben Dooks (Codethink)" <ben.dooks@codethink.co.uk>: drivers/rapidio/rio-driver.c: fix missing include of <linux/rio_drv.h> drivers/rapidio/rio-access.c: fix missing include of <linux/rio_drv.h> Subsystem: uaccess Daniel Vetter <daniel.vetter@ffwll.ch>: drm: limit to INT_MAX in create_blob ioctl Kees Cook <keescook@chromium.org>: uaccess: disallow > INT_MAX copy sizes Subsystem: kcov Andrey Konovalov <andreyknvl@google.com>: Patch series " kcov: collect coverage from usb and vhost", v3: kcov: remote coverage support usb, kcov: collect coverage from hub_event vhost, kcov: collect coverage from vhost_worker Subsystem: ubsan Julien Grall <julien.grall@arm.com>: lib/ubsan: don't serialize UBSAN report Subsystem: ipc Masahiro Yamada <yamada.masahiro@socionext.com>: arch: ipcbuf.h: make uapi asm/ipcbuf.h self-contained arch: msgbuf.h: make uapi asm/msgbuf.h self-contained arch: sembuf.h: make uapi asm/sembuf.h self-contained Subsystem: bitmap Andy Shevchenko <andriy.shevchenko@linux.intel.com>: Patch series "gpio: pca953x: Convert to bitmap (extended) API", v2: lib/test_bitmap: force argument of bitmap_parselist_user() to proper address space lib/test_bitmap: undefine macros after use lib/test_bitmap: name EXP_BYTES properly lib/test_bitmap: rename exp 
to exp1 to avoid ambiguous name lib/test_bitmap: move exp1 and exp2 upper for others to use lib/test_bitmap: fix comment about this file lib/bitmap: introduce bitmap_replace() helper gpio: pca953x: remove redundant variable and check in IRQ handler gpio: pca953x: use input from regs structure in pca953x_irq_pending() gpio: pca953x: convert to use bitmap API gpio: pca953x: tighten up indentation Subsystem: mm/pagemap Mike Rapoport <rppt@linux.ibm.com>: Patch series "mm: remove __ARCH_HAS_4LEVEL_HACK", v13: alpha: use pgtable-nopud instead of 4level-fixup arm: nommu: use pgtable-nopud instead of 4level-fixup c6x: use pgtable-nopud instead of 4level-fixup m68k: nommu: use pgtable-nopud instead of 4level-fixup m68k: mm: use pgtable-nopXd instead of 4level-fixup microblaze: use pgtable-nopmd instead of 4level-fixup nds32: use pgtable-nopmd instead of 4level-fixup parisc: use pgtable-nopXd instead of 4level-fixup Helge Deller <deller@gmx.de>: parisc/hugetlb: use pgtable-nopXd instead of 4level-fixup Mike Rapoport <rppt@linux.ibm.com>: sparc32: use pgtable-nopud instead of 4level-fixup um: remove unused pxx_offset_proc() and addr_pte() functions um: add support for folded p4d page tables mm: remove __ARCH_HAS_4LEVEL_HACK and include/asm-generic/4level-fixup.h .gitattributes | 2 Documentation/core-api/genalloc.rst | 2 Documentation/dev-tools/kcov.rst | 129 arch/Kconfig | 22 arch/alpha/include/asm/mmzone.h | 1 arch/alpha/include/asm/pgalloc.h | 4 arch/alpha/include/asm/pgtable.h | 24 arch/alpha/mm/init.c | 12 arch/arm/include/asm/pgtable.h | 2 arch/arm/mm/dma-mapping.c | 2 arch/c6x/include/asm/pgtable.h | 2 arch/m68k/include/asm/mcf_pgalloc.h | 7 arch/m68k/include/asm/mcf_pgtable.h | 28 arch/m68k/include/asm/mmu_context.h | 12 arch/m68k/include/asm/motorola_pgalloc.h | 4 arch/m68k/include/asm/motorola_pgtable.h | 32 arch/m68k/include/asm/page.h | 9 arch/m68k/include/asm/pgtable_mm.h | 11 arch/m68k/include/asm/pgtable_no.h | 2 arch/m68k/include/asm/sun3_pgalloc.h | 5 
arch/m68k/include/asm/sun3_pgtable.h | 18 arch/m68k/kernel/sys_m68k.c | 10 arch/m68k/mm/init.c | 6 arch/m68k/mm/kmap.c | 39 arch/m68k/mm/mcfmmu.c | 16 arch/m68k/mm/motorola.c | 17 arch/m68k/sun3x/dvma.c | 7 arch/microblaze/include/asm/page.h | 3 arch/microblaze/include/asm/pgalloc.h | 16 arch/microblaze/include/asm/pgtable.h | 32 arch/microblaze/kernel/signal.c | 10 arch/microblaze/mm/init.c | 7 arch/microblaze/mm/pgtable.c | 13 arch/mips/include/uapi/asm/msgbuf.h | 1 arch/mips/include/uapi/asm/sembuf.h | 2 arch/nds32/include/asm/page.h | 3 arch/nds32/include/asm/pgalloc.h | 3 arch/nds32/include/asm/pgtable.h | 12 arch/nds32/include/asm/tlb.h | 1 arch/nds32/kernel/pm.c | 4 arch/nds32/mm/fault.c | 16 arch/nds32/mm/init.c | 11 arch/nds32/mm/mm-nds32.c | 6 arch/nds32/mm/proc.c | 26 arch/parisc/include/asm/page.h | 30 arch/parisc/include/asm/pgalloc.h | 41 arch/parisc/include/asm/pgtable.h | 52 arch/parisc/include/asm/tlb.h | 2 arch/parisc/include/uapi/asm/msgbuf.h | 1 arch/parisc/include/uapi/asm/sembuf.h | 1 arch/parisc/kernel/cache.c | 13 arch/parisc/kernel/pci-dma.c | 9 arch/parisc/mm/fixmap.c | 10 arch/parisc/mm/hugetlbpage.c | 18 arch/powerpc/include/uapi/asm/msgbuf.h | 2 arch/powerpc/include/uapi/asm/sembuf.h | 2 arch/s390/include/uapi/asm/ipcbuf.h | 2 arch/sparc/include/asm/pgalloc_32.h | 6 arch/sparc/include/asm/pgtable_32.h | 28 arch/sparc/include/uapi/asm/ipcbuf.h | 2 arch/sparc/include/uapi/asm/msgbuf.h | 2 arch/sparc/include/uapi/asm/sembuf.h | 2 arch/sparc/mm/fault_32.c | 11 arch/sparc/mm/highmem.c | 6 arch/sparc/mm/io-unit.c | 6 arch/sparc/mm/iommu.c | 6 arch/sparc/mm/srmmu.c | 51 arch/um/include/asm/pgtable-2level.h | 1 arch/um/include/asm/pgtable-3level.h | 1 arch/um/include/asm/pgtable.h | 3 arch/um/kernel/mem.c | 8 arch/um/kernel/skas/mmu.c | 12 arch/um/kernel/skas/uaccess.c | 7 arch/um/kernel/tlb.c | 85 arch/um/kernel/trap.c | 4 arch/x86/include/uapi/asm/msgbuf.h | 3 arch/x86/include/uapi/asm/sembuf.h | 2 arch/xtensa/include/uapi/asm/ipcbuf.h | 2 
arch/xtensa/include/uapi/asm/msgbuf.h | 2 arch/xtensa/include/uapi/asm/sembuf.h | 1 drivers/auxdisplay/charlcd.c | 34 drivers/base/node.c | 9 drivers/gpio/gpio-104-dio-48e.c | 75 drivers/gpio/gpio-104-idi-48.c | 36 drivers/gpio/gpio-74x164.c | 19 drivers/gpio/gpio-gpio-mm.c | 75 drivers/gpio/gpio-max3191x.c | 19 drivers/gpio/gpio-pca953x.c | 209 drivers/gpio/gpio-pci-idio-16.c | 75 drivers/gpio/gpio-pcie-idio-24.c | 111 drivers/gpio/gpio-pisosr.c | 12 drivers/gpio/gpio-uniphier.c | 13 drivers/gpio/gpio-ws16c48.c | 73 drivers/gpu/drm/drm_property.c | 2 drivers/misc/sram-exec.c | 2 drivers/rapidio/rio-access.c | 2 drivers/rapidio/rio-driver.c | 1 drivers/thermal/intel/intel_soc_dts_iosf.c | 31 drivers/thermal/intel/intel_soc_dts_iosf.h | 2 drivers/usb/core/hub.c | 5 drivers/vhost/vhost.c | 6 drivers/vhost/vhost.h | 1 fs/binfmt_elf.c | 56 fs/eventpoll.c | 52 fs/proc/Kconfig | 8 fs/proc/generic.c | 37 fs/proc/internal.h | 2 include/asm-generic/4level-fixup.h | 39 include/asm-generic/bitops/find.h | 17 include/linux/bitmap.h | 51 include/linux/bitops.h | 12 include/linux/build_bug.h | 4 include/linux/genalloc.h | 2 include/linux/kcov.h | 23 include/linux/kernel.h | 19 include/linux/mm.h | 10 include/linux/notifier.h | 4 include/linux/proc_fs.h | 4 include/linux/rbtree_augmented.h | 6 include/linux/sched.h | 8 include/linux/sysctl.h | 6 include/linux/thread_info.h | 2 include/linux/vmstat.h | 54 include/uapi/asm-generic/ipcbuf.h | 2 include/uapi/asm-generic/msgbuf.h | 2 include/uapi/asm-generic/sembuf.h | 1 include/uapi/linux/kcov.h | 28 include/uapi/linux/scc.h | 1 init/Kconfig | 78 kernel/dma/remap.c | 2 kernel/kcov.c | 547 + kernel/notifier.c | 45 kernel/profile.c | 6 kernel/sys.c | 4 lib/bitmap.c | 12 lib/find_bit.c | 14 lib/genalloc.c | 7 lib/math/rational.c | 63 lib/test_bitmap.c | 206 lib/test_meminit.c | 20 lib/ubsan.c | 64 mm/kasan/common.c | 1 mm/memcontrol.c | 52 mm/memory.c | 10 mm/slab_common.c | 12 mm/vmstat.c | 60 net/sunrpc/rpc_pipe.c | 2 
scripts/checkpatch.pl | 13 scripts/get_maintainer.pl | 38 tools/testing/selftests/Makefile | 1 tools/testing/selftests/filesystems/epoll/.gitignore | 1 tools/testing/selftests/filesystems/epoll/Makefile | 7 tools/testing/selftests/filesystems/epoll/epoll_wakeup_test.c | 3074 ++++++++++ usr/include/Makefile | 4 154 files changed, 5270 insertions(+), 1360 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2019-12-01 1:47 Andrew Morton 2019-12-01 5:17 ` incoming James Bottomley 2019-12-01 21:07 ` incoming Linus Torvalds 0 siblings, 2 replies; 349+ messages in thread From: Andrew Morton @ 2019-12-01 1:47 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm - a small number of updates to scripts/, ocfs2 and fs/buffer.c - most of MM. I still have quite a lot of material (mostly not MM) staged after linux-next due to -next dependencies. I'll send those across next week as the prerequisites get merged up. 158 patches, based on 32ef9553635ab1236c33951a8bd9b5af1c3b1646. Subsystems affected by this patch series: scripts ocfs2 vfs mm/slab mm/slub mm/pagecache mm/gup mm/swap mm/memcg mm/pagemap mm/memfd mm/memory-failure mm/memory-hotplug mm/sparsemem mm/vmalloc mm/kasan mm/pagealloc mm/vmscan mm/proc mm/z3fold mm/mempolicy mm/memblock mm/hugetlbfs mm/hugetlb mm/migration mm/thp mm/cma mm/autonuma mm/page-poison mm/mmap mm/madvise mm/userfaultfd mm/shmem mm/cleanups mm/support Subsystem: scripts Colin Ian King <colin.king@canonical.com>: scripts/spelling.txt: add more spellings to spelling.txt Subsystem: ocfs2 Ding Xiang <dingxiang@cmss.chinamobile.com>: ocfs2: fix passing zero to 'PTR_ERR' warning Subsystem: vfs Saurav Girepunje <saurav.girepunje@gmail.com>: fs/buffer.c: fix use true/false for bool type Ben Dooks <ben.dooks@codethink.co.uk>: fs/buffer.c: include internal.h for missing declarations Subsystem: mm/slab Pengfei Li <lpf.vector@gmail.com>: Patch series "mm, slab: Make kmalloc_info[] contain all types of names", v6: mm, slab: make kmalloc_info[] contain all types of names mm, slab: remove unused kmalloc_size() mm, slab_common: use enum kmalloc_cache_type to iterate over kmalloc caches Subsystem: mm/slub Miles Chen <miles.chen@mediatek.com>: mm: slub: print the offset of fault addresses Yu Zhao <yuzhao@google.com>: mm/slub.c: update comments mm/slub.c: clean up validate_slab() Subsystem: mm/pagecache Konstantin Khlebnikov 
<khlebnikov@yandex-team.ru>: mm/filemap.c: remove redundant cache invalidation after async direct-io write fs/direct-io.c: keep dio_warn_stale_pagecache() when CONFIG_BLOCK=n mm/filemap.c: warn if stale pagecache is left after direct write Subsystem: mm/gup zhong jiang <zhongjiang@huawei.com>: mm/gup.c: allow CMA migration to propagate errors back to caller Liu Xiang <liuxiang_1999@126.com>: mm/gup.c: fix comments of __get_user_pages() and get_user_pages_remote() Subsystem: mm/swap Naohiro Aota <naohiro.aota@wdc.com>: mm, swap: disallow swapon() on zoned block devices Fengguang Wu <fengguang.wu@intel.com>: mm/swap.c: trivial mark_page_accessed() cleanup Subsystem: mm/memcg Yafang Shao <laoar.shao@gmail.com>: mm, memcg: clean up reclaim iter array Johannes Weiner <hannes@cmpxchg.org>: mm: memcontrol: remove dead code from memory_max_write() mm: memcontrol: try harder to set a new memory.high Hao Lee <haolee.swjtu@gmail.com>: include/linux/memcontrol.h: fix comments based on per-node memcg Shakeel Butt <shakeelb@google.com>: mm: vmscan: memcontrol: remove mem_cgroup_select_victim_node() Chris Down <chris@chrisdown.name>: Documentation/admin-guide/cgroup-v2.rst: document why inactive_X + active_X may not equal X Subsystem: mm/pagemap Johannes Weiner <hannes@cmpxchg.org>: mm: drop mmap_sem before calling balance_dirty_pages() in write fault "Kirill A. 
Shutemov" <kirill.shutemov@linux.intel.com>: shmem: pin the file in shmem_fault() if mmap_sem is dropped "Joel Fernandes (Google)" <joel@joelfernandes.org>: mm: emit tracepoint when RSS changes rss_stat: add support to detect RSS updates of external mm Wei Yang <richardw.yang@linux.intel.com>: mm/mmap.c: remove a never-triggered warning in __vma_adjust() Konstantin Khlebnikov <khlebnikov@yandex-team.ru>: mm/swap.c: piggyback lru_add_drain_all() calls Wei Yang <richardw.yang@linux.intel.com>: mm/mmap.c: prev could be retrieved from vma->vm_prev mm/mmap.c: __vma_unlink_prev() is not necessary now mm/mmap.c: extract __vma_unlink_list() as counterpart for __vma_link_list() mm/mmap.c: rb_parent is not necessary in __vma_link_list() mm/rmap.c: don't reuse anon_vma if we just want a copy mm/rmap.c: reuse mergeable anon_vma as parent when fork Gaowei Pu <pugaowei@gmail.com>: mm/mmap.c: use IS_ERR_VALUE to check return value of get_unmapped_area Vineet Gupta <Vineet.Gupta1@synopsys.com>: Patch series "elide extraneous generated code for folded p4d/pud/pmd", v3: ARC: mm: remove __ARCH_USE_5LEVEL_HACK asm-generic/tlb: stub out pud_free_tlb() if nopud ... asm-generic/tlb: stub out p4d_free_tlb() if nop4d ... 
asm-generic/tlb: stub out pmd_free_tlb() if nopmd asm-generic/mm: stub out p{4,u}d_clear_bad() if __PAGETABLE_P{4,U}D_FOLDED Miles Chen <miles.chen@mediatek.com>: mm/rmap.c: fix outdated comment in page_get_anon_vma() Yang Shi <yang.shi@linux.alibaba.com>: mm/rmap.c: use VM_BUG_ON_PAGE() in __page_check_anon_rmap() Thomas Hellstrom <thellstrom@vmware.com>: mm: move the backup x_devmap() functions to asm-generic/pgtable.h mm/memory.c: fix a huge pud insertion race during faulting Steven Price <steven.price@arm.com>: Patch series "Generic page walk and ptdump", v15: mm: add generic p?d_leaf() macros arc: mm: add p?d_leaf() definitions arm: mm: add p?d_leaf() definitions arm64: mm: add p?d_leaf() definitions mips: mm: add p?d_leaf() definitions powerpc: mm: add p?d_leaf() definitions riscv: mm: add p?d_leaf() definitions s390: mm: add p?d_leaf() definitions sparc: mm: add p?d_leaf() definitions x86: mm: add p?d_leaf() definitions mm: pagewalk: add p4d_entry() and pgd_entry() mm: pagewalk: allow walking without vma mm: pagewalk: add test_p?d callbacks mm: pagewalk: add 'depth' parameter to pte_hole x86: mm: point to struct seq_file from struct pg_state x86: mm+efi: convert ptdump_walk_pgd_level() to take a mm_struct x86: mm: convert ptdump_walk_pgd_level_debugfs() to take an mm_struct x86: mm: convert ptdump_walk_pgd_level_core() to take an mm_struct mm: add generic ptdump x86: mm: convert dump_pagetables to use walk_page_range arm64: mm: convert mm/dump.c to use walk_page_range() arm64: mm: display non-present entries in ptdump mm: ptdump: reduce level numbers by 1 in note_page() Subsystem: mm/memfd Nicolas Geoffray <ngeoffray@google.com>: mm, memfd: fix COW issue on MAP_PRIVATE and F_SEAL_FUTURE_WRITE mappings "Joel Fernandes (Google)" <joel@joelfernandes.org>: memfd: add test for COW on MAP_PRIVATE and F_SEAL_FUTURE_WRITE mappings Subsystem: mm/memory-failure Jane Chu <jane.chu@oracle.com>: mm/memory-failure.c clean up around tk pre-allocation Naoya Horiguchi 
<nao.horiguchi@gmail.com>: mm, soft-offline: convert parameter to pfn Yunfeng Ye <yeyunfeng@huawei.com>: mm/memory-failure.c: use page_shift() in add_to_kill() Subsystem: mm/memory-hotplug Anshuman Khandual <anshuman.khandual@arm.com>: mm/hotplug: reorder memblock_[free|remove]() calls in try_remove_memory() Alastair D'Silva <alastair@d-silva.org>: mm/memory_hotplug.c: add a bounds check to __add_pages() David Hildenbrand <david@redhat.com>: Patch series "mm/memory_hotplug: Export generic_online_page()": mm/memory_hotplug: export generic_online_page() hv_balloon: use generic_online_page() mm/memory_hotplug: remove __online_page_free() and __online_page_increment_counters() Patch series "mm: Memory offlining + page isolation cleanups", v2: mm/page_alloc.c: don't set pages PageReserved() when offlining mm/page_isolation.c: convert SKIP_HWPOISON to MEMORY_OFFLINE "Ben Dooks (Codethink)" <ben.dooks@codethink.co.uk>: include/linux/memory_hotplug.h: move definitions of {set,clear}_zone_contiguous David Hildenbrand <david@redhat.com>: drivers/base/memory.c: drop the mem_sysfs_mutex mm/memory_hotplug.c: don't allow to online/offline memory blocks with holes Subsystem: mm/sparsemem Vincent Whitchurch <vincent.whitchurch@axis.com>: mm/sparse: consistently do not zero memmap Ilya Leoshkevich <iii@linux.ibm.com>: mm/sparse.c: mark populate_section_memmap as __meminit Michal Hocko <mhocko@suse.com>: mm/sparse.c: do not waste pre allocated memmap space Subsystem: mm/vmalloc Liu Xiang <liuxiang_1999@126.com>: mm/vmalloc.c: remove unnecessary highmem_mask from parameter of gfpflags_allow_blocking() "Uladzislau Rezki (Sony)" <urezki@gmail.com>: mm/vmalloc: remove preempt_disable/enable when doing preloading mm/vmalloc: respect passed gfp_mask when doing preloading mm/vmalloc: add more comments to the adjust_va_to_fit_type() Anders Roxell <anders.roxell@linaro.org>: selftests: vm: add fragment CONFIG_TEST_VMALLOC "Uladzislau Rezki (Sony)" <urezki@gmail.com>: mm/vmalloc: rework 
vmap_area_lock Subsystem: mm/kasan Daniel Axtens <dja@axtens.net>: Patch series "kasan: support backing vmalloc space with real shadow: kasan: support backing vmalloc space with real shadow memory kasan: add test for vmalloc fork: support VMAP_STACK with KASAN_VMALLOC x86/kasan: support KASAN_VMALLOC Subsystem: mm/pagealloc Anshuman Khandual <anshuman.khandual@arm.com>: mm/page_alloc: add alloc_contig_pages() Mel Gorman <mgorman@techsingularity.net>: mm, pcp: share common code between memory hotplug and percpu sysctl handler mm, pcpu: make zone pcp updates and reset internal to the mm Hao Lee <haolee.swjtu@gmail.com>: include/linux/mmzone.h: fix comment for ISOLATE_UNMAPPED macro lijiazi <jqqlijiazi@gmail.com>: mm/page_alloc.c: print reserved_highatomic info Subsystem: mm/vmscan Andrey Ryabinin <aryabinin@virtuozzo.com>: mm/vmscan: remove unused lru_pages argument Yang Shi <yang.shi@linux.alibaba.com>: mm/vmscan.c: remove unused scan_control parameter from pageout() Johannes Weiner <hannes@cmpxchg.org>: Patch series "mm: vmscan: cgroup-related cleanups": mm: vmscan: simplify lruvec_lru_size() mm: clean up and clarify lruvec lookup procedure mm: vmscan: move inactive_list_is_low() swap check to the caller mm: vmscan: naming fixes: global_reclaim() and sane_reclaim() mm: vmscan: replace shrink_node() loop with a retry jump mm: vmscan: turn shrink_node_memcg() into shrink_lruvec() mm: vmscan: split shrink_node() into node part and memcgs part mm: vmscan: harmonize writeback congestion tracking for nodes & memcgs Patch series "mm: fix page aging across multiple cgroups": mm: vmscan: move file exhaustion detection to the node level mm: vmscan: detect file thrashing at the reclaim root mm: vmscan: enforce inactive:active ratio at the reclaim root Xianting Tian <xianting_tian@126.com>: mm/vmscan.c: fix typo in comment Subsystem: mm/proc Johannes Weiner <hannes@cmpxchg.org>: kernel: sysctl: make drop_caches write-only Subsystem: mm/z3fold Vitaly Wool 
<vitaly.wool@konsulko.com>: mm/z3fold.c: add inter-page compaction Subsystem: mm/mempolicy Li Xinhai <lixinhai.lxh@gmail.com>: Patch series "mm: Fix checking unmapped holes for mbind", v4: mm/mempolicy.c: check range first in queue_pages_test_walk mm/mempolicy.c: fix checking unmapped holes for mbind Subsystem: mm/memblock Cao jin <caoj.fnst@cn.fujitsu.com>: mm/memblock.c: cleanup doc mm/memblock: correct doc for function Yunfeng Ye <yeyunfeng@huawei.com>: mm: support memblock alloc on the exact node for sparse_buffer_init() Subsystem: mm/hugetlbfs Mike Kravetz <mike.kravetz@oracle.com>: hugetlbfs: hugetlb_fault_mutex_hash() cleanup mm/hugetlbfs: fix error handling when setting up mounts Patch series "hugetlbfs: convert macros to static inline, fix sparse warning": powerpc/mm: remove pmd_huge/pud_huge stubs and include hugetlb.h hugetlbfs: convert macros to static inline, fix sparse warning Piotr Sarna <p.sarna@tlen.pl>: hugetlbfs: add O_TMPFILE support Waiman Long <longman@redhat.com>: hugetlbfs: take read_lock on i_mmap for PMD sharing Subsystem: mm/hugetlb Mina Almasry <almasrymina@google.com>: hugetlb: region_chg provides only cache entry hugetlb: remove duplicated code Wei Yang <richardw.yang@linux.intel.com>: hugetlb: remove unused hstate in hugetlb_fault_mutex_hash() Zhigang Lu <tonnylu@tencent.com>: mm/hugetlb: avoid looping to the same hugepage if !pages and !vmas zhong jiang <zhongjiang@huawei.com>: mm/huge_memory.c: split_huge_pages_fops should be defined with DEFINE_DEBUGFS_ATTRIBUTE Subsystem: mm/migration Yang Shi <yang.shi@linux.alibaba.com>: mm/migrate.c: handle freed page at the first place Subsystem: mm/thp "Kirill A. 
Shutemov" <kirill@shutemov.name>: mm, thp: do not queue fully unmapped pages for deferred split Song Liu <songliubraving@fb.com>: mm/thp: flush file for !is_shmem PageDirty() case in collapse_file() Subsystem: mm/cma Yunfeng Ye <yeyunfeng@huawei.com>: mm/cma.c: switch to bitmap_zalloc() for cma bitmap allocation zhong jiang <zhongjiang@huawei.com>: mm/cma_debug.c: use DEFINE_DEBUGFS_ATTRIBUTE to define debugfs fops Subsystem: mm/autonuma Huang Ying <ying.huang@intel.com>: autonuma: fix watermark checking in migrate_balanced_pgdat() autonuma: reduce cache footprint when scanning page tables Subsystem: mm/page-poison zhong jiang <zhongjiang@huawei.com>: mm/hwpoison-inject: use DEFINE_DEBUGFS_ATTRIBUTE to define debugfs fops Subsystem: mm/mmap Wei Yang <richardw.yang@linux.intel.com>: mm/mmap.c: make vma_merge() comment more easy to understand Subsystem: mm/madvise Yunfeng Ye <yeyunfeng@huawei.com>: mm/madvise.c: replace with page_size() in madvise_inject_error() Wei Yang <richardw.yang@linux.intel.com>: mm/madvise.c: use PAGE_ALIGN[ED] for range checking Subsystem: mm/userfaultfd Wei Yang <richardw.yang@linux.intel.com>: userfaultfd: use vma_pagesize for all huge page size calculation userfaultfd: remove unnecessary WARN_ON() in __mcopy_atomic_hugetlb() userfaultfd: wrap the common dst_vma check into an inlined function Andrea Arcangeli <aarcange@redhat.com>: fs/userfaultfd.c: wp: clear VM_UFFD_MISSING or VM_UFFD_WP during userfaultfd_register() Mike Rapoport <rppt@linux.ibm.com>: userfaultfd: require CAP_SYS_PTRACE for UFFD_FEATURE_EVENT_FORK Subsystem: mm/shmem Colin Ian King <colin.king@canonical.com>: mm/shmem.c: make array 'values' static const, makes object smaller Yang Shi <yang.shi@linux.alibaba.com>: mm: shmem: use proper gfp flags for shmem_writepage() Chen Jun <chenjun102@huawei.com>: mm/shmem.c: cast the type of unmap_start to u64 Subsystem: mm/cleanups Hao Lee <haolee.swjtu@gmail.com>: mm: fix struct member name in function comments Wei Yang 
<richardw.yang@linux.intel.com>: mm: fix typos in comments when calling __SetPageUptodate() Souptick Joarder <jrdr.linux@gmail.com>: mm/memory_hotplug.c: remove __online_page_set_limits() Krzysztof Kozlowski <krzk@kernel.org>: mm/Kconfig: fix indentation Randy Dunlap <rdunlap@infradead.org>: mm/Kconfig: fix trivial help text punctuation Subsystem: mm/support Minchan Kim <minchan@google.com>: mm/page_io.c: annotate refault stalls from swap_readpage Documentation/admin-guide/cgroup-v2.rst | 7 Documentation/dev-tools/kasan.rst | 63 + arch/Kconfig | 9 arch/arc/include/asm/pgtable.h | 2 arch/arc/mm/fault.c | 10 arch/arc/mm/highmem.c | 4 arch/arm/include/asm/pgtable-2level.h | 1 arch/arm/include/asm/pgtable-3level.h | 1 arch/arm64/Kconfig | 1 arch/arm64/Kconfig.debug | 19 arch/arm64/include/asm/pgtable.h | 2 arch/arm64/include/asm/ptdump.h | 8 arch/arm64/mm/Makefile | 4 arch/arm64/mm/dump.c | 148 +--- arch/arm64/mm/mmu.c | 4 arch/arm64/mm/ptdump_debugfs.c | 2 arch/mips/include/asm/pgtable.h | 5 arch/powerpc/include/asm/book3s/64/pgtable-4k.h | 3 arch/powerpc/include/asm/book3s/64/pgtable-64k.h | 3 arch/powerpc/include/asm/book3s/64/pgtable.h | 30 arch/powerpc/mm/book3s64/radix_pgtable.c | 1 arch/riscv/include/asm/pgtable-64.h | 7 arch/riscv/include/asm/pgtable.h | 7 arch/s390/include/asm/pgtable.h | 2 arch/sparc/include/asm/pgtable_64.h | 2 arch/x86/Kconfig | 2 arch/x86/Kconfig.debug | 20 arch/x86/include/asm/pgtable.h | 10 arch/x86/mm/Makefile | 4 arch/x86/mm/debug_pagetables.c | 8 arch/x86/mm/dump_pagetables.c | 431 +++--------- arch/x86/mm/kasan_init_64.c | 61 + arch/x86/platform/efi/efi_32.c | 2 arch/x86/platform/efi/efi_64.c | 4 drivers/base/memory.c | 40 - drivers/firmware/efi/arm-runtime.c | 2 drivers/hv/hv_balloon.c | 4 drivers/xen/balloon.c | 1 fs/buffer.c | 6 fs/direct-io.c | 21 fs/hugetlbfs/inode.c | 67 + fs/ocfs2/acl.c | 4 fs/proc/task_mmu.c | 4 fs/userfaultfd.c | 21 include/asm-generic/4level-fixup.h | 1 include/asm-generic/5level-fixup.h | 1 
include/asm-generic/pgtable-nop4d.h | 2 include/asm-generic/pgtable-nopmd.h | 2 include/asm-generic/pgtable-nopud.h | 2 include/asm-generic/pgtable.h | 71 ++ include/asm-generic/tlb.h | 4 include/linux/fs.h | 6 include/linux/gfp.h | 2 include/linux/hugetlb.h | 142 +++- include/linux/kasan.h | 31 include/linux/memblock.h | 3 include/linux/memcontrol.h | 51 - include/linux/memory_hotplug.h | 11 include/linux/mm.h | 42 - include/linux/mmzone.h | 34 include/linux/moduleloader.h | 2 include/linux/page-isolation.h | 4 include/linux/pagewalk.h | 42 - include/linux/ptdump.h | 22 include/linux/slab.h | 20 include/linux/string.h | 2 include/linux/swap.h | 2 include/linux/vmalloc.h | 12 include/trace/events/kmem.h | 53 + kernel/events/uprobes.c | 2 kernel/fork.c | 4 kernel/sysctl.c | 2 lib/Kconfig.kasan | 16 lib/test_kasan.c | 26 lib/vsprintf.c | 40 - mm/Kconfig | 40 - mm/Kconfig.debug | 21 mm/Makefile | 1 mm/cma.c | 6 mm/cma_debug.c | 10 mm/filemap.c | 56 - mm/gup.c | 40 - mm/hmm.c | 8 mm/huge_memory.c | 2 mm/hugetlb.c | 298 ++------ mm/hwpoison-inject.c | 4 mm/internal.h | 27 mm/kasan/common.c | 233 ++++++ mm/kasan/generic_report.c | 3 mm/kasan/kasan.h | 1 mm/khugepaged.c | 18 mm/madvise.c | 14 mm/memblock.c | 113 ++- mm/memcontrol.c | 167 ---- mm/memory-failure.c | 61 - mm/memory.c | 56 + mm/memory_hotplug.c | 86 +- mm/mempolicy.c | 59 + mm/migrate.c | 21 mm/mincore.c | 1 mm/mmap.c | 75 -- mm/mprotect.c | 8 mm/mremap.c | 4 mm/nommu.c | 10 mm/page_alloc.c | 137 +++ mm/page_io.c | 15 mm/page_isolation.c | 12 mm/pagewalk.c | 126 ++- mm/pgtable-generic.c | 9 mm/ptdump.c | 167 ++++ mm/rmap.c | 65 + mm/shmem.c | 29 mm/slab.c | 7 mm/slab.h | 6 mm/slab_common.c | 101 +- mm/slub.c | 36 - mm/sparse.c | 22 mm/swap.c | 29 mm/swapfile.c | 7 mm/userfaultfd.c | 77 +- mm/util.c | 22 mm/vmalloc.c | 196 +++-- mm/vmscan.c | 798 +++++++++++------------ mm/workingset.c | 75 +- mm/z3fold.c | 375 ++++++++-- scripts/spelling.txt | 28 tools/testing/selftests/memfd/memfd_test.c | 36 + 
tools/testing/selftests/vm/config | 1 128 files changed, 3409 insertions(+), 2121 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming 2019-12-01 1:47 incoming Andrew Morton @ 2019-12-01 5:17 ` James Bottomley 2019-12-01 21:07 ` incoming Linus Torvalds 1 sibling, 0 replies; 349+ messages in thread From: James Bottomley @ 2019-12-01 5:17 UTC (permalink / raw) To: Andrew Morton, Linus Torvalds; +Cc: mm-commits, linux-mm On Sat, 2019-11-30 at 17:47 -0800, Andrew Morton wrote: > - a small number of updates to scripts/, ocfs2 and fs/buffer.c > > - most of MM. I still have quite a lot of material (mostly not MM) > staged after linux-next due to -next dependencies. I'll send those > across next week as the prerequisites get merged up. > > 158 patches, based on 32ef9553635ab1236c33951a8bd9b5af1c3b1646. Hey, Andrew, would it be at all possible for you to thread these patches under something like this incoming message? The selfish reason I'm asking is so I can mark the thread as read instead of having to do it individually for 158 messages ... my thumb would thank you for this. Regards, James ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming 2019-12-01 1:47 incoming Andrew Morton 2019-12-01 5:17 ` incoming James Bottomley @ 2019-12-01 21:07 ` Linus Torvalds 2019-12-02 8:21 ` incoming Steven Price 1 sibling, 1 reply; 349+ messages in thread From: Linus Torvalds @ 2019-12-01 21:07 UTC (permalink / raw) To: Andrew Morton, Steven Price; +Cc: mm-commits, Linux-MM On Sat, Nov 30, 2019 at 5:47 PM Andrew Morton <akpm@linux-foundation.org> wrote: > > Steven Price <steven.price@arm.com>: > Patch series "Generic page walk and ptdump", v15: > mm: add generic p?d_leaf() macros > arc: mm: add p?d_leaf() definitions > arm: mm: add p?d_leaf() definitions > arm64: mm: add p?d_leaf() definitions > mips: mm: add p?d_leaf() definitions > powerpc: mm: add p?d_leaf() definitions > riscv: mm: add p?d_leaf() definitions > s390: mm: add p?d_leaf() definitions > sparc: mm: add p?d_leaf() definitions > x86: mm: add p?d_leaf() definitions > mm: pagewalk: add p4d_entry() and pgd_entry() > mm: pagewalk: allow walking without vma > mm: pagewalk: add test_p?d callbacks > mm: pagewalk: add 'depth' parameter to pte_hole > x86: mm: point to struct seq_file from struct pg_state > x86: mm+efi: convert ptdump_walk_pgd_level() to take a mm_struct > x86: mm: convert ptdump_walk_pgd_level_debugfs() to take an mm_struct > x86: mm: convert ptdump_walk_pgd_level_core() to take an mm_struct > mm: add generic ptdump > x86: mm: convert dump_pagetables to use walk_page_range > arm64: mm: convert mm/dump.c to use walk_page_range() > arm64: mm: display non-present entries in ptdump > mm: ptdump: reduce level numbers by 1 in note_page() I've dropped these, and since they clearly weren't ready I don't want to see them re-sent for 5.5. If somebody figures out the bug, trying again for 5.6 sounds fine. Linus ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming 2019-12-01 21:07 ` incoming Linus Torvalds @ 2019-12-02 8:21 ` Steven Price 0 siblings, 0 replies; 349+ messages in thread From: Steven Price @ 2019-12-02 8:21 UTC (permalink / raw) To: Linus Torvalds; +Cc: Andrew Morton, mm-commits, Linux-MM On Sun, Dec 01, 2019 at 09:07:47PM +0000, Linus Torvalds wrote: > On Sat, Nov 30, 2019 at 5:47 PM Andrew Morton <akpm@linux-foundation.org> wrote: > > > > Steven Price <steven.price@arm.com>: > > Patch series "Generic page walk and ptdump", v15: > > mm: add generic p?d_leaf() macros > > arc: mm: add p?d_leaf() definitions > > arm: mm: add p?d_leaf() definitions > > arm64: mm: add p?d_leaf() definitions > > mips: mm: add p?d_leaf() definitions > > powerpc: mm: add p?d_leaf() definitions > > riscv: mm: add p?d_leaf() definitions > > s390: mm: add p?d_leaf() definitions > > sparc: mm: add p?d_leaf() definitions > > x86: mm: add p?d_leaf() definitions > > mm: pagewalk: add p4d_entry() and pgd_entry() > > mm: pagewalk: allow walking without vma > > mm: pagewalk: add test_p?d callbacks > > mm: pagewalk: add 'depth' parameter to pte_hole > > x86: mm: point to struct seq_file from struct pg_state > > x86: mm+efi: convert ptdump_walk_pgd_level() to take a mm_struct > > x86: mm: convert ptdump_walk_pgd_level_debugfs() to take an mm_struct > > x86: mm: convert ptdump_walk_pgd_level_core() to take an mm_struct > > mm: add generic ptdump > > x86: mm: convert dump_pagetables to use walk_page_range > > arm64: mm: convert mm/dump.c to use walk_page_range() > > arm64: mm: display non-present entries in ptdump > > mm: ptdump: reduce level numbers by 1 in note_page() > > I've dropped these, and since they clearly weren't ready I don't want > to see them re-sent for 5.5. Sorry about this, I'll try to track down the cause of this and hopefully resubmit for 5.6. Thanks, Steve > If somebody figures out the bug, trying again for 5.6 sounds fine. > > Linus ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2019-11-22 1:53 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2019-11-22 1:53 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm 4 fixes, based on 81429eb8d9ca40b0c65bb739d29fa856c5d5e958: Vincent Whitchurch <vincent.whitchurch@axis.com>: mm/sparse: consistently do not zero memmap Joseph Qi <joseph.qi@linux.alibaba.com>: Revert "fs: ocfs2: fix possible null-pointer dereferences in ocfs2_xa_prepare_entry()" David Hildenbrand <david@redhat.com>: mm/memory_hotplug: don't access uninitialized memmaps in shrink_zone_span() Andrey Ryabinin <aryabinin@virtuozzo.com>: mm/ksm.c: don't WARN if page is still mapped in remove_stable_node() fs/ocfs2/xattr.c | 56 ++++++++++++++++++++++++++++++---------------------- mm/ksm.c | 14 ++++++------- mm/memory_hotplug.c | 16 ++++++++++++-- mm/sparse.c | 2 - 4 files changed, 54 insertions(+), 34 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2019-11-16 1:34 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2019-11-16 1:34 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm 11 fixes, based on 875fef493f21e54d20d71a581687990aaa50268c: Yang Shi <yang.shi@linux.alibaba.com>: mm: mempolicy: fix the wrong return value and potential pages leak of mbind zhong jiang <zhongjiang@huawei.com>: mm: fix trying to reclaim unevictable lru page when calling madvise_pageout Lasse Collin <lasse.collin@tukaani.org>: lib/xz: fix XZ_DYNALLOC to avoid useless memory reallocations Roman Gushchin <guro@fb.com>: mm: memcg: switch to css_tryget() in get_mem_cgroup_from_mm() mm: hugetlb: switch to css_tryget() in hugetlb_cgroup_charge_cgroup() Laura Abbott <labbott@redhat.com>: mm: slub: really fix slab walking for init_on_free Song Liu <songliubraving@fb.com>: mm,thp: recheck each page before collapsing file THP David Hildenbrand <david@redhat.com>: mm/memory_hotplug: fix try_offline_node() Vinayak Menon <vinmenon@codeaurora.org>: mm/page_io.c: do not free shared swap slots Ralph Campbell <rcampbell@nvidia.com>: mm/debug.c: __dump_page() prints an extra line mm/debug.c: PageAnon() is true for PageKsm() pages drivers/base/memory.c | 36 ++++++++++++++++++++++++++++++++++++ include/linux/memory.h | 1 + lib/xz/xz_dec_lzma2.c | 1 + mm/debug.c | 33 ++++++++++++++++++--------------- mm/hugetlb_cgroup.c | 2 +- mm/khugepaged.c | 28 ++++++++++++++++------------ mm/madvise.c | 16 ++++++++++++---- mm/memcontrol.c | 2 +- mm/memory_hotplug.c | 47 +++++++++++++++++++++++++++++------------------ mm/mempolicy.c | 14 +++++++++----- mm/page_io.c | 6 +++--- mm/slub.c | 39 +++++++++------------------------------ 12 files changed, 136 insertions(+), 89 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2019-11-06 5:16 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2019-11-06 5:16 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm 17 fixes, based on 26bc672134241a080a83b2ab9aa8abede8d30e1c: Shakeel Butt <shakeelb@google.com>: mm: memcontrol: fix NULL-ptr deref in percpu stats flush John Hubbard <jhubbard@nvidia.com>: mm/gup_benchmark: fix MAP_HUGETLB case Mel Gorman <mgorman@techsingularity.net>: mm, meminit: recalculate pcpu batch and high limits after init completes Yang Shi <yang.shi@linux.alibaba.com>: mm: thp: handle page cache THP correctly in PageTransCompoundMap Shuning Zhang <sunny.s.zhang@oracle.com>: ocfs2: protect extent tree in ocfs2_prepare_inode_for_write() Jason Gunthorpe <jgg@mellanox.com>: mm/mmu_notifiers: use the right return code for WARN_ON Michal Hocko <mhocko@suse.com>: mm, vmstat: hide /proc/pagetypeinfo from normal users mm, vmstat: reduce zone->lock holding time by /proc/pagetypeinfo Ville Syrjälä <ville.syrjala@linux.intel.com>: mm/khugepaged: fix might_sleep() warn with CONFIG_HIGHPTE=y Johannes Weiner <hannes@cmpxchg.org>: mm/page_alloc.c: ratelimit allocation failure warnings more aggressively Vitaly Wool <vitaly.wool@konsulko.com>: zswap: add Vitaly to the maintainers list Kevin Hao <haokexin@gmail.com>: dump_stack: avoid the livelock of the dump_lock Song Liu <songliubraving@fb.com>: MAINTAINERS: update information for "MEMORY MANAGEMENT" Roman Gushchin <guro@fb.com>: mm: slab: make page_cgroup_ino() to recognize non-compound slab pages properly Ilya Leoshkevich <iii@linux.ibm.com>: scripts/gdb: fix debugging modules compiled with hot/cold partitioning David Hildenbrand <david@redhat.com>: mm/memory_hotplug: fix updating the node span Johannes Weiner <hannes@cmpxchg.org>: mm: memcontrol: fix network errors from failing __GFP_ATOMIC charges MAINTAINERS | 5 + fs/ocfs2/file.c | 125 ++++++++++++++++++++++------- include/linux/mm.h | 5 - include/linux/mm_types.h | 5 + 
include/linux/page-flags.h | 20 ++++ lib/dump_stack.c | 7 + mm/khugepaged.c | 7 - mm/memcontrol.c | 23 +++-- mm/memory_hotplug.c | 8 + mm/mmu_notifier.c | 2 mm/page_alloc.c | 17 ++- mm/slab.h | 4 mm/vmstat.c | 25 ++++- scripts/gdb/linux/symbols.py | 3 tools/testing/selftests/vm/gup_benchmark.c | 2 15 files changed, 197 insertions(+), 61 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2019-10-19 3:19 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2019-10-19 3:19 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm Rather a lot of fixes, almost all affecting mm/. 26 patches, based on b9959c7a347d6adbb558fba7e36e9fef3cba3b07: David Hildenbrand <david@redhat.com>: drivers/base/memory.c: don't access uninitialized memmaps in soft_offline_page_store() fs/proc/page.c: don't access uninitialized memmaps in fs/proc/page.c mm/memory-failure.c: don't access uninitialized memmaps in memory_failure() Joel Colledge <joel.colledge@linbit.com>: scripts/gdb: fix lx-dmesg when CONFIG_PRINTK_CALLER is set Qian Cai <cai@lca.pw>: mm/page_owner: don't access uninitialized memmaps when reading /proc/pagetypeinfo David Hildenbrand <david@redhat.com>: mm/memory_hotplug: don't access uninitialized memmaps in shrink_pgdat_span() "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>: Patch series "mm/memory_hotplug: Shrink zones before removing memory", v6: mm/memunmap: don't access uninitialized memmap in memunmap_pages() Roman Gushchin <guro@fb.com>: mm: memcg/slab: fix panic in __free_slab() caused by premature memcg pointer release Chengguang Xu <cgxu519@mykernel.net>: ocfs2: fix error handling in ocfs2_setattr() John Hubbard <jhubbard@nvidia.com>: mm/gup_benchmark: add a missing "w" to getopt string mm/gup: fix a misnamed "write" argument, and a related bug Honglei Wang <honglei.wang@oracle.com>: mm: memcg: get number of pages on the LRU list in memcgroup base on lru_zone_size Mike Rapoport <rppt@linux.ibm.com>: mm: memblock: do not enforce current limit for memblock_phys* family David Hildenbrand <david@redhat.com>: hugetlbfs: don't access uninitialized memmaps in pfn_range_valid_gigantic() Yi Li <yilikernel@gmail.com>: ocfs2: fix panic due to ocfs2_wq is null Konstantin Khlebnikov <khlebnikov@yandex-team.ru>: mm/memcontrol: update lruvec counters in mem_cgroup_move_account Chenwandun <chenwandun@huawei.com>: 
zram: fix race between backing_dev_show and backing_dev_store Ben Dooks <ben.dooks@codethink.co.uk>: mm: include <linux/huge_mm.h> for is_vma_temporary_stack mm/filemap.c: include <linux/ramfs.h> for generic_file_vm_ops definition "Ben Dooks (Codethink)" <ben.dooks@codethink.co.uk>: mm/init-mm.c: include <linux/mman.h> for vm_committed_as_batch "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>: Patch series "Fixes for THP in page cache", v2: proc/meminfo: fix output alignment mm/thp: fix node page state in split_huge_page_to_list() William Kucharski <william.kucharski@oracle.com>: mm/vmscan.c: support removing arbitrary sized pages from mapping "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>: mm/thp: allow dropping THP from page cache Song Liu <songliubraving@fb.com>: kernel/events/uprobes.c: only do FOLL_SPLIT_PMD for uprobe register Ilya Leoshkevich <iii@linux.ibm.com>: scripts/gdb: fix debugging modules on s390 drivers/base/memory.c | 3 + drivers/block/zram/zram_drv.c | 5 + fs/ocfs2/file.c | 2 fs/ocfs2/journal.c | 3 - fs/ocfs2/localalloc.c | 3 - fs/proc/meminfo.c | 4 - fs/proc/page.c | 28 ++++++---- kernel/events/uprobes.c | 13 ++++- mm/filemap.c | 1 mm/gup.c | 14 +++-- mm/huge_memory.c | 9 ++- mm/hugetlb.c | 5 - mm/init-mm.c | 1 mm/memblock.c | 6 +- mm/memcontrol.c | 18 ++++--- mm/memory-failure.c | 14 +++-- mm/memory_hotplug.c | 74 ++++++----------------------- mm/memremap.c | 11 ++-- mm/page_owner.c | 5 + mm/rmap.c | 1 mm/slab_common.c | 9 +-- mm/truncate.c | 12 ++++ mm/vmscan.c | 14 ++--- scripts/gdb/linux/dmesg.py | 16 ++++-- scripts/gdb/linux/symbols.py | 8 ++- scripts/gdb/linux/utils.py | 25 +++++---- tools/testing/selftests/vm/gup_benchmark.c | 2 27 files changed, 166 insertions(+), 140 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2019-10-14 21:11 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2019-10-14 21:11 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm The usual shower of hotfixes and some followups to the recently merged page_owner enhancements. 16 patches, based on 2abd839aa7e615f2bbc50c8ba7deb9e40d186768. Subsystems affected by this patch series: Vlastimil Babka <vbabka@suse.cz>: Patch series "followups to debug_pagealloc improvements through page_owner", v3: mm, page_owner: fix off-by-one error in __set_page_owner_handle() mm, page_owner: decouple freeing stack trace from debug_pagealloc mm, page_owner: rename flag indicating that page is allocated Qian Cai <cai@lca.pw>: mm/slub: fix a deadlock in show_slab_objects() Eric Biggers <ebiggers@google.com>: lib/generic-radix-tree.c: add kmemleak annotations Alexander Potapenko <glider@google.com>: mm/slub.c: init_on_free=1 should wipe freelist ptr for bulk allocations lib/test_meminit: add a kmem_cache_alloc_bulk() test David Rientjes <rientjes@google.com>: mm, hugetlb: allow hugepage allocations to reclaim as needed Vlastimil Babka <vbabka@suse.cz>: mm, compaction: fix wrong pfn handling in __reset_isolation_pfn() Randy Dunlap <rdunlap@infradead.org>: fs/direct-io.c: fix kernel-doc warning fs/libfs.c: fix kernel-doc warning fs/fs-writeback.c: fix kernel-doc warning bitmap.h: fix kernel-doc warning and typo xarray.h: fix kernel-doc warning mm/slab.c: fix kernel-doc warning for __ksize() Jane Chu <jane.chu@oracle.com>: mm/memory-failure: poison read receives SIGKILL instead of SIGBUS if mmaped more than once Documentation/dev-tools/kasan.rst | 3 ++ fs/direct-io.c | 3 -- fs/fs-writeback.c | 2 - fs/libfs.c | 3 -- include/linux/bitmap.h | 3 +- include/linux/page_ext.h | 10 ++++++ include/linux/xarray.h | 4 +- lib/generic-radix-tree.c | 32 +++++++++++++++++----- lib/test_meminit.c | 27 ++++++++++++++++++ mm/compaction.c | 7 ++-- mm/memory-failure.c | 22 ++++++++------- 
mm/page_alloc.c | 6 ++-- mm/page_ext.c | 23 ++++++--------- mm/page_owner.c | 55 +++++++++++++------------------------- mm/slab.c | 3 ++ mm/slub.c | 35 ++++++++++++++++++------ 16 files changed, 152 insertions(+), 86 deletions(-)
* incoming @ 2019-10-07 0:57 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2019-10-07 0:57 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm The usual shower of hotfixes. Chris's memcg patches aren't actually fixes - they're mature but a few niggling review issues were late to arrive. The ocfs2 fixes are quite old - those took some time to get reviewer attention. 18 patches, based on 4ea655343ce4180fe9b2c7ec8cb8ef9884a47901. Subsystems affected by this patch series: ocfs2 hotfixes mm/memcg mm/slab-generic Subsystem: ocfs2 Jia Guo <guojia12@huawei.com>: ocfs2: clear zero in unaligned direct IO Jia-Ju Bai <baijiaju1990@gmail.com>: fs: ocfs2: fix possible null-pointer dereferences in ocfs2_xa_prepare_entry() fs: ocfs2: fix a possible null-pointer dereference in ocfs2_write_end_nolock() fs: ocfs2: fix a possible null-pointer dereference in ocfs2_info_scan_inode_alloc() Subsystem: hotfixes Will Deacon <will@kernel.org>: panic: ensure preemption is disabled during panic() Anshuman Khandual <anshuman.khandual@arm.com>: mm/memremap: drop unused SECTION_SIZE and SECTION_MASK Tejun Heo <tj@kernel.org>: writeback: fix use-after-free in finish_writeback_work() Yi Wang <wang.yi59@zte.com.cn>: mm: fix -Wmissing-prototypes warnings Baoquan He <bhe@redhat.com>: memcg: only record foreign writebacks with dirty pages when memcg is not disabled Michal Hocko <mhocko@suse.com>: kernel/sysctl.c: do not override max_threads provided by userspace Vitaly Wool <vitalywool@gmail.com>: mm/z3fold.c: claim page in the beginning of free Qian Cai <cai@lca.pw>: mm/page_alloc.c: fix a crash in free_pages_prepare() Dan Carpenter <dan.carpenter@oracle.com>: mm/vmpressure.c: fix a signedness bug in vmpressure_register_event() Subsystem: mm/memcg Chris Down <chris@chrisdown.name>: mm, memcg: proportional memory.{low,min} reclaim mm, memcg: make memory.emin the baseline for utilisation determination mm, memcg: make scan aggression always exclude 
protection Subsystem: mm/slab-generic Vlastimil Babka <vbabka@suse.cz>: Patch series "guarantee natural alignment for kmalloc()", v2: mm, sl[ou]b: improve memory accounting mm, sl[aou]b: guarantee natural alignment for kmalloc(power-of-two) Documentation/admin-guide/cgroup-v2.rst | 20 +- Documentation/core-api/memory-allocation.rst | 4 fs/fs-writeback.c | 9 - fs/ocfs2/aops.c | 25 +++ fs/ocfs2/ioctl.c | 2 fs/ocfs2/xattr.c | 56 +++---- include/linux/memcontrol.h | 67 ++++++--- include/linux/slab.h | 4 kernel/fork.c | 4 kernel/panic.c | 1 mm/memcontrol.c | 5 mm/memremap.c | 2 mm/page_alloc.c | 8 - mm/shuffle.c | 2 mm/slab_common.c | 19 ++ mm/slob.c | 62 ++++++-- mm/slub.c | 14 + mm/sparse.c | 2 mm/vmpressure.c | 20 +- mm/vmscan.c | 198 +++++++++++++++++---- mm/z3fold.c | 10 + 21 files changed, 363 insertions(+), 171 deletions(-)
* incoming @ 2019-09-25 23:45 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2019-09-25 23:45 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm - almost all of the rest of -mm - various other subsystems 76 patches, based on 351c8a09b00b5c51c8f58b016fffe51f87e2d820: Subsystems affected by this patch series: memcg misc core-kernel lib checkpatch reiserfs fat fork cpumask kexec uaccess kconfig kgdb bug ipc lzo kasan madvise cleanups pagemap Subsystem: memcg Michal Hocko <mhocko@suse.com>: memcg, kmem: do not fail __GFP_NOFAIL charges Subsystem: misc Masahiro Yamada <yamada.masahiro@socionext.com>: linux/coff.h: add include guard Subsystem: core-kernel Valdis Kletnieks <valdis.kletnieks@vt.edu>: kernel/elfcore.c: include proper prototypes Subsystem: lib Michel Lespinasse <walken@google.com>: rbtree: avoid generating code twice for the cached versions (tools copy) Patch series "make RB_DECLARE_CALLBACKS more generic", v3: augmented rbtree: add comments for RB_DECLARE_CALLBACKS macro augmented rbtree: add new RB_DECLARE_CALLBACKS_MAX macro augmented rbtree: rework the RB_DECLARE_CALLBACKS macro definition Joe Perches <joe@perches.com>: kernel-doc: core-api: include string.h into core-api Qian Cai <cai@lca.pw>: include/trace/events/writeback.h: fix -Wstringop-truncation warnings Kees Cook <keescook@chromium.org>: strscpy: reject buffer sizes larger than INT_MAX Valdis Kletnieks <valdis.kletnieks@vt.edu>: lib/generic-radix-tree.c: make 2 functions static inline lib/extable.c: add missing prototypes Stephen Boyd <swboyd@chromium.org>: lib/hexdump: make print_hex_dump_bytes() a nop on !DEBUG builds Subsystem: checkpatch Joe Perches <joe@perches.com>: checkpatch: don't interpret stack dumps as commit IDs checkpatch: improve SPDX license checking Matteo Croce <mcroce@redhat.com>: checkpatch.pl: warn on invalid commit id Brendan Jackman <brendan.jackman@bluwireless.co.uk>: checkpatch: exclude sizeof sub-expressions from 
MACRO_ARG_REUSE Joe Perches <joe@perches.com>: checkpatch: prefer __section over __attribute__((section(...))) checkpatch: allow consecutive close braces Sean Christopherson <sean.j.christopherson@intel.com>: checkpatch: remove obsolete period from "ambiguous SHA1" query Joe Perches <joe@perches.com>: checkpatch: make git output use LANGUAGE=en_US.utf8 Subsystem: reiserfs Jia-Ju Bai <baijiaju1990@gmail.com>: fs: reiserfs: remove unnecessary check of bh in remove_from_transaction() zhengbin <zhengbin13@huawei.com>: fs/reiserfs/journal.c: remove set but not used variables fs/reiserfs/stree.c: remove set but not used variables fs/reiserfs/lbalance.c: remove set but not used variables fs/reiserfs/objectid.c: remove set but not used variables fs/reiserfs/prints.c: remove set but not used variables fs/reiserfs/fix_node.c: remove set but not used variables fs/reiserfs/do_balan.c: remove set but not used variables Jason Yan <yanaijie@huawei.com>: fs/reiserfs/journal.c: remove set but not used variable fs/reiserfs/do_balan.c: remove set but not used variable Subsystem: fat Markus Elfring <elfring@users.sourceforge.net>: fat: delete an unnecessary check before brelse() Subsystem: fork Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>: fork: improve error message for corrupted page tables Subsystem: cpumask Alexey Dobriyan <adobriyan@gmail.com>: cpumask: nicer for_each_cpumask_and() signature Subsystem: kexec Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>: kexec: bail out upon SIGKILL when allocating memory. 
Vasily Gorbik <gor@linux.ibm.com>: kexec: restore arch_kexec_kernel_image_probe declaration Subsystem: uaccess Kees Cook <keescook@chromium.org>: uaccess: add missing __must_check attributes Subsystem: kconfig Masahiro Yamada <yamada.masahiro@socionext.com>: compiler: enable CONFIG_OPTIMIZE_INLINING forcibly Subsystem: kgdb Douglas Anderson <dianders@chromium.org>: kgdb: don't use a notifier to enter kgdb at panic; call directly scripts/gdb: handle split debug Subsystem: bug Kees Cook <keescook@chromium.org>: Patch series "Clean up WARN() "cut here" handling", v2: bug: refactor away warn_slowpath_fmt_taint() bug: rename __WARN_printf_taint() to __WARN_printf() bug: consolidate warn_slowpath_fmt() usage bug: lift "cut here" out of __warn() bug: clean up helper macros to remove __WARN_TAINT() bug: consolidate __WARN_FLAGS usage bug: move WARN_ON() "cut here" into exception handler Subsystem: ipc Markus Elfring <elfring@users.sourceforge.net>: ipc/mqueue.c: delete an unnecessary check before the macro call dev_kfree_skb() ipc/mqueue: improve exception handling in do_mq_notify() "Joel Fernandes (Google)" <joel@joelfernandes.org>: ipc/sem.c: convert to use built-in RCU list checking Subsystem: lzo Dave Rodgman <dave.rodgman@arm.com>: lib/lzo/lzo1x_compress.c: fix alignment bug in lzo-rle Subsystem: kasan Andrey Konovalov <andreyknvl@google.com>: Patch series "arm64: untag user pointers passed to the kernel", v19: lib: untag user pointers in strn*_user mm: untag user pointers passed to memory syscalls mm: untag user pointers in mm/gup.c mm: untag user pointers in get_vaddr_frames fs/namespace: untag user pointers in copy_mount_options userfaultfd: untag user pointers drm/amdgpu: untag user pointers drm/radeon: untag user pointers in radeon_gem_userptr_ioctl media/v4l2-core: untag user pointers in videobuf_dma_contig_user_get tee/shm: untag user pointers in tee_shm_register vfio/type1: untag user pointers in vaddr_get_pfn Catalin Marinas <catalin.marinas@arm.com>: mm: 
untag user pointers in mmap/munmap/mremap/brk Subsystem: madvise Minchan Kim <minchan@kernel.org>: Patch series "Introduce MADV_COLD and MADV_PAGEOUT", v7: mm: introduce MADV_COLD mm: change PAGEREF_RECLAIM_CLEAN with PAGE_REFRECLAIM mm: introduce MADV_PAGEOUT mm: factor out common parts between MADV_COLD and MADV_PAGEOUT Subsystem: cleanups Mike Rapoport <rppt@linux.ibm.com>: hexagon: drop empty and unused free_initrd_mem Denis Efremov <efremov@linux.com>: checkpatch: check for nested (un)?likely() calls xen/events: remove unlikely() from WARN() condition fs: remove unlikely() from WARN_ON() condition wimax/i2400m: remove unlikely() from WARN*() condition xfs: remove unlikely() from WARN_ON() condition IB/hfi1: remove unlikely() from IS_ERR*() condition ntfs: remove (un)?likely() from IS_ERR() conditions Subsystem: pagemap Mark Rutland <mark.rutland@arm.com>: mm: treewide: clarify pgtable_page_{ctor,dtor}() naming Documentation/core-api/kernel-api.rst | 3 Documentation/vm/split_page_table_lock.rst | 10 arch/alpha/include/uapi/asm/mman.h | 3 arch/arc/include/asm/pgalloc.h | 4 arch/arm/include/asm/tlb.h | 2 arch/arm/mm/mmu.c | 2 arch/arm64/include/asm/tlb.h | 2 arch/arm64/mm/mmu.c | 2 arch/csky/include/asm/pgalloc.h | 2 arch/hexagon/include/asm/pgalloc.h | 2 arch/hexagon/mm/init.c | 13 arch/m68k/include/asm/mcf_pgalloc.h | 6 arch/m68k/include/asm/motorola_pgalloc.h | 6 arch/m68k/include/asm/sun3_pgalloc.h | 2 arch/mips/include/asm/pgalloc.h | 2 arch/mips/include/uapi/asm/mman.h | 3 arch/nios2/include/asm/pgalloc.h | 2 arch/openrisc/include/asm/pgalloc.h | 6 arch/parisc/include/uapi/asm/mman.h | 3 arch/powerpc/mm/pgtable-frag.c | 6 arch/riscv/include/asm/pgalloc.h | 2 arch/s390/mm/pgalloc.c | 6 arch/sh/include/asm/pgalloc.h | 2 arch/sparc/include/asm/pgtable_64.h | 5 arch/sparc/mm/init_64.c | 4 arch/sparc/mm/srmmu.c | 4 arch/um/include/asm/pgalloc.h | 2 arch/unicore32/include/asm/tlb.h | 2 arch/x86/mm/pat_rbtree.c | 19 arch/x86/mm/pgtable.c | 2 
arch/xtensa/include/asm/pgalloc.h | 4 arch/xtensa/include/uapi/asm/mman.h | 3 drivers/block/drbd/drbd_interval.c | 29 - drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 2 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 2 drivers/gpu/drm/radeon/radeon_gem.c | 2 drivers/infiniband/hw/hfi1/verbs.c | 2 drivers/media/v4l2-core/videobuf-dma-contig.c | 9 drivers/net/wimax/i2400m/tx.c | 3 drivers/tee/tee_shm.c | 1 drivers/vfio/vfio_iommu_type1.c | 2 drivers/xen/events/events_base.c | 2 fs/fat/dir.c | 4 fs/namespace.c | 2 fs/ntfs/mft.c | 12 fs/ntfs/namei.c | 2 fs/ntfs/runlist.c | 2 fs/ntfs/super.c | 2 fs/open.c | 2 fs/reiserfs/do_balan.c | 15 fs/reiserfs/fix_node.c | 6 fs/reiserfs/journal.c | 22 fs/reiserfs/lbalance.c | 3 fs/reiserfs/objectid.c | 3 fs/reiserfs/prints.c | 3 fs/reiserfs/stree.c | 4 fs/userfaultfd.c | 22 fs/xfs/xfs_buf.c | 4 include/asm-generic/bug.h | 71 +- include/asm-generic/pgalloc.h | 8 include/linux/cpumask.h | 14 include/linux/interval_tree_generic.h | 22 include/linux/kexec.h | 2 include/linux/kgdb.h | 2 include/linux/mm.h | 4 include/linux/mm_types_task.h | 4 include/linux/printk.h | 22 include/linux/rbtree_augmented.h | 114 +++- include/linux/string.h | 5 include/linux/swap.h | 2 include/linux/thread_info.h | 2 include/linux/uaccess.h | 21 include/trace/events/writeback.h | 38 - include/uapi/asm-generic/mman-common.h | 3 include/uapi/linux/coff.h | 5 ipc/mqueue.c | 22 ipc/sem.c | 3 kernel/debug/debug_core.c | 31 - kernel/elfcore.c | 1 kernel/fork.c | 16 kernel/kexec_core.c | 2 kernel/panic.c | 48 - lib/Kconfig.debug | 4 lib/bug.c | 11 lib/extable.c | 1 lib/generic-radix-tree.c | 4 lib/hexdump.c | 21 lib/lzo/lzo1x_compress.c | 14 lib/rbtree_test.c | 37 - lib/string.c | 12 lib/strncpy_from_user.c | 3 lib/strnlen_user.c | 3 mm/frame_vector.c | 2 mm/gup.c | 4 mm/internal.h | 2 mm/madvise.c | 562 ++++++++++++++++------- mm/memcontrol.c | 10 mm/mempolicy.c | 3 mm/migrate.c | 2 mm/mincore.c | 2 mm/mlock.c | 4 mm/mmap.c | 34 - mm/mprotect.c | 2 mm/mremap.c | 13 
mm/msync.c | 2 mm/oom_kill.c | 2 mm/swap.c | 42 + mm/vmalloc.c | 5 mm/vmscan.c | 62 ++ scripts/checkpatch.pl | 69 ++ scripts/gdb/linux/symbols.py | 4 tools/include/linux/rbtree.h | 71 +- tools/include/linux/rbtree_augmented.h | 145 +++-- tools/lib/rbtree.c | 37 - 114 files changed, 1195 insertions(+), 754 deletions(-)
* incoming @ 2019-09-23 22:31 Andrew Morton 2019-09-24 0:55 ` incoming Linus Torvalds 0 siblings, 1 reply; 349+ messages in thread From: Andrew Morton @ 2019-09-23 22:31 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm - a few hot fixes - ocfs2 updates - almost all of -mm, as below. 134 patches, based on 619e17cf75dd58905aa67ccd494a6ba5f19d6cc6: Subsystems affected by this patch series: hotfixes ocfs2 slab-generic slab slub kmemleak kasan cleanups debug pagecache memcg gup pagemap memory-hotplug sparsemem vmalloc initialization z3fold compaction mempolicy oom-kill hugetlb migration thp mmap madvise shmem zswap zsmalloc Subsystem: hotfixes OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>: fat: work around race with userspace's read via blockdev while mounting Vitaly Wool <vitalywool@gmail.com>: Revert "mm/z3fold.c: fix race between migration and destruction" Arnd Bergmann <arnd@arndb.de>: mm: add dummy can_do_mlock() helper Vitaly Wool <vitalywool@gmail.com>: z3fold: fix retry mechanism in page reclaim Greg Thelen <gthelen@google.com>: kbuild: clean compressed initramfs image Subsystem: ocfs2 Joseph Qi <joseph.qi@linux.alibaba.com>: ocfs2: use jbd2_inode dirty range scoping jbd2: remove jbd2_journal_inode_add_[write|wait] Greg Kroah-Hartman <gregkh@linuxfoundation.org>: ocfs2: further debugfs cleanups Guozhonghua <guozhonghua@h3c.com>: ocfs2: remove unused ocfs2_calc_tree_trunc_credits() ocfs2: remove unused ocfs2_orphan_scan_exit() declaration zhengbin <zhengbin13@huawei.com>: fs/ocfs2/namei.c: remove set but not used variables fs/ocfs2/file.c: remove set but not used variables fs/ocfs2/dir.c: remove set but not used variables Markus Elfring <elfring@users.sourceforge.net>: ocfs2: delete unnecessary checks before brelse() Changwei Ge <gechangwei@live.cn>: ocfs2: wait for recovering done after direct unlock request ocfs2: checkpoint appending truncate log transaction before flushing Colin Ian King <colin.king@canonical.com>: ocfs2: fix spelling 
mistake "ambigous" -> "ambiguous" Subsystem: slab-generic Waiman Long <longman@redhat.com>: mm, slab: extend slab/shrink to shrink all memcg caches Subsystem: slab Waiman Long <longman@redhat.com>: mm, slab: move memcg_cache_params structure to mm/slab.h Subsystem: slub Qian Cai <cai@lca.pw>: mm/slub.c: fix -Wunused-function compiler warnings Subsystem: kmemleak Nicolas Boichat <drinkcat@chromium.org>: kmemleak: increase DEBUG_KMEMLEAK_EARLY_LOG_SIZE default to 16K Catalin Marinas <catalin.marinas@arm.com>: Patch series "mm: kmemleak: Use a memory pool for kmemleak object: mm: kmemleak: make the tool tolerant to struct scan_area allocation failures mm: kmemleak: simple memory allocation pool for kmemleak objects mm: kmemleak: use the memory pool for early allocations Qian Cai <cai@lca.pw>: mm/kmemleak.c: record the current memory pool size mm/kmemleak: increase the max mem pool to 1M Subsystem: kasan Walter Wu <walter-zh.wu@mediatek.com>: kasan: add memory corruption identification for software tag-based mode Mark Rutland <mark.rutland@arm.com>: lib/test_kasan.c: add roundtrip tests Subsystem: cleanups Christophe JAILLET <christophe.jaillet@wanadoo.fr>: mm/page_poison.c: fix a typo in a comment YueHaibing <yuehaibing@huawei.com>: mm/rmap.c: remove set but not used variable 'cstart' Matthew Wilcox (Oracle) <willy@infradead.org>: Patch series "Make working with compound pages easier", v2: mm: introduce page_size() "Matthew Wilcox (Oracle)" <willy@infradead.org>: mm: introduce page_shift() Matthew Wilcox (Oracle) <willy@infradead.org>: mm: introduce compound_nr() Yu Zhao <yuzhao@google.com>: mm: replace list_move_tail() with add_page_to_lru_list_tail() Subsystem: debug Vlastimil Babka <vbabka@suse.cz>: Patch series "debug_pagealloc improvements through page_owner", v2: mm, page_owner: record page owner for each subpage mm, page_owner: keep owner info when freeing the page mm, page_owner, debug_pagealloc: save and dump freeing stack trace Subsystem: pagecache 
Konstantin Khlebnikov <khlebnikov@yandex-team.ru>: mm/filemap.c: don't initiate writeback if mapping has no dirty pages mm/filemap.c: rewrite mapping_needs_writeback in less fancy manner "Matthew Wilcox (Oracle)" <willy@infradead.org>: mm: page cache: store only head pages in i_pages Subsystem: memcg Chris Down <chris@chrisdown.name>: mm, memcg: throttle allocators when failing reclaim over memory.high Roman Gushchin <guro@fb.com>: mm: memcontrol: switch to rcu protection in drain_all_stock() Johannes Weiner <hannes@cmpxchg.org>: mm: vmscan: do not share cgroup iteration between reclaimers Subsystem: gup From: John Hubbard <jhubbard@nvidia.com>: Patch series "mm/gup: add make_dirty arg to put_user_pages_dirty_lock()",: mm/gup: add make_dirty arg to put_user_pages_dirty_lock() John Hubbard <jhubbard@nvidia.com>: drivers/gpu/drm/via: convert put_page() to put_user_page*() net/xdp: convert put_page() to put_user_page*() Subsystem: pagemap Wei Yang <richardw.yang@linux.intel.com>: mm: remove redundant assignment of entry Minchan Kim <minchan@kernel.org>: mm: release the spinlock on zap_pte_range Nicholas Piggin <npiggin@gmail.com>: Patch series "mm: remove quicklist page table caches": mm: remove quicklist page table caches Mike Rapoport <rppt@linux.ibm.com>: ia64: switch to generic version of pte allocation sh: switch to generic version of pte allocation microblaze: switch to generic version of pte allocation mm: consolidate pgtable_cache_init() and pgd_cache_init() Kefeng Wang <wangkefeng.wang@huawei.com>: mm: do not hash address in print_bad_pte() Subsystem: memory-hotplug David Hildenbrand <david@redhat.com>: mm/memory_hotplug: remove move_pfn_range() drivers/base/node.c: simplify unregister_memory_block_under_nodes() drivers/base/memory.c: fixup documentation of removable/phys_index/block_size_bytes driver/base/memory.c: validate memory block size early drivers/base/memory.c: don't store end_section_nr in memory blocks Wei Yang <richardw.yang@linux.intel.com>:
mm/memory_hotplug.c: prevent memory leak when reusing pgdat David Hildenbrand <david@redhat.com>: Patch series "mm/memory_hotplug: online_pages() cleanups", v2: mm/memory_hotplug.c: use PFN_UP / PFN_DOWN in walk_system_ram_range() mm/memory_hotplug: drop PageReserved() check in online_pages_range() mm/memory_hotplug: simplify online_pages_range() mm/memory_hotplug: make sure the pfn is aligned to the order when onlining mm/memory_hotplug: online_pages cannot be 0 in online_pages() Alastair D'Silva <alastair@d-silva.org>: Patch series "Add bounds check for Hotplugged memory", v3: mm/memory_hotplug.c: add a bounds check to check_hotplug_memory_range() mm/memremap.c: add a bounds check in devm_memremap_pages() Souptick Joarder <jrdr.linux@gmail.com>: mm/memory_hotplug.c: s/is/if Subsystem: sparsemem Lecopzer Chen <lecopzer.chen@mediatek.com>: mm/sparse.c: fix memory leak of sparsemap_buf in aligned memory mm/sparse.c: fix ALIGN() without power of 2 in sparse_buffer_alloc() Wei Yang <richardw.yang@linux.intel.com>: mm/sparse.c: use __nr_to_section(section_nr) to get mem_section Alastair D'Silva <alastair@d-silva.org>: mm/sparse.c: don't manually decrement num_poisoned_pages "Alastair D'Silva" <alastair@d-silva.org>: mm/sparse.c: remove NULL check in clear_hwpoisoned_pages() Subsystem: vmalloc "Uladzislau Rezki (Sony)" <urezki@gmail.com>: mm/vmalloc: do not keep unpurged areas in the busy tree Pengfei Li <lpf.vector@gmail.com>: mm/vmalloc: modify struct vmap_area to reduce its size Austin Kim <austindh.kim@gmail.com>: mm/vmalloc.c: move 'area->pages' after if statement Subsystem: initialization Mike Rapoport <rppt@linux.ibm.com>: mm: use CPU_BITS_NONE to initialize init_mm.cpu_bitmask Qian Cai <cai@lca.pw>: mm: silence -Woverride-init/initializer-overrides Subsystem: z3fold Vitaly Wool <vitalywool@gmail.com>: z3fold: fix memory leak in kmem cache Subsystem: compaction Yafang Shao <laoar.shao@gmail.com>: mm/compaction.c: clear total_{migrate,free}_scanned before scanning 
a new zone Pengfei Li <lpf.vector@gmail.com>: mm/compaction.c: remove unnecessary zone parameter in isolate_migratepages() Subsystem: mempolicy Kefeng Wang <wangkefeng.wang@huawei.com>: mm/mempolicy.c: remove unnecessary nodemask check in kernel_migrate_pages() Subsystem: oom-kill Joel Savitz <jsavitz@redhat.com>: mm/oom_kill.c: add task UID to info message on an oom kill Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>: memcg, oom: don't require __GFP_FS when invoking memcg OOM killer Edward Chron <echron@arista.com>: mm/oom: add oom_score_adj and pgtables to Killed process message Yi Wang <wang.yi59@zte.com.cn>: mm/oom_kill.c: fix oom_cpuset_eligible() comment Michal Hocko <mhocko@suse.com>: mm, oom: consider present pages for the node size Qian Cai <cai@lca.pw>: mm/memcontrol.c: fix a -Wunused-function warning Michal Hocko <mhocko@suse.com>: memcg, kmem: deprecate kmem.limit_in_bytes Subsystem: hugetlb Hillf Danton <hdanton@sina.com>: Patch series "address hugetlb page allocation stalls", v2: mm, reclaim: make should_continue_reclaim perform dryrun detection Vlastimil Babka <vbabka@suse.cz>: mm, reclaim: cleanup should_continue_reclaim() mm, compaction: raise compaction priority after it withdrawns Mike Kravetz <mike.kravetz@oracle.com>: hugetlbfs: don't retry when pool page allocations start to fail Subsystem: migration Pingfan Liu <kernelfans@gmail.com>: mm/migrate.c: clean up useless code in migrate_vma_collect_pmd() Subsystem: thp Kefeng Wang <wangkefeng.wang@huawei.com>: thp: update split_huge_page_pmd() comment Song Liu <songliubraving@fb.com>: Patch series "Enable THP for text section of non-shmem files", v10;: filemap: check compound_head(page)->mapping in filemap_fault() filemap: check compound_head(page)->mapping in pagecache_get_page() filemap: update offset check in filemap_fault() mm,thp: stats for file backed THP khugepaged: rename collapse_shmem() and khugepaged_scan_shmem() mm,thp: add read-only THP support for (non-shmem) FS mm,thp: avoid 
writes to file with THP in pagecache Yang Shi <yang.shi@linux.alibaba.com>: Patch series "Make deferred split shrinker memcg aware", v6: mm: thp: extract split_queue_* into a struct mm: move mem_cgroup_uncharge out of __page_cache_release() mm: shrinker: make shrinker not depend on memcg kmem mm: thp: make deferred split shrinker memcg aware Song Liu <songliubraving@fb.com>: Patch series "THP aware uprobe", v13: mm: move memcmp_pages() and pages_identical() uprobe: use original page when all uprobes are removed mm, thp: introduce FOLL_SPLIT_PMD uprobe: use FOLL_SPLIT_PMD instead of FOLL_SPLIT khugepaged: enable collapse pmd for pte-mapped THP uprobe: collapse THP pmd after removing all uprobes Subsystem: mmap Alexandre Ghiti <alex@ghiti.fr>: Patch series "Provide generic top-down mmap layout functions", v6: mm, fs: move randomize_stack_top from fs to mm arm64: make use of is_compat_task instead of hardcoding this test arm64: consider stack randomization for mmap base only when necessary arm64, mm: move generic mmap layout functions to mm arm64, mm: make randomization selected by generic topdown mmap layout arm: properly account for stack randomization and stack guard gap arm: use STACK_TOP when computing mmap base address arm: use generic mmap top-down layout and brk randomization mips: properly account for stack randomization and stack guard gap mips: use STACK_TOP when computing mmap base address mips: adjust brk randomization offset to fit generic version mips: replace arch specific way to determine 32bit task with generic version mips: use generic mmap top-down layout and brk randomization riscv: make mmap allocation top-down by default Wei Yang <richardw.yang@linux.intel.com>: mm/mmap.c: refine find_vma_prev() with rb_last() Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>: mm: mmap: increase sockets maximum memory size pgoff for 32bits Subsystem: madvise Mike Rapoport <rppt@linux.ibm.com>: mm/madvise: reduce code duplication in error handling paths Subsystem: 
shmem Miles Chen <miles.chen@mediatek.com>: shmem: fix obsolete comment in shmem_getpage_gfp() Subsystem: zswap Hui Zhu <teawaterz@linux.alibaba.com>: zpool: add malloc_support_movable to zpool_driver zswap: use movable memory if zpool support allocate movable memory Vitaly Wool <vitalywool@gmail.com>: zswap: do not map same object twice Subsystem: zsmalloc Qian Cai <cai@lca.pw>: mm/zsmalloc.c: fix a -Wunused-function warning Documentation/ABI/testing/sysfs-kernel-slab | 13 Documentation/admin-guide/cgroup-v1/memory.rst | 4 Documentation/admin-guide/kernel-parameters.txt | 2 arch/Kconfig | 11 arch/alpha/include/asm/pgalloc.h | 2 arch/alpha/include/asm/pgtable.h | 5 arch/arc/include/asm/pgalloc.h | 1 arch/arc/include/asm/pgtable.h | 5 arch/arm/Kconfig | 1 arch/arm/include/asm/pgalloc.h | 2 arch/arm/include/asm/pgtable-nommu.h | 5 arch/arm/include/asm/pgtable.h | 2 arch/arm/include/asm/processor.h | 2 arch/arm/kernel/process.c | 5 arch/arm/mm/flush.c | 7 arch/arm/mm/mmap.c | 80 ----- arch/arm64/Kconfig | 2 arch/arm64/include/asm/pgalloc.h | 2 arch/arm64/include/asm/pgtable.h | 2 arch/arm64/include/asm/processor.h | 2 arch/arm64/kernel/process.c | 8 arch/arm64/mm/flush.c | 3 arch/arm64/mm/mmap.c | 84 ----- arch/arm64/mm/pgd.c | 2 arch/c6x/include/asm/pgtable.h | 5 arch/csky/include/asm/pgalloc.h | 2 arch/csky/include/asm/pgtable.h | 5 arch/h8300/include/asm/pgtable.h | 6 arch/hexagon/include/asm/pgalloc.h | 2 arch/hexagon/include/asm/pgtable.h | 3 arch/hexagon/mm/Makefile | 2 arch/hexagon/mm/pgalloc.c | 10 arch/ia64/Kconfig | 4 arch/ia64/include/asm/pgalloc.h | 64 ---- arch/ia64/include/asm/pgtable.h | 5 arch/ia64/mm/init.c | 2 arch/m68k/include/asm/pgtable_mm.h | 7 arch/m68k/include/asm/pgtable_no.h | 7 arch/microblaze/include/asm/pgalloc.h | 128 -------- arch/microblaze/include/asm/pgtable.h | 7 arch/microblaze/mm/pgtable.c | 4 arch/mips/Kconfig | 2 arch/mips/include/asm/pgalloc.h | 2 arch/mips/include/asm/pgtable.h | 5 arch/mips/include/asm/processor.h | 5 
arch/mips/mm/mmap.c | 124 +------- arch/nds32/include/asm/pgalloc.h | 2 arch/nds32/include/asm/pgtable.h | 2 arch/nios2/include/asm/pgalloc.h | 2 arch/nios2/include/asm/pgtable.h | 2 arch/openrisc/include/asm/pgalloc.h | 2 arch/openrisc/include/asm/pgtable.h | 5 arch/parisc/include/asm/pgalloc.h | 2 arch/parisc/include/asm/pgtable.h | 2 arch/powerpc/include/asm/pgalloc.h | 2 arch/powerpc/include/asm/pgtable.h | 1 arch/powerpc/mm/book3s64/hash_utils.c | 2 arch/powerpc/mm/book3s64/iommu_api.c | 7 arch/powerpc/mm/hugetlbpage.c | 2 arch/riscv/Kconfig | 12 arch/riscv/include/asm/pgalloc.h | 4 arch/riscv/include/asm/pgtable.h | 5 arch/s390/include/asm/pgtable.h | 6 arch/sh/include/asm/pgalloc.h | 56 --- arch/sh/include/asm/pgtable.h | 5 arch/sh/mm/Kconfig | 3 arch/sh/mm/nommu.c | 4 arch/sparc/include/asm/pgalloc_32.h | 2 arch/sparc/include/asm/pgalloc_64.h | 2 arch/sparc/include/asm/pgtable_32.h | 5 arch/sparc/include/asm/pgtable_64.h | 1 arch/sparc/mm/init_32.c | 1 arch/um/include/asm/pgalloc.h | 2 arch/um/include/asm/pgtable.h | 2 arch/unicore32/include/asm/pgalloc.h | 2 arch/unicore32/include/asm/pgtable.h | 2 arch/x86/include/asm/pgtable_32.h | 2 arch/x86/include/asm/pgtable_64.h | 3 arch/x86/mm/pgtable.c | 6 arch/xtensa/include/asm/pgtable.h | 1 arch/xtensa/include/asm/tlbflush.h | 3 drivers/base/memory.c | 44 +- drivers/base/node.c | 55 +-- drivers/crypto/chelsio/chtls/chtls_io.c | 5 drivers/gpu/drm/via/via_dmablit.c | 10 drivers/infiniband/core/umem.c | 5 drivers/infiniband/hw/hfi1/user_pages.c | 5 drivers/infiniband/hw/qib/qib_user_pages.c | 5 drivers/infiniband/hw/usnic/usnic_uiom.c | 5 drivers/infiniband/sw/siw/siw_mem.c | 10 drivers/staging/android/ion/ion_system_heap.c | 4 drivers/target/tcm_fc/tfc_io.c | 3 drivers/vfio/vfio_iommu_spapr_tce.c | 8 fs/binfmt_elf.c | 20 - fs/fat/dir.c | 13 fs/fat/fatent.c | 3 fs/inode.c | 3 fs/io_uring.c | 2 fs/jbd2/journal.c | 2 fs/jbd2/transaction.c | 12 fs/ocfs2/alloc.c | 20 + fs/ocfs2/aops.c | 13 fs/ocfs2/blockcheck.c | 26 - 
fs/ocfs2/cluster/heartbeat.c | 109 +------ fs/ocfs2/dir.c | 3 fs/ocfs2/dlm/dlmcommon.h | 1 fs/ocfs2/dlm/dlmdebug.c | 55 --- fs/ocfs2/dlm/dlmdebug.h | 16 - fs/ocfs2/dlm/dlmdomain.c | 7 fs/ocfs2/dlm/dlmunlock.c | 23 + fs/ocfs2/dlmglue.c | 29 - fs/ocfs2/extent_map.c | 3 fs/ocfs2/file.c | 13 fs/ocfs2/inode.c | 2 fs/ocfs2/journal.h | 42 -- fs/ocfs2/namei.c | 2 fs/ocfs2/ocfs2.h | 3 fs/ocfs2/super.c | 10 fs/open.c | 8 fs/proc/meminfo.c | 8 fs/proc/task_mmu.c | 6 include/asm-generic/pgalloc.h | 5 include/asm-generic/pgtable.h | 7 include/linux/compaction.h | 22 + include/linux/fs.h | 32 ++ include/linux/huge_mm.h | 9 include/linux/hugetlb.h | 2 include/linux/jbd2.h | 2 include/linux/khugepaged.h | 12 include/linux/memcontrol.h | 23 - include/linux/memory.h | 7 include/linux/memory_hotplug.h | 1 include/linux/mm.h | 37 ++ include/linux/mm_types.h | 1 include/linux/mmzone.h | 14 include/linux/page_ext.h | 1 include/linux/pagemap.h | 10 include/linux/quicklist.h | 94 ------ include/linux/shrinker.h | 7 include/linux/slab.h | 62 ---- include/linux/vmalloc.h | 20 - include/linux/zpool.h | 3 init/main.c | 6 kernel/events/uprobes.c | 81 ++++- kernel/resource.c | 4 kernel/sched/idle.c | 1 kernel/sysctl.c | 6 lib/Kconfig.debug | 15 lib/Kconfig.kasan | 8 lib/iov_iter.c | 2 lib/show_mem.c | 5 lib/test_kasan.c | 41 ++ mm/Kconfig | 16 - mm/Kconfig.debug | 4 mm/Makefile | 4 mm/compaction.c | 50 +-- mm/filemap.c | 168 ++++------ mm/gup.c | 125 +++----- mm/huge_memory.c | 129 ++++++-- mm/hugetlb.c | 89 +++++ mm/hugetlb_cgroup.c | 2 mm/init-mm.c | 2 mm/kasan/common.c | 32 +- mm/kasan/kasan.h | 14 mm/kasan/report.c | 44 ++ mm/kasan/tags_report.c | 24 + mm/khugepaged.c | 372 ++++++++++++++++++++---- mm/kmemleak.c | 338 +++++---------------- mm/ksm.c | 18 - mm/madvise.c | 52 +-- mm/memcontrol.c | 188 ++++++++++-- mm/memfd.c | 2 mm/memory.c | 21 + mm/memory_hotplug.c | 120 ++++--- mm/mempolicy.c | 4 mm/memremap.c | 5 mm/migrate.c | 13 mm/mmap.c | 12 mm/mmu_gather.c | 2 mm/nommu.c | 2 
mm/oom_kill.c | 30 + mm/page_alloc.c | 27 + mm/page_owner.c | 127 +++++--- mm/page_poison.c | 2 mm/page_vma_mapped.c | 3 mm/quicklist.c | 103 ------ mm/rmap.c | 25 - mm/shmem.c | 12 mm/slab.h | 64 ++++ mm/slab_common.c | 37 ++ mm/slob.c | 2 mm/slub.c | 22 - mm/sparse.c | 25 + mm/swap.c | 16 - mm/swap_state.c | 6 mm/util.c | 126 +++++++- mm/vmalloc.c | 84 +++-- mm/vmscan.c | 163 ++++------ mm/vmstat.c | 2 mm/z3fold.c | 154 ++------- mm/zpool.c | 16 + mm/zsmalloc.c | 23 - mm/zswap.c | 15 net/xdp/xdp_umem.c | 9 net/xdp/xsk.c | 2 usr/Makefile | 3 206 files changed, 2385 insertions(+), 2533 deletions(-)
* Re: incoming
  2019-09-23 22:31 incoming Andrew Morton
@ 2019-09-24  0:55 ` Linus Torvalds
  2019-09-24  4:31 ` incoming Andrew Morton
  0 siblings, 1 reply; 349+ messages in thread
From: Linus Torvalds @ 2019-09-24 0:55 UTC (permalink / raw)
To: Andrew Morton, David Rientjes, Vlastimil Babka, Michal Hocko, Andrea Arcangeli
Cc: mm-commits, Linux-MM

On Mon, Sep 23, 2019 at 3:31 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> - almost all of -mm, as below.

I was hoping that we could at least test the THP locality thing? Is it
in your queue at all, or am I supposed to just do it myself?

            Linus
* Re: incoming 2019-09-24 0:55 ` incoming Linus Torvalds @ 2019-09-24 4:31 ` Andrew Morton 2019-09-24 7:48 ` incoming Michal Hocko 0 siblings, 1 reply; 349+ messages in thread From: Andrew Morton @ 2019-09-24 4:31 UTC (permalink / raw) To: Linus Torvalds Cc: David Rientjes, Vlastimil Babka, Michal Hocko, Andrea Arcangeli, mm-commits, Linux-MM On Mon, 23 Sep 2019 17:55:24 -0700 Linus Torvalds <torvalds@linux-foundation.org> wrote: > On Mon, Sep 23, 2019 at 3:31 PM Andrew Morton <akpm@linux-foundation.org> wrote: > > > > - almost all of -mm, as below. > > I was hoping that we could at least test the THP locality thing? Is it > in your queue at all, or am I supposed to just do it myself? > Confused. I saw a privately emailed patch from David which nobody seems to have tested yet. I parked that for consideration after -rc1. Or are you referring to something else? This thing keeps stalling. It would be nice to push this along and get something nailed down which we can at least get into 5.4-rc, perhaps with a backport-this tag? ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming 2019-09-24 4:31 ` incoming Andrew Morton @ 2019-09-24 7:48 ` Michal Hocko 2019-09-24 15:34 ` incoming Linus Torvalds 2019-09-24 19:55 ` incoming Vlastimil Babka 0 siblings, 2 replies; 349+ messages in thread From: Michal Hocko @ 2019-09-24 7:48 UTC (permalink / raw) To: Andrew Morton Cc: Linus Torvalds, David Rientjes, Vlastimil Babka, Andrea Arcangeli, mm-commits, Linux-MM On Mon 23-09-19 21:31:53, Andrew Morton wrote: > On Mon, 23 Sep 2019 17:55:24 -0700 Linus Torvalds <torvalds@linux-foundation.org> wrote: > > > On Mon, Sep 23, 2019 at 3:31 PM Andrew Morton <akpm@linux-foundation.org> wrote: > > > > > > - almost all of -mm, as below. > > > > I was hoping that we could at least test the THP locality thing? Is it > > in your queue at all, or am I supposed to just do it myself? > > > > Confused. I saw a privately emailed patch from David which nobody > seems to have tested yet. I parked that for consideration after -rc1. > Or are you referring to something else? > > This thing keeps stalling. It would be nice to push this along and get > something nailed down which we can at least get into 5.4-rc, perhaps > with a backport-this tag? The patch proposed by David is really non trivial wrt. potential side effects. I have provided my review feedback [1] and it didn't get any reaction. I really believe that we need to debug this properly. A reproducer would be useful for others to work on that. There is a more fundamental problem here and we need to address it rather than to duck tape it and whack a mole afterwards. [1] http://lkml.kernel.org/r/20190909193020.GD2063@dhcp22.suse.cz -- Michal Hocko SUSE Labs ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming 2019-09-24 7:48 ` incoming Michal Hocko @ 2019-09-24 15:34 ` Linus Torvalds 2019-09-25 6:36 ` incoming Michal Hocko 2019-09-24 19:55 ` incoming Vlastimil Babka 1 sibling, 1 reply; 349+ messages in thread From: Linus Torvalds @ 2019-09-24 15:34 UTC (permalink / raw) To: Michal Hocko Cc: Andrew Morton, David Rientjes, Vlastimil Babka, Andrea Arcangeli, mm-commits, Linux-MM On Tue, Sep 24, 2019 at 12:48 AM Michal Hocko <mhocko@kernel.org> wrote: > > The patch proposed by David is really non trivial wrt. potential side > effects. The thing is, that's not an argument when we know that the current state is garbage and has a lot of these non-trivial side effects that are bad. So the patch by David _fixes_ a non-trivial bad side effect. You can't then say "there may be other non-trivial side effects that I don't even know about" as an argument for saying it's bad. David at least has numbers and an argument for his patch. Linus ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming 2019-09-24 15:34 ` incoming Linus Torvalds @ 2019-09-25 6:36 ` Michal Hocko 0 siblings, 0 replies; 349+ messages in thread From: Michal Hocko @ 2019-09-25 6:36 UTC (permalink / raw) To: Linus Torvalds Cc: Andrew Morton, David Rientjes, Vlastimil Babka, Andrea Arcangeli, mm-commits, Linux-MM On Tue 24-09-19 08:34:20, Linus Torvalds wrote: > On Tue, Sep 24, 2019 at 12:48 AM Michal Hocko <mhocko@kernel.org> wrote: > > > > The patch proposed by David is really non trivial wrt. potential side > > effects. > > The thing is, that's not an argument when we know that the current > state is garbage and has a lot of these non-trivial side effects that > are bad. > > So the patch by David _fixes_ a non-trivial bad side effect. > > You can't then say "there may be other non-trivial side effects that I > don't even know about" as an argument for saying it's bad. David at > least has numbers and an argument for his patch. All I am saying is that I am not able to wrap my head around this patch to provide a competent Ack. I also believe that the fix is targeting the wrong layer of the problem, as explained in my review feedback. Apart from the reclaim/compaction interaction mentioned by Vlastimil, it seems that an overly eager fallback to a remote node in the fast path is causing a large part of the problem as well. Kcompactd is not eager enough to keep high-order allocations ready for the fast path. This is not specific to THP; we have many other high-order allocations which are going to follow the same pattern, likely not visible in any counters but still having performance implications. Let's discuss the technical details in the respective email thread. -- Michal Hocko SUSE Labs ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming 2019-09-24 7:48 ` incoming Michal Hocko 2019-09-24 15:34 ` incoming Linus Torvalds @ 2019-09-24 19:55 ` Vlastimil Babka 1 sibling, 0 replies; 349+ messages in thread From: Vlastimil Babka @ 2019-09-24 19:55 UTC (permalink / raw) To: Michal Hocko, Andrew Morton Cc: Linus Torvalds, David Rientjes, Andrea Arcangeli, mm-commits, Linux-MM On 9/24/19 9:48 AM, Michal Hocko wrote: > On Mon 23-09-19 21:31:53, Andrew Morton wrote: >> On Mon, 23 Sep 2019 17:55:24 -0700 Linus Torvalds >> <torvalds@linux-foundation.org> wrote: >> >>> On Mon, Sep 23, 2019 at 3:31 PM Andrew Morton >>> <akpm@linux-foundation.org> wrote: >>>> >>>> - almost all of -mm, as below. >>> >>> I was hoping that we could at least test the THP locality thing? >>> Is it in your queue at all, or am I supposed to just do it >>> myself? >>> >> >> Confused. I saw a privately emailed patch from David which nobody >> seems to have tested yet. I parked that for consideration after >> -rc1. Or are you referring to something else? >> >> This thing keeps stalling. It would be nice to push this along and >> get something nailed down which we can at least get into 5.4-rc, >> perhaps with a backport-this tag? > > The patch proposed by David is really non trivial wrt. potential > side effects. I have provided my review feedback [1] and it didn't > get any reaction. I really believe that we need to debug this > properly. A reproducer would be useful for others to work on that. > > There is a more fundamental problem here and we need to address it > rather than to duck tape it and whack a mole afterwards. I believe we found a problem when investigating over-reclaim in this thread [1] where it seems madvised THP allocation attempt can result in 4MB reclaimed, if there is a small zone such as ZONE_DMA on the node. 
As it happens, the patch "[patch 090/134] mm, reclaim: make should_continue_reclaim perform dryrun detection" in Andrew's pile should change this 4MB to 32 pages reclaimed (as a side-effect), but that has to be tested. I'm also working on a patch to not reclaim even those few pages. Of course there might be more fundamental issues with reclaim/compaction interaction, but this one seems to become hopefully clear now. [1] https://lore.kernel.org/linux-mm/4b4ba042-3741-7b16-2292-198c569da2aa@profihost.ag/ > [1] http://lkml.kernel.org/r/20190909193020.GD2063@dhcp22.suse.cz > ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2019-08-30 23:04 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2019-08-30 23:04 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm 7 fixes, based on 846d2db3e00048da3f650e0cfb0b8d67669cec3e: Roman Gushchin <guro@fb.com>: mm: memcontrol: flush percpu slab vmstats on kmem offlining Andrew Morton <akpm@linux-foundation.org>: mm/zsmalloc.c: fix build when CONFIG_COMPACTION=n Roman Gushchin <guro@fb.com>: mm, memcg: partially revert "mm/memcontrol.c: keep local VM counters in sync with the hierarchical ones" "Gustavo A. R. Silva" <gustavo@embeddedor.com>: mm/z3fold.c: fix lock/unlock imbalance in z3fold_page_isolate Dmitry Safonov <dima@arista.com>: mailmap: add aliases for Dmitry Safonov Michal Hocko <mhocko@suse.com>: mm, memcg: do not set reclaim_state on soft limit reclaim Shakeel Butt <shakeelb@google.com>: mm: memcontrol: fix percpu vmstats and vmevents flush .mailmap | 3 ++ include/linux/mmzone.h | 5 ++-- mm/memcontrol.c | 53 ++++++++++++++++++++++++++++++++----------------- mm/vmscan.c | 5 ++-- mm/z3fold.c | 1 mm/zsmalloc.c | 2 + 6 files changed, 47 insertions(+), 22 deletions(-) ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2019-08-25 0:54 Andrew Morton 0 siblings, 0 replies; 349+ messages in thread From: Andrew Morton @ 2019-08-25 0:54 UTC (permalink / raw) To: Linus Torvalds; +Cc: mm-commits, linux-mm 11 fixes, based on 361469211f876e67d7ca3d3d29e6d1c3e313d0f1: Henry Burns <henryburns@google.com>: mm/z3fold.c: fix race between migration and destruction David Rientjes <rientjes@google.com>: mm, page_alloc: move_freepages should not examine struct page of reserved memory Qian Cai <cai@lca.pw>: parisc: fix compilation errrors Roman Gushchin <guro@fb.com>: mm: memcontrol: flush percpu vmstats before releasing memcg mm: memcontrol: flush percpu vmevents before releasing memcg Jason Xing <kerneljasonxing@linux.alibaba.com>: psi: get poll_work to run when calling poll syscall next time Oleg Nesterov <oleg@redhat.com>: userfaultfd_release: always remove uffd flags and clear vm_userfaultfd_ctx Vlastimil Babka <vbabka@suse.cz>: mm, page_owner: handle THP splits correctly Henry Burns <henryburns@google.com>: mm/zsmalloc.c: migration can leave pages in ZS_EMPTY indefinitely mm/zsmalloc.c: fix race condition in zs_destroy_pool Andrey Ryabinin <aryabinin@virtuozzo.com>: mm/kasan: fix false positive invalid-free reports with CONFIG_KASAN_SW_TAGS=y ^ permalink raw reply [flat|nested] 349+ messages in thread
[parent not found: <20190716162536.bb52b8f34a8ecf5331a86a42@linux-foundation.org>]
* Re: incoming [not found] <20190716162536.bb52b8f34a8ecf5331a86a42@linux-foundation.org> @ 2019-07-17 8:47 ` Vlastimil Babka 2019-07-17 8:57 ` incoming Bhaskar Chowdhury 2019-07-17 16:13 ` incoming Linus Torvalds 0 siblings, 2 replies; 349+ messages in thread From: Vlastimil Babka @ 2019-07-17 8:47 UTC (permalink / raw) To: linux-kernel, Linus Torvalds Cc: linux-mm, Jonathan Corbet, Thorsten Leemhuis, LKML On 7/17/19 1:25 AM, Andrew Morton wrote: > > Most of the rest of MM and just about all of the rest of everything > else. Hi, as I've mentioned at LSF/MM [1], I think it would be nice if mm pull requests had summaries similar to other subsystems. I see they are now more structured (thanks!), but they are now probably hitting the limit of what scripting can do to produce a high-level summary for human readers (unless patch authors themselves provide a blurb that can be extracted later?). So I've tried now to provide an example what I had in mind, below. Maybe it's too concise - if there were "larger" features in this pull request, they would probably benefit from more details. I'm CCing the known (to me) consumers of these mails to judge :) Note I've only covered mm, and core stuff that I think will be interesting to wide audience (change in LIST_POISON2 value? I'm sure as hell glad to know about that one :) Feel free to include this in the merge commit, if you find it useful. 
Thanks, Vlastimil [1] https://lwn.net/Articles/787705/ ----- - z3fold fixes and enhancements by Henry Burns and Vitaly Wool - more accurate reclaimed slab caches calculations by Yafang Shao - fix MAP_UNINITIALIZED UAPI symbol to not depend on config, by Christoph Hellwig - !CONFIG_MMU fixes by Christoph Hellwig - new novmcoredd parameter to omit device dumps from vmcore, by Kairui Song - new test_meminit module for testing heap and pagealloc initialization, by Alexander Potapenko - ioremap improvements for huge mappings, by Anshuman Khandual - generalize kprobe page fault handling, by Anshuman Khandual - device-dax hotplug fixes and improvements, by Pavel Tatashin - enable synchronous DAX fault on powerpc, by Aneesh Kumar K.V - add pte_devmap() support for arm64, by Robin Murphy - unify locked_vm accounting with a helper, by Daniel Jordan - several misc fixes core/lib - new typeof_member() macro including some users, by Alexey Dobriyan - make BIT() and GENMASK() available in asm, by Masahiro Yamada - changed LIST_POISON2 on x86_64 to 0xdead000000000122 for better code generation, by Alexey Dobriyan - rbtree code size optimizations, by Michel Lespinasse - convert struct pid count to refcount_t, by Joel Fernandes get_maintainer.pl - add --no-moderated switch to skip moderated ML's, by Joe Perches ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming 2019-07-17 8:47 ` incoming Vlastimil Babka @ 2019-07-17 8:57 ` Bhaskar Chowdhury 0 siblings, 0 replies; 349+ messages in thread From: Bhaskar Chowdhury @ 2019-07-17 8:57 UTC (permalink / raw) To: Vlastimil Babka Cc: linux-kernel, Linus Torvalds, linux-mm, Jonathan Corbet, Thorsten Leemhuis Cool !! On 10:47 Wed 17 Jul , Vlastimil Babka wrote: >On 7/17/19 1:25 AM, Andrew Morton wrote: >> >> Most of the rest of MM and just about all of the rest of everything >> else. > >Hi, > >as I've mentioned at LSF/MM [1], I think it would be nice if mm pull >requests had summaries similar to other subsystems. I see they are now >more structured (thanks!), but they are now probably hitting the limit >of what scripting can do to produce a high-level summary for human >readers (unless patch authors themselves provide a blurb that can be >extracted later?). > >So I've tried now to provide an example what I had in mind, below. Maybe >it's too concise - if there were "larger" features in this pull request, >they would probably benefit from more details. I'm CCing the known (to >me) consumers of these mails to judge :) Note I've only covered mm, and >core stuff that I think will be interesting to wide audience (change in >LIST_POISON2 value? I'm sure as hell glad to know about that one :) > >Feel free to include this in the merge commit, if you find it useful. 
> >Thanks, >Vlastimil > >[1] https://lwn.net/Articles/787705/ > >----- > >- z3fold fixes and enhancements by Henry Burns and Vitaly Wool >- more accurate reclaimed slab caches calculations by Yafang Shao >- fix MAP_UNINITIALIZED UAPI symbol to not depend on config, by >Christoph Hellwig >- !CONFIG_MMU fixes by Christoph Hellwig >- new novmcoredd parameter to omit device dumps from vmcore, by Kairui Song >- new test_meminit module for testing heap and pagealloc initialization, >by Alexander Potapenko >- ioremap improvements for huge mappings, by Anshuman Khandual >- generalize kprobe page fault handling, by Anshuman Khandual >- device-dax hotplug fixes and improvements, by Pavel Tatashin >- enable synchronous DAX fault on powerpc, by Aneesh Kumar K.V >- add pte_devmap() support for arm64, by Robin Murphy >- unify locked_vm accounting with a helper, by Daniel Jordan >- several misc fixes > >core/lib >- new typeof_member() macro including some users, by Alexey Dobriyan >- make BIT() and GENMASK() available in asm, by Masahiro Yamada >- changed LIST_POISON2 on x86_64 to 0xdead000000000122 for better code >generation, by Alexey Dobriyan >- rbtree code size optimizations, by Michel Lespinasse >- convert struct pid count to refcount_t, by Joel Fernandes > >get_maintainer.pl >- add --no-moderated switch to skip moderated ML's, by Joe Perches > > ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming 2019-07-17 8:47 ` incoming Vlastimil Babka 2019-07-17 8:57 ` incoming Bhaskar Chowdhury @ 2019-07-17 16:13 ` Linus Torvalds 2019-07-17 17:09 ` incoming Christian Brauner 2019-07-17 18:13 ` incoming Vlastimil Babka 1 sibling, 2 replies; 349+ messages in thread From: Linus Torvalds @ 2019-07-17 16:13 UTC (permalink / raw) To: Vlastimil Babka Cc: Linux List Kernel Mailing, linux-mm, Jonathan Corbet, Thorsten Leemhuis On Wed, Jul 17, 2019 at 1:47 AM Vlastimil Babka <vbabka@suse.cz> wrote: > > So I've tried now to provide an example what I had in mind, below. I'll take it as a trial. I added one-line notes about coda and the PTRACE_GET_SYSCALL_INFO interface too. I do hope that eventually I'll just get pull requests, and they'll have more of a "theme" than this all (*) Linus (*) Although in many ways, the theme for Andrew is "falls through the cracks otherwise" so I'm not really complaining. This has been working for years and years. ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming 2019-07-17 16:13 ` incoming Linus Torvalds @ 2019-07-17 17:09 ` Christian Brauner 2019-07-17 18:13 ` incoming Vlastimil Babka 1 sibling, 0 replies; 349+ messages in thread From: Christian Brauner @ 2019-07-17 17:09 UTC (permalink / raw) To: Linus Torvalds Cc: Vlastimil Babka, Linux List Kernel Mailing, linux-mm, Jonathan Corbet, Thorsten Leemhuis On Wed, Jul 17, 2019 at 09:13:26AM -0700, Linus Torvalds wrote: > On Wed, Jul 17, 2019 at 1:47 AM Vlastimil Babka <vbabka@suse.cz> wrote: > > > > So I've tried now to provide an example what I had in mind, below. > > I'll take it as a trial. I added one-line notes about coda and the > PTRACE_GET_SYSCALL_INFO interface too. > > I do hope that eventually I'll just get pull requests, and they'll > have more of a "theme" than this all (*) > > Linus > > (*) Although in many ways, the theme for Andrew is "falls through the > cracks otherwise" so I'm not really complaining. This has been working I put all pid{fd}/clone{3} which is mostly related to pid.c, exit.c, fork.c into my tree and try to give it a consistent theme for the prs I sent. And that at least from my perspective that worked and was pretty easy to coordinate with Andrew. That should hopefully make it a little easier to theme the -mm tree overall going forward. ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming 2019-07-17 16:13 ` incoming Linus Torvalds 2019-07-17 17:09 ` incoming Christian Brauner @ 2019-07-17 18:13 ` Vlastimil Babka 1 sibling, 0 replies; 349+ messages in thread From: Vlastimil Babka @ 2019-07-17 18:13 UTC (permalink / raw) To: Linus Torvalds Cc: Linux List Kernel Mailing, linux-mm, Jonathan Corbet, Thorsten Leemhuis On 7/17/19 6:13 PM, Linus Torvalds wrote: > On Wed, Jul 17, 2019 at 1:47 AM Vlastimil Babka <vbabka@suse.cz> wrote: >> >> So I've tried now to provide an example what I had in mind, below. > > I'll take it as a trial. I added one-line notes about coda and the > PTRACE_GET_SYSCALL_INFO interface too. Thanks. > I do hope that eventually I'll just get pull requests, Very much agree, that was also discussed at length in the LSF/MM mm process session I've linked. > and they'll > have more of a "theme" than this all (*) I'll check if the first patch bomb would be more amenable to that, as I plan to fill in the mm part for 5.3 on LinuxChanges wiki, but for a merge commit it's too late. > Linus > > (*) Although in many ways, the theme for Andrew is "falls through the > cracks otherwise" so I'm not really complaining. This has been working > for years and years. Nevermind the misc stuff that much, but I think mm itself is more important and deserves what other subsystems have. ^ permalink raw reply [flat|nested] 349+ messages in thread
* incoming @ 2007-05-02 22:02 Andrew Morton 2007-05-02 22:31 ` incoming Benjamin Herrenschmidt ` (2 more replies) 0 siblings, 3 replies; 349+ messages in thread From: Andrew Morton @ 2007-05-02 22:02 UTC (permalink / raw) To: Linus Torvalds Cc: Hugh Dickins, Christoph Lameter, David S. Miller, Andi Kleen, Luck, Tony, Rik van Riel, Benjamin Herrenschmidt, linux-kernel, linux-mm So this is what I have lined up for the first mm->2.6.22 batch. I won't be sending it off for another 12-24 hours yet. To give people time for final comment and to give me time to see if it actually works. - A few serial bits. - A few pcmcia bits. - Some of the MM queue. Includes: - An enhancement to /proc/pid/smaps to permit monitoring of a running program's working set. There's another patchset which builds on this quite a lot from Matt Mackall, but it's not quite ready yet. - The SLUB allocator. It's pretty green but I do want to push ahead with this pretty aggressively with a view to replacing slab altogether. If it ends up not working out then we should remove slub altogether again, but I doubt if that will occur. If SLUB isn't in good shape by 2.6.22 we should hide it in Kconfig to prevent people from hitting known problems. It'll remain EXPERIMENTAL. - generic pagetable quicklist management. We have x86_64 and ia64 and sparc64 implementations, but I'll only include David's sparc64 implementation here. I'll send the x86_64 and ia64 implementations through maintainers. - Various random MM bits - Benh's teach-get_unmapped_area-about-MAP_FIXED changes - madvise(MADV_FREE) This means I'm holding back Mel's page allocator work, and Andy's lumpy-reclaim. A shame in a way - I have high hopes for lumpy reclaim against the moveable zone, but these things are not to be done lightly. A few MM things have been held back awaiting subsystem tree merges (probably x86 - I didn't check). 
- One little security patch - the blackfin architecture - small h8300 update - small alpha update - swsusp updates - m68k bits - cris udpates - Lots of UML updates - v850, xtensa slab-introduce-krealloc.patch at91_cf-minor-fix.patch add-new_id-to-pcmcia-drivers.patch ide-cs-recognize-2gb-compactflash-from-transcend.patch serial-driver-pmc-msp71xx.patch rm9000-serial-driver.patch serial-define-fixed_port-flag-for-serial_core.patch serial-use-resource_size_t-for-serial-port-io-addresses.patch mpsc-serial-driver-tx-locking.patch 8250_pci-fix-pci-must_checks.patch serial-serial_core-use-pr_debug.patch add-apply_to_page_range-which-applies-a-function-to-a-pte-range.patch safer-nr_node_ids-and-nr_node_ids-determination-and-initial.patch use-zvc-counters-to-establish-exact-size-of-dirtyable-pages.patch proper-prototype-for-hugetlb_get_unmapped_area.patch mm-remove-gcc-workaround.patch slab-ensure-cache_alloc_refill-terminates.patch mm-make-read_cache_page-synchronous.patch fs-buffer-dont-pageuptodate-without-page-locked.patch allow-oom_adj-of-saintly-processes.patch introduce-config_has_dma.patch mm-slabc-proper-prototypes.patch add-pfn_valid_within-helper-for-sub-max_order-hole-detection.patch mm-simplify-filemap_nopage.patch add-unitialized_var-macro-for-suppressing-gcc-warnings.patch i386-add-ptep_test_and_clear_dirtyyoung.patch i386-use-pte_update_defer-in-ptep_test_and_clear_dirtyyoung.patch smaps-extract-pmd-walker-from-smaps-code.patch smaps-add-pages-referenced-count-to-smaps.patch smaps-add-clear_refs-file-to-clear-reference.patch readahead-improve-heuristic-detecting-sequential-reads.patch readahead-code-cleanup.patch slab-use-num_possible_cpus-in-enable_cpucache.patch slab-dont-allocate-empty-shared-caches.patch slab-numa-kmem_cache-diet.patch do-not-disable-interrupts-when-reading-min_free_kbytes.patch slab-mark-set_up_list3s-__init.patch cpusets-allow-tif_memdie-threads-to-allocate-anywhere.patch i386-use-page-allocator-to-allocate-thread_info-structure.patch 
slub-core.patch make-page-private-usable-in-compound-pages-v1.patch optimize-compound_head-by-avoiding-a-shared-page.patch add-virt_to_head_page-and-consolidate-code-in-slab-and-slub.patch slub-fix-object-tracking.patch slub-enable-tracking-of-full-slabs.patch slub-validation-of-slabs-metadata-and-guard-zones.patch slub-add-min_partial.patch slub-add-ability-to-list-alloc--free-callers-per-slab.patch slub-free-slabs-and-sort-partial-slab-lists-in-kmem_cache_shrink.patch slub-remove-object-activities-out-of-checking-functions.patch slub-user-documentation.patch slub-add-slabinfo-tool.patch quicklists-for-page-table-pages.patch quicklist-support-for-sparc64.patch slob-handle-slab_panic-flag.patch include-kern_-constant-in-printk-calls-in-mm-slabc.patch mm-madvise-avoid-exclusive-mmap_sem.patch mm-remove-destroy_dirty_buffers-from-invalidate_bdev.patch mm-optimize-kill_bdev.patch mm-optimize-acorn-partition-truncate.patch slab-allocators-remove-obsolete-slab_must_hwcache_align.patch kmem_cache-simplify-slab-cache-creation.patch slab-allocators-remove-multiple-alignment-specifications.patch fault-injection-fix-failslab-with-config_numa.patch mm-fix-handling-of-panic_on_oom-when-cpusets-are-in-use.patch oom-fix-constraint-deadlock.patch get_unmapped_area-handles-map_fixed-on-powerpc.patch get_unmapped_area-handles-map_fixed-on-alpha.patch get_unmapped_area-handles-map_fixed-on-arm.patch get_unmapped_area-handles-map_fixed-on-frv.patch get_unmapped_area-handles-map_fixed-on-i386.patch get_unmapped_area-handles-map_fixed-on-ia64.patch get_unmapped_area-handles-map_fixed-on-parisc.patch get_unmapped_area-handles-map_fixed-on-sparc64.patch get_unmapped_area-handles-map_fixed-on-x86_64.patch get_unmapped_area-handles-map_fixed-in-hugetlbfs.patch get_unmapped_area-handles-map_fixed-in-generic-code.patch get_unmapped_area-doesnt-need-hugetlbfs-hacks-anymore.patch slab-allocators-remove-slab_debug_initial-flag.patch slab-allocators-remove-slab_ctor_atomic.patch 
slab-allocators-remove-useless-__gfp_no_grow-flag.patch lazy-freeing-of-memory-through-madv_free.patch restore-madv_dontneed-to-its-original-linux-behaviour.patch hugetlbfs-add-null-check-in-hugetlb_zero_setup.patch slob-fix-page-order-calculation-on-not-4kb-page.patch page-migration-only-migrate-pages-if-allocation-in-the-highest-zone-is-possible.patch return-eperm-not-echild-on-security_task_wait-failure.patch blackfin-arch.patch driver_bfin_serial_core.patch blackfin-on-chip-ethernet-mac-controller-driver.patch blackfin-patch-add-blackfin-support-in-smc91x.patch blackfin-on-chip-rtc-controller-driver.patch blackfin-blackfin-on-chip-spi-controller-driver.patch convert-h8-300-to-generic-timekeeping.patch h8300-generic-irq.patch h8300-add-zimage-support.patch round_up-macro-cleanup-in-arch-alpha-kernel-osf_sysc.patch alpha-fix-bootp-image-creation.patch alpha-prctl-macros.patch srmcons-fix-kmallocgfp_kernel-inside-spinlock.patch arm26-remove-useless-config-option-generic_bust_spinlock.patch fix-refrigerator-vs-thaw_process-race.patch swsusp-use-inline-functions-for-changing-page-flags.patch swsusp-do-not-use-page-flags.patch mm-remove-unused-page-flags.patch swsusp-fix-error-paths-in-snapshot_open.patch swsusp-use-gfp_kernel-for-creating-basic-data-structures.patch freezer-remove-pf_nofreeze-from-handle_initrd.patch swsusp-use-rbtree-for-tracking-allocated-swap.patch freezer-fix-racy-usage-of-try_to_freeze-in-kswapd.patch remove-software_suspend.patch power-management-change-sys-power-disk-display.patch kconfig-mentioneds-hibernation-not-just-swsusp.patch swsusp-fix-snapshot_release.patch swsusp-free-more-memory.patch remove-unused-header-file-arch-m68k-atari-atasoundh.patch spin_lock_unlocked-cleanup-in-arch-m68k.patch remove-unused-header-file-drivers-serial-crisv10h.patch cris-check-for-memory-allocation.patch cris-remove-code-related-to-pre-22-kernel.patch uml-delete-unused-code.patch uml-formatting-fixes.patch uml-host_info-tidying.patch 
uml-mark-tt-mode-code-for-future-removal.patch uml-print-coredump-limits.patch uml-handle-block-device-hotplug-errors.patch uml-driver-formatting-fixes.patch uml-driver-formatting-fixes-fix.patch uml-network-interface-hotplug-error-handling.patch array_size-check-for-type.patch uml-move-sigio-testing-to-sigioc.patch uml-create-archh.patch uml-create-as-layouth.patch uml-move-remaining-useful-contents-of-user_utilh.patch uml-remove-user_utilh.patch uml-add-missing-__init-declarations.patch remove-unused-header-file-arch-um-kernel-tt-include-mode_kern-tth.patch uml-improve-checking-and-diagnostics-of-ethernet-macs.patch uml-eliminate-temporary-buffer-in-eth_configure.patch uml-replace-one-element-array-with-zero-element-array.patch uml-fix-umid-in-xterm-titles.patch uml-speed-up-exec.patch uml-no-locking-needed-in-tlsc.patch uml-tidy-processc.patch uml-remove-page_size.patch uml-kernel_thread-shouldnt-panic.patch uml-tidy-fault-code.patch uml-kernel-segfaults-should-dump-proper-registers.patch uml-comment-early-boot-locking.patch uml-irq-locking-commentary.patch uml-delete-host_frame_size.patch uml-drivers-get-release-methods.patch uml-dump-registers-on-ptrace-or-wait-failure.patch uml-speed-up-page-table-walking.patch uml-remove-unused-x86_64-code.patch uml-start-fixing-os_read_file-and-os_write_file.patch uml-tidy-libc-code.patch uml-convert-libc-layer-to-call-read-and-write.patch uml-batch-i-o-requests.patch uml-send-pointers-instead-of-structures-to-i-o-thread.patch uml-send-pointers-instead-of-structures-to-i-o-thread-fix.patch uml-dump-core-on-panic.patch uml-dont-try-to-handle-signals-on-initial-process-stack.patch uml-change-remaining-callers-of-os_read_write_file.patch uml-formatting-fixes-around-os_read_write_file-callers.patch uml-remove-debugging-remnants.patch uml-rename-os_read_write_file_k-back-to-os_read_write_file.patch uml-aio-deadlock-avoidance.patch uml-speed-page-fault-path.patch uml-eliminate-a-piece-of-debugging-code.patch 
uml-more-page-fault-path-trimming.patch uml-only-flush-areas-covered-by-vma.patch uml-out-of-tmpfs-space-error-clarification.patch uml-virtualized-time-fix.patch uml-fix-prototypes.patch v850-generic-timekeeping-conversion.patch xtensa-strlcpy-is-smart-enough.patch -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a> ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming 2007-05-02 22:02 incoming Andrew Morton @ 2007-05-02 22:31 ` Benjamin Herrenschmidt 2007-05-03 7:55 ` incoming Russell King 2007-05-04 13:37 ` incoming Greg KH 2 siblings, 0 replies; 349+ messages in thread From: Benjamin Herrenschmidt @ 2007-05-02 22:31 UTC (permalink / raw) To: Andrew Morton Cc: Linus Torvalds, Hugh Dickins, Christoph Lameter, David S. Miller, Andi Kleen, Luck, Tony, Rik van Riel, linux-kernel, linux-mm On Wed, 2007-05-02 at 15:02 -0700, Andrew Morton wrote: > So this is what I have lined up for the first mm->2.6.22 batch. I won't be > sending it off for another 12-24 hours yet. To give people time for final > comment and to give me time to see if it actually works. Thanks. I have some powerpc bits that depend on that stuff that will go through Paulus after these show up in git and I've rebased. Cheers, Ben. ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming 2007-05-02 22:02 incoming Andrew Morton 2007-05-02 22:31 ` incoming Benjamin Herrenschmidt @ 2007-05-03 7:55 ` Russell King 2007-05-03 8:05 ` incoming Andrew Morton 2007-05-04 13:37 ` incoming Greg KH 2 siblings, 1 reply; 349+ messages in thread From: Russell King @ 2007-05-03 7:55 UTC (permalink / raw) To: Andrew Morton Cc: Linus Torvalds, Hugh Dickins, Christoph Lameter, David S. Miller, Andi Kleen, Luck, Tony, Rik van Riel, Benjamin Herrenschmidt, linux-kernel, linux-mm On Wed, May 02, 2007 at 03:02:52PM -0700, Andrew Morton wrote: > So this is what I have lined up for the first mm->2.6.22 batch. I won't be > sending it off for another 12-24 hours yet. To give people time for final > comment and to give me time to see if it actually works. I assume you're going to update this list with my comments I sent yesterday? -- Russell King Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/ maintainer of: ^ permalink raw reply [flat|nested] 349+ messages in thread
* Re: incoming

From: Andrew Morton
Date: 2007-05-03 8:05 UTC
To: Russell King
Cc: Linus Torvalds, Hugh Dickins, Christoph Lameter, David S. Miller,
    Andi Kleen, Luck, Tony, Rik van Riel, Benjamin Herrenschmidt,
    linux-kernel, linux-mm

On Thu, 3 May 2007 08:55:43 +0100 Russell King <rmk+lkml@arm.linux.org.uk> wrote:
> On Wed, May 02, 2007 at 03:02:52PM -0700, Andrew Morton wrote:
> > So this is what I have lined up for the first mm->2.6.22 batch.  I won't be
> > sending it off for another 12-24 hours yet.  To give people time for final
> > comment and to give me time to see if it actually works.
>
> I assume you're going to update this list with my comments I sent
> yesterday?

Serial drivers?  Well you saw me drop a bunch of them.  I now have:

serial-driver-pmc-msp71xx.patch
rm9000-serial-driver.patch
serial-define-fixed_port-flag-for-serial_core.patch
mpsc-serial-driver-tx-locking.patch
serial-serial_core-use-pr_debug.patch

I'll also be holding off on MADV_FREE - Nick has some performance things
to share and I'm assuming they're not as good as he'd like.
* Re: incoming

From: Greg KH
Date: 2007-05-04 13:37 UTC
To: Andrew Morton
Cc: Linus Torvalds, Hugh Dickins, Christoph Lameter, David S. Miller,
    Andi Kleen, Luck, Tony, Rik van Riel, Benjamin Herrenschmidt,
    linux-kernel, linux-mm

On Wed, May 02, 2007 at 03:02:52PM -0700, Andrew Morton wrote:
> - One little security patch

Care to cc: linux-stable with it so we can do a new 2.6.21 release with
it if needed?

thanks,

greg k-h
* Re: incoming

From: Andrew Morton
Date: 2007-05-04 16:14 UTC
To: Greg KH
Cc: Linus Torvalds, Hugh Dickins, Christoph Lameter, David S. Miller,
    Andi Kleen, Luck, Tony, Rik van Riel, Benjamin Herrenschmidt,
    linux-kernel, linux-mm, Roland McGrath, Stephen Smalley

On Fri, 4 May 2007 06:37:28 -0700 Greg KH <greg@kroah.com> wrote:
> On Wed, May 02, 2007 at 03:02:52PM -0700, Andrew Morton wrote:
> > - One little security patch
>
> Care to cc: linux-stable with it so we can do a new 2.6.21 release with
> it if needed?

Ah.  The patch affects security code, but it doesn't actually address any
insecurity.  I didn't think it was needed for -stable?

From: Roland McGrath <roland@redhat.com>

The wait* syscalls return -ECHILD even when an individual PID of a live
child was requested explicitly, when security_task_wait denies the
operation.  This means that something like a broken SELinux policy can
produce an unexpected failure that looks just like a bug with wait or
ptrace or something.

This patch makes do_wait return -EACCES (or another appropriate error
returned from security_task_wait()) instead of -ECHILD if some children
were ruled out solely because security_task_wait failed.
[jmorris@namei.org: switch error code to EACCES]
Signed-off-by: Roland McGrath <roland@redhat.com>
Cc: Stephen Smalley <sds@tycho.nsa.gov>
Cc: Chris Wright <chrisw@sous-sol.org>
Cc: James Morris <jmorris@namei.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 kernel/exit.c |   17 +++++++++++++++--
 1 files changed, 15 insertions(+), 2 deletions(-)

diff -puN kernel/exit.c~return-eperm-not-echild-on-security_task_wait-failure kernel/exit.c
--- a/kernel/exit.c~return-eperm-not-echild-on-security_task_wait-failure
+++ a/kernel/exit.c
@@ -1033,6 +1033,8 @@ asmlinkage void sys_exit_group(int error
 
 static int eligible_child(pid_t pid, int options, struct task_struct *p)
 {
+	int err;
+
 	if (pid > 0) {
 		if (p->pid != pid)
 			return 0;
@@ -1066,8 +1068,9 @@ static int eligible_child(pid_t pid, int
 	if (delay_group_leader(p))
 		return 2;
 
-	if (security_task_wait(p))
-		return 0;
+	err = security_task_wait(p);
+	if (err)
+		return err;
 
 	return 1;
 }
@@ -1449,6 +1452,7 @@ static long do_wait(pid_t pid, int optio
 	DECLARE_WAITQUEUE(wait, current);
 	struct task_struct *tsk;
 	int flag, retval;
+	int allowed, denied;
 
 	add_wait_queue(&current->signal->wait_chldexit,&wait);
repeat:
@@ -1457,6 +1461,7 @@ repeat:
 	 * match our criteria, even if we are not able to reap it yet.
 	 */
 	flag = 0;
+	allowed = denied = 0;
 	current->state = TASK_INTERRUPTIBLE;
 	read_lock(&tasklist_lock);
 	tsk = current;
@@ -1472,6 +1477,12 @@ repeat:
 			if (!ret)
 				continue;
 
+			if (unlikely(ret < 0)) {
+				denied = ret;
+				continue;
+			}
+			allowed = 1;
+
 			switch (p->state) {
 			case TASK_TRACED:
 				/*
@@ -1570,6 +1581,8 @@ check_continued:
 		goto repeat;
 	}
 	retval = -ECHILD;
+	if (unlikely(denied) && !allowed)
+		retval = denied;
end:
 	current->state = TASK_RUNNING;
 	remove_wait_queue(&current->signal->wait_chldexit,&wait);
_
* Re: incoming

From: Greg KH
Date: 2007-05-04 17:02 UTC
To: Andrew Morton
Cc: Linus Torvalds, Hugh Dickins, Christoph Lameter, David S. Miller,
    Andi Kleen, Luck, Tony, Rik van Riel, Benjamin Herrenschmidt,
    linux-kernel, linux-mm, Roland McGrath, Stephen Smalley

On Fri, May 04, 2007 at 09:14:34AM -0700, Andrew Morton wrote:
> On Fri, 4 May 2007 06:37:28 -0700 Greg KH <greg@kroah.com> wrote:
> > On Wed, May 02, 2007 at 03:02:52PM -0700, Andrew Morton wrote:
> > > - One little security patch
> >
> > Care to cc: linux-stable with it so we can do a new 2.6.21 release with
> > it if needed?
>
> Ah.  The patch affects security code, but it doesn't actually address any
> insecurity.  I didn't think it was needed for -stable?

Ah, ok, I read "security" as fixing an insecure problem, my mistake :)

thanks,

greg k-h
* Re: incoming

From: Roland McGrath
Date: 2007-05-04 18:57 UTC
To: Andrew Morton
Cc: Greg KH, Linus Torvalds, Hugh Dickins, Christoph Lameter,
    David S. Miller, Andi Kleen, Luck, Tony, Rik van Riel,
    Benjamin Herrenschmidt, linux-kernel, linux-mm, Stephen Smalley

> Ah.  The patch affects security code, but it doesn't actually address any
> insecurity.  I didn't think it was needed for -stable?

I would not recommend it for -stable.
It is an ABI change for the case of a security refusal.

Thanks,
Roland
* Re: incoming

From: Greg KH
Date: 2007-05-04 19:24 UTC
To: Roland McGrath
Cc: Andrew Morton, Linus Torvalds, Hugh Dickins, Christoph Lameter,
    David S. Miller, Andi Kleen, Luck, Tony, Rik van Riel,
    Benjamin Herrenschmidt, linux-kernel, linux-mm, Stephen Smalley

On Fri, May 04, 2007 at 11:57:21AM -0700, Roland McGrath wrote:
> > Ah.  The patch affects security code, but it doesn't actually address any
> > insecurity.  I didn't think it was needed for -stable?
>
> I would not recommend it for -stable.
> It is an ABI change for the case of a security refusal.

ABI changes are not a problem for -stable, so don't let that stop anyone :)

thanks,

greg k-h
* Re: incoming

From: Roland McGrath
Date: 2007-05-04 19:29 UTC
To: Greg KH
Cc: Andrew Morton, Linus Torvalds, Hugh Dickins, Christoph Lameter,
    David S. Miller, Andi Kleen, Luck, Tony, Rik van Riel,
    Benjamin Herrenschmidt, linux-kernel, linux-mm, Stephen Smalley

> ABI changes are not a problem for -stable, so don't let that stop anyone
> :)

In fact this is the harmless sort (changes only the error code of a
failure case) that might actually go in if there were any important
reason.  But the smiley stands.

Thanks,
Roland