[PATCH AUTOSEL 5.0 56/99] slab: store tagged freelist for off-slab slabmgmt
From: Sasha Levin @ 2019-05-07 5:31 UTC
To: linux-kernel, stable
Cc: Qian Cai, Andrey Konovalov, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Andrey Ryabinin,
Alexander Potapenko, Dmitry Vyukov, Andrew Morton,
Linus Torvalds, Sasha Levin, linux-mm
From: Qian Cai <cai@lca.pw>
[ Upstream commit 1a62b18d51e5c5ecc0345c85bb9fef870ab721ed ]
Commit 51dedad06b5f ("kasan, slab: make freelist stored without tags")
calls kasan_reset_tag() for the off-slab slab management object, so the
freelist pointer is stored untagged.
However, cache_grow_begin() calls alloc_slabmgmt(), which calls
kmem_cache_alloc_node(), which assigns a tag to the address and records
it in shadow memory. As a result, boot produces the endless errors
below, because drain_freelist() -> slab_destroy() -> kasan_slab_free()
compares the already-untagged freelist pointer against the tag stored
in shadow memory.
Since the off-slab slab management object's freelist is such a special
case, just store it tagged. An on-slab management object's freelist is
still stored untagged; it was never assigned a tag in the first place,
so the inconsistency causes no further trouble.
BUG: KASAN: double-free or invalid-free in slab_destroy+0x84/0x88
Pointer tag: [ff], memory tag: [99]
CPU: 0 PID: 1376 Comm: kworker/0:4 Tainted: G W 5.1.0-rc3+ #8
Hardware name: HPE Apollo 70 /C01_APACHE_MB , BIOS L50_5.13_1.0.6 07/10/2018
Workqueue: cgroup_destroy css_killed_work_fn
Call trace:
print_address_description+0x74/0x2a4
kasan_report_invalid_free+0x80/0xc0
__kasan_slab_free+0x204/0x208
kasan_slab_free+0xc/0x18
kmem_cache_free+0xe4/0x254
slab_destroy+0x84/0x88
drain_freelist+0xd0/0x104
__kmem_cache_shrink+0x1ac/0x224
__kmemcg_cache_deactivate+0x1c/0x28
memcg_deactivate_kmem_caches+0xa0/0xe8
memcg_offline_kmem+0x8c/0x3d4
mem_cgroup_css_offline+0x24c/0x290
css_killed_work_fn+0x154/0x618
process_one_work+0x9cc/0x183c
worker_thread+0x9b0/0xe38
kthread+0x374/0x390
ret_from_fork+0x10/0x18
Allocated by task 1625:
__kasan_kmalloc+0x168/0x240
kasan_slab_alloc+0x18/0x20
kmem_cache_alloc_node+0x1f8/0x3a0
cache_grow_begin+0x4fc/0xa24
cache_alloc_refill+0x2f8/0x3e8
kmem_cache_alloc+0x1bc/0x3bc
sock_alloc_inode+0x58/0x334
alloc_inode+0xb8/0x164
new_inode_pseudo+0x20/0xec
sock_alloc+0x74/0x284
__sock_create+0xb0/0x58c
sock_create+0x98/0xb8
__sys_socket+0x60/0x138
__arm64_sys_socket+0xa4/0x110
el0_svc_handler+0x2c0/0x47c
el0_svc+0x8/0xc
Freed by task 1625:
__kasan_slab_free+0x114/0x208
kasan_slab_free+0xc/0x18
kfree+0x1a8/0x1e0
single_release+0x7c/0x9c
close_pdeo+0x13c/0x43c
proc_reg_release+0xec/0x108
__fput+0x2f8/0x784
____fput+0x1c/0x28
task_work_run+0xc0/0x1b0
do_notify_resume+0xb44/0x1278
work_pending+0x8/0x10
The buggy address belongs to the object at ffff809681b89e00
which belongs to the cache kmalloc-128 of size 128
The buggy address is located 0 bytes inside of
128-byte region [ffff809681b89e00, ffff809681b89e80)
The buggy address belongs to the page:
page:ffff7fe025a06e00 count:1 mapcount:0 mapping:01ff80082000fb00
index:0xffff809681b8fe04
flags: 0x17ffffffc000200(slab)
raw: 017ffffffc000200 ffff7fe025a06d08 ffff7fe022ef7b88 01ff80082000fb00
raw: ffff809681b8fe04 ffff809681b80000 00000001000000e0 0000000000000000
page dumped because: kasan: bad access detected
page allocated via order 0, migratetype Unmovable, gfp_mask
0x2420c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_COMP|__GFP_THISNODE)
prep_new_page+0x4e0/0x5e0
get_page_from_freelist+0x4ce8/0x50d4
__alloc_pages_nodemask+0x738/0x38b8
cache_grow_begin+0xd8/0xa24
____cache_alloc_node+0x14c/0x268
__kmalloc+0x1c8/0x3fc
ftrace_free_mem+0x408/0x1284
ftrace_free_init_mem+0x20/0x28
kernel_init+0x24/0x548
ret_from_fork+0x10/0x18
Memory state around the buggy address:
ffff809681b89c00: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
ffff809681b89d00: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
>ffff809681b89e00: 99 99 99 99 99 99 99 99 fe fe fe fe fe fe fe fe
^
ffff809681b89f00: 43 43 43 43 43 fe fe fe fe fe fe fe fe fe fe fe
ffff809681b8a000: 6d fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
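
For readers unfamiliar with tag-based KASAN, the mismatch in the report
above reduces to the compilable userspace sketch below. The top-byte tag
layout matches arm64 software tag-based KASAN, but the helper names and
the single shadow byte are illustrative assumptions, not kernel code.

#include <stdint.h>
#include <stdio.h>

/* The pointer tag lives in the top byte on arm64 (TBI). */
#define TAG_SHIFT 56
#define TAG_MASK  (0xffULL << TAG_SHIFT)

static uint64_t set_tag(uint64_t addr, uint8_t tag)
{
        return (addr & ~TAG_MASK) | ((uint64_t)tag << TAG_SHIFT);
}

static uint8_t get_tag(uint64_t addr)
{
        return (uint8_t)(addr >> TAG_SHIFT);
}

int main(void)
{
        uint8_t shadow_tag = 0x99;      /* tag recorded in shadow at allocation */
        uint64_t freelist = set_tag(0x809681b89e00ULL, shadow_tag);

        /* The reverted line: reset the pointer to the match-all tag 0xff. */
        freelist = set_tag(freelist, 0xff);

        /* The free-path check: pointer tag must equal the shadow tag. */
        if (get_tag(freelist) != shadow_tag)
                printf("invalid-free: pointer tag [%02x], memory tag [%02x]\n",
                       get_tag(freelist), shadow_tag);
        return 0;
}

Resetting the freelist pointer's tag before storing it is what made this
check fire on a perfectly valid free, matching the "Pointer tag: [ff],
memory tag: [99]" line in the report.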
Link: http://lkml.kernel.org/r/20190403022858.97584-1-cai@lca.pw
Fixes: 51dedad06b5f ("kasan, slab: make freelist stored without tags")
Signed-off-by: Qian Cai <cai@lca.pw>
Reviewed-by: Andrey Konovalov <andreyknvl@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
mm/slab.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/mm/slab.c b/mm/slab.c
index 2f2aa8eaf7d9..516df2d854ef 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2371,7 +2371,6 @@ static void *alloc_slabmgmt(struct kmem_cache *cachep,
/* Slab management obj is off-slab. */
freelist = kmem_cache_alloc_node(cachep->freelist_cache,
local_flags, nodeid);
- freelist = kasan_reset_tag(freelist);
if (!freelist)
return NULL;
} else {
--
2.20.1

[PATCH AUTOSEL 5.0 57/99] mm/hotplug: treat CMA pages as unmovable
From: Sasha Levin @ 2019-05-07 5:31 UTC
To: linux-kernel, stable
Cc: Qian Cai, Michal Hocko, Vlastimil Babka, Oscar Salvador,
Andrew Morton, Linus Torvalds, Sasha Levin, linux-mm
From: Qian Cai <cai@lca.pw>
[ Upstream commit 1a9f219157b22d0ffb340a9c5f431afd02cd2cf3 ]
has_unmovable_pages() is used when allocating CMA and gigantic pages
as well as by memory hotplug. The latter does not yet know how to
offline a CMA pool properly, but if an unused (free) CMA page is
encountered, has_unmovable_pages() happily considers it free memory and
propagates this up the call chain. Memory offlining code then frees the
page without a proper CMA teardown, which leads to accounting issues.
Moreover, if the same memory range is onlined again, the memory never
gets back to the CMA pool.
State after memory offline:
# grep cma /proc/vmstat
nr_free_cma 205824
# cat /sys/kernel/debug/cma/cma-kvm_cma/count
209920
Also, kmemleak still thinks those memory addresses are reserved (see
below) even though the buddy allocator has already reused them after
onlining. This patch fixes the situation by treating CMA pageblocks as
unmovable except when has_unmovable_pages() is called as part of a CMA
allocation.
Offlined Pages 4096
kmemleak: Cannot insert 0xc000201f7d040008 into the object search tree (overlaps existing)
Call Trace:
dump_stack+0xb0/0xf4 (unreliable)
create_object+0x344/0x380
__kmalloc_node+0x3ec/0x860
kvmalloc_node+0x58/0x110
seq_read+0x41c/0x620
__vfs_read+0x3c/0x70
vfs_read+0xbc/0x1a0
ksys_read+0x7c/0x140
system_call+0x5c/0x70
kmemleak: Kernel memory leak detector disabled
kmemleak: Object 0xc000201cc8000000 (size 13757317120):
kmemleak: comm "swapper/0", pid 0, jiffies 4294937297
kmemleak: min_count = -1
kmemleak: count = 0
kmemleak: flags = 0x5
kmemleak: checksum = 0
kmemleak: backtrace:
cma_declare_contiguous+0x2a4/0x3b0
kvm_cma_reserve+0x11c/0x134
setup_arch+0x300/0x3f8
start_kernel+0x9c/0x6e8
start_here_common+0x1c/0x4b0
kmemleak: Automatic memory scanning thread ended
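
The control flow of the fix reduces to the sketch below, with the
kernel's migratetype machinery collapsed into an enum; the function
name and reduced signature are stand-ins for illustration, not the
kernel API.

#include <stdbool.h>

enum migratetype { MIGRATE_MOVABLE, MIGRATE_CMA, MIGRATE_UNMOVABLE };

/*
 * A CMA pageblock counts as movable only when the caller is the CMA
 * allocator itself (alloc_contig_range); for memory hotplug it is now
 * reported as unmovable instead of being scanned page by page.
 */
static bool cma_block_is_unmovable(enum migratetype block_mt,
                                   enum migratetype caller_mt)
{
        if (block_mt != MIGRATE_CMA)
                return false;   /* not a CMA block: fall through to the scan */
        return caller_mt != MIGRATE_CMA;
}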
[cai@lca.pw: use is_migrate_cma_page() and update commit log]
Link: http://lkml.kernel.org/r/20190416170510.20048-1-cai@lca.pw
Link: http://lkml.kernel.org/r/20190413002623.8967-1-cai@lca.pw
Signed-off-by: Qian Cai <cai@lca.pw>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
mm/page_alloc.c | 30 ++++++++++++++++++------------
1 file changed, 18 insertions(+), 12 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 318ef6ccdb3b..eedb57f9b40b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7945,7 +7945,10 @@ void *__init alloc_large_system_hash(const char *tablename,
bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
int migratetype, int flags)
{
- unsigned long pfn, iter, found;
+ unsigned long found;
+ unsigned long iter = 0;
+ unsigned long pfn = page_to_pfn(page);
+ const char *reason = "unmovable page";
/*
* TODO we could make this much more efficient by not checking every
@@ -7955,17 +7958,20 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
* can still lead to having bootmem allocations in zone_movable.
*/
- /*
- * CMA allocations (alloc_contig_range) really need to mark isolate
- * CMA pageblocks even when they are not movable in fact so consider
- * them movable here.
- */
- if (is_migrate_cma(migratetype) &&
- is_migrate_cma(get_pageblock_migratetype(page)))
- return false;
+ if (is_migrate_cma_page(page)) {
+ /*
+ * CMA allocations (alloc_contig_range) really need to mark
+ * isolate CMA pageblocks even when they are not movable in fact
+ * so consider them movable here.
+ */
+ if (is_migrate_cma(migratetype))
+ return false;
+
+ reason = "CMA page";
+ goto unmovable;
+ }
- pfn = page_to_pfn(page);
- for (found = 0, iter = 0; iter < pageblock_nr_pages; iter++) {
+ for (found = 0; iter < pageblock_nr_pages; iter++) {
unsigned long check = pfn + iter;
if (!pfn_valid_within(check))
@@ -8045,7 +8051,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
unmovable:
WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
if (flags & REPORT_FAILURE)
- dump_page(pfn_to_page(pfn+iter), "unmovable page");
+ dump_page(pfn_to_page(pfn + iter), reason);
return true;
}
--
2.20.1

[PATCH AUTOSEL 5.0 58/99] mm: fix inactive list balancing between NUMA nodes and cgroups
From: Sasha Levin @ 2019-05-07 5:31 UTC
To: linux-kernel, stable
Cc: Johannes Weiner, Shakeel Butt, Roman Gushchin, Michal Hocko,
Andrew Morton, Linus Torvalds, Sasha Levin, linux-mm
From: Johannes Weiner <hannes@cmpxchg.org>
[ Upstream commit 3b991208b897f52507168374033771a984b947b1 ]
During !CONFIG_CGROUP reclaim, we expand the inactive list size if it's
thrashing on the node that is about to be reclaimed. But when cgroups
are enabled, we suddenly ignore the node scope and use the cgroup scope
only. The result is that pressure bleeds between NUMA nodes depending
on whether cgroups are merely compiled into Linux. This behavioral
difference is unexpected and undesirable.
When the refault adaptivity of the inactive list was first introduced,
there were no statistics at the lruvec level - the intersection of node
and memcg - so it was better than nothing.
But now that we have that infrastructure, use lruvec_page_state() to
make the list balancing decision always NUMA aware.
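
The gist of the change, in a sketch with illustrative stand-in types
(the real lruvec statistics plumbing is more involved than this):

/*
 * Before the patch, the refault counter came from either the memcg or
 * the node depending on whether cgroups were enabled. Afterwards it
 * always comes from the lruvec, which is scoped to both at once.
 * Field names here are illustrative.
 */
struct lruvec_sketch {
        unsigned long workingset_activate;      /* per node *and* per memcg */
        unsigned long refaults;                 /* snapshot from last reclaim */
};

/* New refaults since the last snapshot mean a new workingset is
 * forming, so active list protection should be dropped. */
static int drop_active_protection(const struct lruvec_sketch *lruvec)
{
        return lruvec->workingset_activate != lruvec->refaults;
}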
[hannes@cmpxchg.org: fix bisection hole]
Link: http://lkml.kernel.org/r/20190417155241.GB23013@cmpxchg.org
Link: http://lkml.kernel.org/r/20190412144438.2645-1-hannes@cmpxchg.org
Fixes: 2a2e48854d70 ("mm: vmscan: fix IO/refault regression in cache workingset transition")
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
mm/vmscan.c | 29 +++++++++--------------------
1 file changed, 9 insertions(+), 20 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index e979705bbf32..022afabac3f6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2199,7 +2199,6 @@ static void shrink_active_list(unsigned long nr_to_scan,
* 10TB 320 32GB
*/
static bool inactive_list_is_low(struct lruvec *lruvec, bool file,
- struct mem_cgroup *memcg,
struct scan_control *sc, bool actual_reclaim)
{
enum lru_list active_lru = file * LRU_FILE + LRU_ACTIVE;
@@ -2220,16 +2219,12 @@ static bool inactive_list_is_low(struct lruvec *lruvec, bool file,
inactive = lruvec_lru_size(lruvec, inactive_lru, sc->reclaim_idx);
active = lruvec_lru_size(lruvec, active_lru, sc->reclaim_idx);
- if (memcg)
- refaults = memcg_page_state(memcg, WORKINGSET_ACTIVATE);
- else
- refaults = node_page_state(pgdat, WORKINGSET_ACTIVATE);
-
/*
* When refaults are being observed, it means a new workingset
* is being established. Disable active list protection to get
* rid of the stale workingset quickly.
*/
+ refaults = lruvec_page_state(lruvec, WORKINGSET_ACTIVATE);
if (file && actual_reclaim && lruvec->refaults != refaults) {
inactive_ratio = 0;
} else {
@@ -2250,12 +2245,10 @@ static bool inactive_list_is_low(struct lruvec *lruvec, bool file,
}
static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
- struct lruvec *lruvec, struct mem_cgroup *memcg,
- struct scan_control *sc)
+ struct lruvec *lruvec, struct scan_control *sc)
{
if (is_active_lru(lru)) {
- if (inactive_list_is_low(lruvec, is_file_lru(lru),
- memcg, sc, true))
+ if (inactive_list_is_low(lruvec, is_file_lru(lru), sc, true))
shrink_active_list(nr_to_scan, lruvec, sc, lru);
return 0;
}
@@ -2355,7 +2348,7 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
* anonymous pages on the LRU in eligible zones.
* Otherwise, the small LRU gets thrashed.
*/
- if (!inactive_list_is_low(lruvec, false, memcg, sc, false) &&
+ if (!inactive_list_is_low(lruvec, false, sc, false) &&
lruvec_lru_size(lruvec, LRU_INACTIVE_ANON, sc->reclaim_idx)
>> sc->priority) {
scan_balance = SCAN_ANON;
@@ -2373,7 +2366,7 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
* lruvec even if it has plenty of old anonymous pages unless the
* system is under heavy pressure.
*/
- if (!inactive_list_is_low(lruvec, true, memcg, sc, false) &&
+ if (!inactive_list_is_low(lruvec, true, sc, false) &&
lruvec_lru_size(lruvec, LRU_INACTIVE_FILE, sc->reclaim_idx) >> sc->priority) {
scan_balance = SCAN_FILE;
goto out;
@@ -2526,7 +2519,7 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
nr[lru] -= nr_to_scan;
nr_reclaimed += shrink_list(lru, nr_to_scan,
- lruvec, memcg, sc);
+ lruvec, sc);
}
}
@@ -2593,7 +2586,7 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
* Even if we did not try to evict anon pages at all, we want to
* rebalance the anon lru active/inactive ratio.
*/
- if (inactive_list_is_low(lruvec, false, memcg, sc, true))
+ if (inactive_list_is_low(lruvec, false, sc, true))
shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
sc, LRU_ACTIVE_ANON);
}
@@ -2993,12 +2986,8 @@ static void snapshot_refaults(struct mem_cgroup *root_memcg, pg_data_t *pgdat)
unsigned long refaults;
struct lruvec *lruvec;
- if (memcg)
- refaults = memcg_page_state(memcg, WORKINGSET_ACTIVATE);
- else
- refaults = node_page_state(pgdat, WORKINGSET_ACTIVATE);
-
lruvec = mem_cgroup_lruvec(pgdat, memcg);
+ refaults = lruvec_page_state(lruvec, WORKINGSET_ACTIVATE);
lruvec->refaults = refaults;
} while ((memcg = mem_cgroup_iter(root_memcg, memcg, NULL)));
}
@@ -3363,7 +3352,7 @@ static void age_active_anon(struct pglist_data *pgdat,
do {
struct lruvec *lruvec = mem_cgroup_lruvec(pgdat, memcg);
- if (inactive_list_is_low(lruvec, false, memcg, sc, true))
+ if (inactive_list_is_low(lruvec, false, sc, true))
shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
sc, LRU_ACTIVE_ANON);
--
2.20.1

[PATCH AUTOSEL 5.0 94/99] mm/memory_hotplug.c: drop memory device reference after find_memory_block()
From: Sasha Levin @ 2019-05-07 5:32 UTC
To: linux-kernel, stable
Cc: David Hildenbrand, Oscar Salvador, Wei Yang, Michal Hocko,
Pankaj Gupta, Pavel Tatashin, Qian Cai, Arun KS,
Mathieu Malaterre, Andrew Morton, Linus Torvalds, Sasha Levin,
linux-mm
From: David Hildenbrand <david@redhat.com>
[ Upstream commit 89c02e69fc5245f8a2f34b58b42d43a737af1a5e ]
Right now we are using find_memory_block() to get the node id for the
pfn range to online, but we fail to drop the reference to the memory
block device that it takes. While the device still gets unregistered
via device_unregister(), resulting in no user-visible problem, it is
never released via device_release(), resulting in a memory leak. Fix
that by adding the missing put_device().
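
The underlying rule is the usual driver-model refcounting contract: a
lookup helper returns its result with an elevated reference that the
caller must drop. Below is a compilable userspace model of the leak and
the fix; every name in it is invented for illustration.

#include <stdio.h>

struct memory_block_sketch {
        int refcount;
        int nid;
};

static struct memory_block_sketch *find_block(struct memory_block_sketch *b)
{
        b->refcount++;                  /* the lookup takes a reference... */
        return b;
}

static void put_block(struct memory_block_sketch *b)
{
        if (--b->refcount == 0)         /* ...release runs only when dropped */
                printf("block released\n");
}

int main(void)
{
        struct memory_block_sketch blk = { .refcount = 1, .nid = 0 };
        struct memory_block_sketch *mem = find_block(&blk);
        int nid = mem->nid;             /* what online_pages() needs the block for */

        put_block(mem);                 /* the put_device() this patch adds, in effect */
        put_block(&blk);                /* drop the initial reference as well */
        return nid;
}

Without the first put_block(), the count never reaches zero and the
release path never runs, which is exactly the leak described above.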
Link: http://lkml.kernel.org/r/20190411110955.1430-1-david@redhat.com
Fixes: d0dc12e86b31 ("mm/memory_hotplug: optimize memory hotplug")
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Pankaj Gupta <pagupta@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Qian Cai <cai@lca.pw>
Cc: Arun KS <arunks@codeaurora.org>
Cc: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
mm/memory_hotplug.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 11593a03c051..7493f50ee880 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -858,6 +858,7 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages, int online_typ
*/
mem = find_memory_block(__pfn_to_section(pfn));
nid = mem->nid;
+ put_device(&mem->dev);
/* associate pfn range with the zone */
zone = move_pfn_range(online_type, nid, pfn, nr_pages);
--
2.20.1

[PATCH AUTOSEL 5.0 95/99] mm/page_alloc.c: avoid potential NULL pointer dereference
From: Sasha Levin @ 2019-05-07 5:32 UTC
To: linux-kernel, stable
Cc: Andrey Ryabinin, Mel Gorman, Andrew Morton, Linus Torvalds,
Sasha Levin, linux-mm
From: Andrey Ryabinin <aryabinin@virtuozzo.com>
[ Upstream commit 8139ad043d632c0e9e12d760068a7a8e91659aa1 ]
ac.preferred_zoneref->zone passed to alloc_flags_nofragment() can be
NULL, and the 'zone' pointer is dereferenced unconditionally in
alloc_flags_nofragment(). Bail out on a NULL zone to avoid a potential
crash. Currently we don't see any crashes only because
alloc_flags_nofragment() has another bug that lets the compiler
optimize away all accesses to 'zone'.
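
Reduced to its shape, the guard looks like this; struct zone is left
opaque, the "_sketch" suffix marks the reduced body as an illustration
rather than the real function's full logic:

struct zone;    /* opaque here; only the NULL check matters */

static unsigned int alloc_flags_nofragment_sketch(const struct zone *zone,
                                                  unsigned int alloc_flags)
{
        if (!zone)                      /* no eligible zone in the zonelist */
                return alloc_flags;     /* nothing zone-specific to add */
        /* ... safe to look at zone_idx(zone) etc. from here on ... */
        return alloc_flags;
}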
Link: http://lkml.kernel.org/r/20190423120806.3503-1-aryabinin@virtuozzo.com
Fixes: 6bb154504f8b ("mm, page_alloc: spread allocations across zones before introducing fragmentation")
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
mm/page_alloc.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index eedb57f9b40b..d59be95ba45c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3385,6 +3385,9 @@ alloc_flags_nofragment(struct zone *zone, gfp_t gfp_mask)
alloc_flags |= ALLOC_KSWAPD;
#ifdef CONFIG_ZONE_DMA32
+ if (!zone)
+ return alloc_flags;
+
if (zone_idx(zone) != ZONE_NORMAL)
goto out;
--
2.20.1