+ mm-zsmallocc-convert-to-use-kmem_cache_zalloc-in-cache_alloc_zspage.patch added to -mm tree
@ 2021-01-17 21:44 akpm
From: akpm @ 2021-01-17 21:44 UTC (permalink / raw)
To: linmiaohe, minchan, mm-commits, sergey.senozhatsky
The patch titled
Subject: mm/zsmalloc.c: convert to use kmem_cache_zalloc in cache_alloc_zspage()
has been added to the -mm tree. Its filename is
mm-zsmallocc-convert-to-use-kmem_cache_zalloc-in-cache_alloc_zspage.patch
This patch should soon appear at
https://ozlabs.org/~akpm/mmots/broken-out/mm-zsmallocc-convert-to-use-kmem_cache_zalloc-in-cache_alloc_zspage.patch
and later at
https://ozlabs.org/~akpm/mmotm/broken-out/mm-zsmallocc-convert-to-use-kmem_cache_zalloc-in-cache_alloc_zspage.patch
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included in linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Miaohe Lin <linmiaohe@huawei.com>
Subject: mm/zsmalloc.c: convert to use kmem_cache_zalloc in cache_alloc_zspage()
We always memset the zspage allocated via cache_alloc_zspage(), so it is
more convenient to use kmem_cache_zalloc() in cache_alloc_zspage() than
to have the caller do it manually.
Link: https://lkml.kernel.org/r/20210114120032.25885-1-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/zsmalloc.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
--- a/mm/zsmalloc.c~mm-zsmallocc-convert-to-use-kmem_cache_zalloc-in-cache_alloc_zspage
+++ a/mm/zsmalloc.c
@@ -357,7 +357,7 @@ static void cache_free_handle(struct zs_
static struct zspage *cache_alloc_zspage(struct zs_pool *pool, gfp_t flags)
{
- return kmem_cache_alloc(pool->zspage_cachep,
+ return kmem_cache_zalloc(pool->zspage_cachep,
flags & ~(__GFP_HIGHMEM|__GFP_MOVABLE));
}
@@ -1064,7 +1064,6 @@ static struct zspage *alloc_zspage(struc
if (!zspage)
return NULL;
- memset(zspage, 0, sizeof(struct zspage));
zspage->magic = ZSPAGE_MAGIC;
migrate_lock_init(zspage);
_
Patches currently in -mm which might be from linmiaohe@huawei.com are
mm-hugetlb-fix-potential-double-free-in-hugetlb_register_node-error-path.patch
mm-hugetlb-avoid-unnecessary-hugetlb_acct_memory-call.patch
mm-compaction-remove-duplicated-vm_bug_on_page-pagelocked.patch
mm-zsmallocc-convert-to-use-kmem_cache_zalloc-in-cache_alloc_zspage.patch