* [PATCH 1/6] mm/z3fold: define macro NCHUNKS as TOTAL_CHUNKS - ZHDR_CHUNKS
2021-06-19 9:31 [PATCH 0/6] Cleanup and fixup for z3fold Miaohe Lin
@ 2021-06-19 9:31 ` Miaohe Lin
2021-06-19 9:31 ` [PATCH 2/6] mm/z3fold: avoid possible underflow in z3fold_alloc() Miaohe Lin
` (5 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Miaohe Lin @ 2021-06-19 9:31 UTC (permalink / raw)
To: akpm; +Cc: vitalywool, linux-kernel, linux-mm, linmiaohe
To improve code readability, define the macro NCHUNKS as
TOTAL_CHUNKS - ZHDR_CHUNKS. No functional change intended.
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
mm/z3fold.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 56321bf37d56..04d0e493bd2e 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -62,7 +62,7 @@
#define ZHDR_SIZE_ALIGNED round_up(sizeof(struct z3fold_header), CHUNK_SIZE)
#define ZHDR_CHUNKS (ZHDR_SIZE_ALIGNED >> CHUNK_SHIFT)
#define TOTAL_CHUNKS (PAGE_SIZE >> CHUNK_SHIFT)
-#define NCHUNKS ((PAGE_SIZE - ZHDR_SIZE_ALIGNED) >> CHUNK_SHIFT)
+#define NCHUNKS (TOTAL_CHUNKS - ZHDR_CHUNKS)
#define BUDDY_MASK (0x3)
#define BUDDY_SHIFT 2
--
2.23.0
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH 2/6] mm/z3fold: avoid possible underflow in z3fold_alloc()
2021-06-19 9:31 [PATCH 0/6] Cleanup and fixup for z3fold Miaohe Lin
2021-06-19 9:31 ` [PATCH 1/6] mm/z3fold: define macro NCHUNKS as TOTAL_CHUNKS - ZHDR_CHUNKS Miaohe Lin
@ 2021-06-19 9:31 ` Miaohe Lin
2021-06-19 9:31 ` [PATCH 3/6] mm/z3fold: remove magic number in z3fold_create_pool() Miaohe Lin
` (4 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Miaohe Lin @ 2021-06-19 9:31 UTC (permalink / raw)
To: akpm; +Cc: vitalywool, linux-kernel, linux-mm, linmiaohe
It is not enough to just make sure the z3fold header is not larger than the
page size. When the z3fold header is equal to PAGE_SIZE, the allocation size
check against PAGE_SIZE - ZHDR_SIZE_ALIGNED - CHUNK_SIZE in z3fold_alloc()
would underflow. Make sure there is remaining space for its buddy to fix this
theoretical issue.
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
This causes the below checkpatch warning:
WARNING: Comparisons should place the constant on the right side of the
test
#31: FILE: mm/z3fold.c:1812:
+ BUILD_BUG_ON(ZHDR_SIZE_ALIGNED > PAGE_SIZE - CHUNK_SIZE);
But I think the warning is a false positive, as all the operands are constants.
---
mm/z3fold.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 04d0e493bd2e..e261e14b7753 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -1805,8 +1805,11 @@ static int __init init_z3fold(void)
{
int ret;
- /* Make sure the z3fold header is not larger than the page size */
- BUILD_BUG_ON(ZHDR_SIZE_ALIGNED > PAGE_SIZE);
+ /*
+ * Make sure the z3fold header is not larger than the page size and
+ * there is remaining space for its buddy.
+ */
+ BUILD_BUG_ON(ZHDR_SIZE_ALIGNED > PAGE_SIZE - CHUNK_SIZE);
ret = z3fold_mount();
if (ret)
return ret;
--
2.23.0
* [PATCH 3/6] mm/z3fold: remove magic number in z3fold_create_pool()
2021-06-19 9:31 [PATCH 0/6] Cleanup and fixup for z3fold Miaohe Lin
2021-06-19 9:31 ` [PATCH 1/6] mm/z3fold: define macro NCHUNKS as TOTAL_CHUNKS - ZHDR_CHUNKS Miaohe Lin
2021-06-19 9:31 ` [PATCH 2/6] mm/z3fold: avoid possible underflow in z3fold_alloc() Miaohe Lin
@ 2021-06-19 9:31 ` Miaohe Lin
2021-06-19 9:31 ` [PATCH 4/6] mm/z3fold: remove unused function handle_to_z3fold_header() Miaohe Lin
` (3 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Miaohe Lin @ 2021-06-19 9:31 UTC (permalink / raw)
To: akpm; +Cc: vitalywool, linux-kernel, linux-mm, linmiaohe
It's meaningless to pass the magic number 2 to __alloc_percpu(), as it
enforces a minimum alignment of PCPU_MIN_ALLOC_SIZE (> 2). Also there is no
special alignment requirement for unbuddied. So we can replace this magic
number with the natural alignment, i.e. __alignof__(struct list_head), to
improve readability.
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
mm/z3fold.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index e261e14b7753..988d57c143fd 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -1000,7 +1000,8 @@ static struct z3fold_pool *z3fold_create_pool(const char *name, gfp_t gfp,
goto out_c;
spin_lock_init(&pool->lock);
spin_lock_init(&pool->stale_lock);
- pool->unbuddied = __alloc_percpu(sizeof(struct list_head)*NCHUNKS, 2);
+ pool->unbuddied = __alloc_percpu(sizeof(struct list_head) * NCHUNKS,
+ __alignof__(struct list_head));
if (!pool->unbuddied)
goto out_pool;
for_each_possible_cpu(cpu) {
--
2.23.0
* [PATCH 4/6] mm/z3fold: remove unused function handle_to_z3fold_header()
2021-06-19 9:31 [PATCH 0/6] Cleanup and fixup for z3fold Miaohe Lin
` (2 preceding siblings ...)
2021-06-19 9:31 ` [PATCH 3/6] mm/z3fold: remove magic number in z3fold_create_pool() Miaohe Lin
@ 2021-06-19 9:31 ` Miaohe Lin
2021-06-19 9:31 ` [PATCH 5/6] mm/z3fold: fix potential memory leak in z3fold_destroy_pool() Miaohe Lin
` (2 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Miaohe Lin @ 2021-06-19 9:31 UTC (permalink / raw)
To: akpm; +Cc: vitalywool, linux-kernel, linux-mm, linmiaohe
handle_to_z3fold_header() is unused now, so we can remove it. As a result,
get_z3fold_header() becomes the only caller of __get_z3fold_header(), with
the argument lock always true. Therefore we can fold __get_z3fold_header()
into get_z3fold_header() with lock = true.
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
mm/z3fold.c | 22 ++++------------------
1 file changed, 4 insertions(+), 18 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 988d57c143fd..bab08c08bf19 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -255,9 +255,8 @@ static inline void z3fold_page_unlock(struct z3fold_header *zhdr)
spin_unlock(&zhdr->page_lock);
}
-
-static inline struct z3fold_header *__get_z3fold_header(unsigned long handle,
- bool lock)
+/* return locked z3fold page if it's not headless */
+static inline struct z3fold_header *get_z3fold_header(unsigned long handle)
{
struct z3fold_buddy_slots *slots;
struct z3fold_header *zhdr;
@@ -271,13 +270,12 @@ static inline struct z3fold_header *__get_z3fold_header(unsigned long handle,
read_lock(&slots->lock);
addr = *(unsigned long *)handle;
zhdr = (struct z3fold_header *)(addr & PAGE_MASK);
- if (lock)
- locked = z3fold_page_trylock(zhdr);
+ locked = z3fold_page_trylock(zhdr);
read_unlock(&slots->lock);
if (locked)
break;
cpu_relax();
- } while (lock);
+ } while (true);
} else {
zhdr = (struct z3fold_header *)(handle & PAGE_MASK);
}
@@ -285,18 +283,6 @@ static inline struct z3fold_header *__get_z3fold_header(unsigned long handle,
return zhdr;
}
-/* Returns the z3fold page where a given handle is stored */
-static inline struct z3fold_header *handle_to_z3fold_header(unsigned long h)
-{
- return __get_z3fold_header(h, false);
-}
-
-/* return locked z3fold page if it's not headless */
-static inline struct z3fold_header *get_z3fold_header(unsigned long h)
-{
- return __get_z3fold_header(h, true);
-}
-
static inline void put_z3fold_header(struct z3fold_header *zhdr)
{
struct page *page = virt_to_page(zhdr);
--
2.23.0
* [PATCH 5/6] mm/z3fold: fix potential memory leak in z3fold_destroy_pool()
2021-06-19 9:31 [PATCH 0/6] Cleanup and fixup for z3fold Miaohe Lin
` (3 preceding siblings ...)
2021-06-19 9:31 ` [PATCH 4/6] mm/z3fold: remove unused function handle_to_z3fold_header() Miaohe Lin
@ 2021-06-19 9:31 ` Miaohe Lin
2021-06-19 9:31 ` [PATCH 6/6] mm/z3fold: use release_z3fold_page_locked() to release locked z3fold page Miaohe Lin
2021-06-21 8:00 ` [PATCH 0/6] Cleanup and fixup for z3fold Vitaly Wool
6 siblings, 0 replies; 10+ messages in thread
From: Miaohe Lin @ 2021-06-19 9:31 UTC (permalink / raw)
To: akpm; +Cc: vitalywool, linux-kernel, linux-mm, linmiaohe
There is a memory leak in z3fold_destroy_pool(), as it forgets to free
pool->unbuddied. Call free_percpu() on pool->unbuddied to fix this issue.
Fixes: d30561c56f41 ("z3fold: use per-cpu unbuddied lists")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
mm/z3fold.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index bab08c08bf19..196d886a3436 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -1048,6 +1048,7 @@ static void z3fold_destroy_pool(struct z3fold_pool *pool)
destroy_workqueue(pool->compact_wq);
destroy_workqueue(pool->release_wq);
z3fold_unregister_migration(pool);
+ free_percpu(pool->unbuddied);
kfree(pool);
}
--
2.23.0
* [PATCH 6/6] mm/z3fold: use release_z3fold_page_locked() to release locked z3fold page
2021-06-19 9:31 [PATCH 0/6] Cleanup and fixup for z3fold Miaohe Lin
` (4 preceding siblings ...)
2021-06-19 9:31 ` [PATCH 5/6] mm/z3fold: fix potential memory leak in z3fold_destroy_pool() Miaohe Lin
@ 2021-06-19 9:31 ` Miaohe Lin
2021-06-20 0:26 ` Hillf Danton
2021-06-21 8:00 ` [PATCH 0/6] Cleanup and fixup for z3fold Vitaly Wool
6 siblings, 1 reply; 10+ messages in thread
From: Miaohe Lin @ 2021-06-19 9:31 UTC (permalink / raw)
To: akpm; +Cc: vitalywool, linux-kernel, linux-mm, linmiaohe
We should use release_z3fold_page_locked() to release the z3fold page when
it's locked, although it looks harmless to use release_z3fold_page() now.
Fixes: dcf5aedb24f8 ("z3fold: stricter locking and more careful reclaim")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
mm/z3fold.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 196d886a3436..b3c0577b8095 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -1372,7 +1372,7 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
if (zhdr->foreign_handles ||
test_and_set_bit(PAGE_CLAIMED, &page->private)) {
if (kref_put(&zhdr->refcount,
- release_z3fold_page))
+ release_z3fold_page_locked))
atomic64_dec(&pool->pages_nr);
else
z3fold_page_unlock(zhdr);
--
2.23.0
* Re: [PATCH 6/6] mm/z3fold: use release_z3fold_page_locked() to release locked z3fold page
2021-06-19 9:31 ` [PATCH 6/6] mm/z3fold: use release_z3fold_page_locked() to release locked z3fold page Miaohe Lin
@ 2021-06-20 0:26 ` Hillf Danton
2021-06-22 13:49 ` Miaohe Lin
0 siblings, 1 reply; 10+ messages in thread
From: Hillf Danton @ 2021-06-20 0:26 UTC (permalink / raw)
To: Miaohe Lin
Cc: akpm, vitalywool, linux-kernel, linux-mm, Mikhail Gavrilov, Hillf Danton
On Sat, 19 Jun 2021 17:31:51 +0800 Miaohe Lin wrote:
> We should use release_z3fold_page_locked() to release z3fold page when it's
> locked, although it looks harmless to use release_z3fold_page() now.
>
> Fixes: dcf5aedb24f8 ("z3fold: stricter locking and more careful reclaim")
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
> mm/z3fold.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index 196d886a3436..b3c0577b8095 100644
> --- a/mm/z3fold.c
> +++ b/mm/z3fold.c
> @@ -1372,7 +1372,7 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
> if (zhdr->foreign_handles ||
> test_and_set_bit(PAGE_CLAIMED, &page->private)) {
> if (kref_put(&zhdr->refcount,
> - release_z3fold_page))
> + release_z3fold_page_locked))
> atomic64_dec(&pool->pages_nr);
LGTM. JFYI, another issue in z3fold was reported [1]; if the fix proposed there
makes any sense to you, feel free to pick it up and ask Mike for his tests.
[1] https://lore.kernel.org/linux-mm/20210316061351.1649-1-hdanton@sina.com/
* Re: [PATCH 6/6] mm/z3fold: use release_z3fold_page_locked() to release locked z3fold page
2021-06-20 0:26 ` Hillf Danton
@ 2021-06-22 13:49 ` Miaohe Lin
0 siblings, 0 replies; 10+ messages in thread
From: Miaohe Lin @ 2021-06-22 13:49 UTC (permalink / raw)
To: Hillf Danton; +Cc: akpm, vitalywool, linux-kernel, linux-mm, Mikhail Gavrilov
On 2021/6/20 8:26, Hillf Danton wrote:
> On Sat, 19 Jun 2021 17:31:51 +0800 Miaohe Lin wrote:
>> We should use release_z3fold_page_locked() to release z3fold page when it's
>> locked, although it looks harmless to use release_z3fold_page() now.
>>
>> Fixes: dcf5aedb24f8 ("z3fold: stricter locking and more careful reclaim")
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> ---
>> mm/z3fold.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/mm/z3fold.c b/mm/z3fold.c
>> index 196d886a3436..b3c0577b8095 100644
>> --- a/mm/z3fold.c
>> +++ b/mm/z3fold.c
>> @@ -1372,7 +1372,7 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
>> if (zhdr->foreign_handles ||
>> test_and_set_bit(PAGE_CLAIMED, &page->private)) {
>> if (kref_put(&zhdr->refcount,
>> - release_z3fold_page))
>> + release_z3fold_page_locked))
>> atomic64_dec(&pool->pages_nr);
>
> LGTM. JFYI other issue in z3fold was reported [1] and if the fix proposed there
> makes any sense to you feel free to pick it up and ask Mike for his tests.
>
Thank you for the review and reply.
I browsed [1] several times but failed to figure out the root cause. I found
some bugs and possible race windows from previous code inspection. I think we
can try fixing these first and see whether [1] is (hopefully) fixed. :)
Thanks again.
> [1] https://lore.kernel.org/linux-mm/20210316061351.1649-1-hdanton@sina.com/
* Re: [PATCH 0/6] Cleanup and fixup for z3fold
2021-06-19 9:31 [PATCH 0/6] Cleanup and fixup for z3fold Miaohe Lin
` (5 preceding siblings ...)
2021-06-19 9:31 ` [PATCH 6/6] mm/z3fold: use release_z3fold_page_locked() to release locked z3fold page Miaohe Lin
@ 2021-06-21 8:00 ` Vitaly Wool
6 siblings, 0 replies; 10+ messages in thread
From: Vitaly Wool @ 2021-06-21 8:00 UTC (permalink / raw)
To: Miaohe Lin; +Cc: Andrew Morton, LKML, Linux-MM
On Sat, Jun 19, 2021 at 11:32 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
> Hi all,
> This series contains cleanups to remove an unused function, redefine a macro
> to improve readability, and so on. It also fixes several bugs in z3fold,
> such as a memory leak in z3fold_destroy_pool(). More details can be found
> in the respective changelogs. Thanks!
>
> Miaohe Lin (6):
> mm/z3fold: define macro NCHUNKS as TOTAL_CHUNKS - ZHDR_CHUNKS
> mm/z3fold: avoid possible underflow in z3fold_alloc()
> mm/z3fold: remove magic number in z3fold_create_pool()
> mm/z3fold: remove unused function handle_to_z3fold_header()
> mm/z3fold: fix potential memory leak in z3fold_destroy_pool()
> mm/z3fold: use release_z3fold_page_locked() to release locked z3fold
> page
>
> mm/z3fold.c | 37 ++++++++++++++-----------------------
> 1 file changed, 14 insertions(+), 23 deletions(-)
>
> Thanks for looking into this!
Reviewed-by: Vitaly Wool <vitaly.wool@konsulko.com>