* [PATCH stable 4.14,4.19 1/1] mm: Fix page counter mismatch in shmem_mfill_atomic_pte
@ 2022-08-02 1:32 Wupeng Ma
From: Wupeng Ma @ 2022-08-02 1:32 UTC (permalink / raw)
To: rppt, hughd, aarcange, hannes
Cc: linux-mm, linux-kernel, wangkefeng.wang, willy, mawupeng1
From: Ma Wupeng <mawupeng1@huawei.com>
shmem_mfill_atomic_pte() wrongly called mem_cgroup_cancel_charge() in the
"success" path; it should call mem_cgroup_uncharge() to decrement the memory
counter instead. mem_cgroup_cancel_charge() should only be used when the
transaction is unsuccessful, while mem_cgroup_uncharge() is the right call
when the transaction succeeds.
This leaves page->memcg non-NULL, so put_page() performs one more uncharge.
The page counter will then underflow to its maximum value and trigger the OOM
killer, killing every process (including sshd) and leaving the system
inaccessible.
page->memcg is set in the following path:
mem_cgroup_commit_charge
commit_charge
page->mem_cgroup = memcg;
The extra uncharge is then done in the following path:
put_page
__put_page
__put_single_page
mem_cgroup_uncharge
if (!page->mem_cgroup) <-- should return here
return
uncharge_page
uncharge_batch
To fix this, call mem_cgroup_commit_charge() at the end of the transaction to
make sure the transaction has really finished.
Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support")
Signed-off-by: Ma Wupeng <mawupeng1@huawei.com>
---
mm/shmem.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 0788616696dc..0b06724c189e 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2339,8 +2339,6 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
if (ret)
goto out_release_uncharge;
- mem_cgroup_commit_charge(page, memcg, false, false);
-
_dst_pte = mk_pte(page, dst_vma->vm_page_prot);
if (dst_vma->vm_flags & VM_WRITE)
_dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte));
@@ -2366,6 +2364,8 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
if (!pte_none(*dst_pte))
goto out_release_uncharge_unlock;
+ mem_cgroup_commit_charge(page, memcg, false, false);
+
lru_cache_add_anon(page);
spin_lock_irq(&info->lock);
--
2.25.1
* Re: [PATCH stable 4.14,4.19 1/1] mm: Fix page counter mismatch in shmem_mfill_atomic_pte
From: mawupeng @ 2022-08-16 3:27 UTC (permalink / raw)
To: rppt, hughd, aarcange, hannes
Cc: mawupeng1, linux-mm, linux-kernel, wangkefeng.wang, willy, gregkh
Cc Greg
On 2022/8/2 9:32, Wupeng Ma wrote:
> From: Ma Wupeng <mawupeng1@huawei.com>
>
> shmem_mfill_atomic_pte() wrongly called mem_cgroup_cancel_charge() in the
> "success" path; it should call mem_cgroup_uncharge() to decrement the memory
> counter instead. mem_cgroup_cancel_charge() should only be used when the
> transaction is unsuccessful, while mem_cgroup_uncharge() is the right call
> when the transaction succeeds.
>
> This leaves page->memcg non-NULL, so put_page() performs one more uncharge.
> The page counter will then underflow to its maximum value and trigger the OOM
> killer, killing every process (including sshd) and leaving the system
> inaccessible.
>
> page->memcg is set in the following path:
> mem_cgroup_commit_charge
> commit_charge
> page->mem_cgroup = memcg;
>
> The extra uncharge is then done in the following path:
> put_page
> __put_page
> __put_single_page
> mem_cgroup_uncharge
> if (!page->mem_cgroup) <-- should return here
> return
> uncharge_page
> uncharge_batch
>
> To fix this, call mem_cgroup_commit_charge() at the end of the transaction to
> make sure the transaction has really finished.
>
> Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support")
> Signed-off-by: Ma Wupeng <mawupeng1@huawei.com>
> ---
> mm/shmem.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 0788616696dc..0b06724c189e 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2339,8 +2339,6 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
> if (ret)
> goto out_release_uncharge;
>
> - mem_cgroup_commit_charge(page, memcg, false, false);
> -
> _dst_pte = mk_pte(page, dst_vma->vm_page_prot);
> if (dst_vma->vm_flags & VM_WRITE)
> _dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte));
> @@ -2366,6 +2364,8 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
> if (!pte_none(*dst_pte))
> goto out_release_uncharge_unlock;
>
> + mem_cgroup_commit_charge(page, memcg, false, false);
> +
> lru_cache_add_anon(page);
>
> spin_lock_irq(&info->lock);
* Re: [PATCH stable 4.14,4.19 1/1] mm: Fix page counter mismatch in shmem_mfill_atomic_pte
From: Greg KH @ 2022-08-16 5:31 UTC (permalink / raw)
To: mawupeng
Cc: rppt, hughd, aarcange, hannes, linux-mm, linux-kernel,
wangkefeng.wang, willy
On Tue, Aug 16, 2022 at 11:27:08AM +0800, mawupeng wrote:
> Cc Greg
Cc Greg for what? I have no context here at all as to what you want me
to do...
totally confused,
greg k-h
* Re: [PATCH stable 4.14,4.19 1/1] mm: Fix page counter mismatch in shmem_mfill_atomic_pte
From: mawupeng @ 2022-08-16 7:04 UTC (permalink / raw)
To: gregkh
Cc: mawupeng1, rppt, hughd, aarcange, hannes, linux-mm, linux-kernel,
wangkefeng.wang, willy
On 2022/8/16 13:31, Greg KH wrote:
> On Tue, Aug 16, 2022 at 11:27:08AM +0800, mawupeng wrote:
>> Cc Greg
>
> Cc Greg for what? I have no context here at all as to what you want me
> to do..
We found a bug related to the memory cgroup counter in stable 4.14/4.19.

shmem_mfill_atomic_pte() wrongly calls mem_cgroup_cancel_charge() in the
"success" path; it should call mem_cgroup_uncharge() to decrement the memory
counter instead. mem_cgroup_cancel_charge() should only be used when the
transaction is unsuccessful, while mem_cgroup_uncharge() is the right call
when the transaction succeeds.

Commit 3fea5a499d57 ("mm: memcontrol: convert page cache to a new mem_cgroup_charge() API")
in v5.8-rc1 changed the charge/uncharge/cancel logic, so mainline does not
have this problem.

The counter underflows to the most negative value and triggers the OOM
killer, which kills every process (including sshd) and leaves the system
inaccessible.

The reason for cc'ing you is that we want this bugfix merged into stable
4.14/4.19.
The error call trace:
------------[ cut here ]------------
WARNING: CPU: 0 PID: 17127 at mm/page_counter.c:62 page_counter_cancel+0x57/0x90
RIP: 0010:page_counter_cancel+0x57/0x90
Call Trace:
page_counter_uncharge+0x33/0x60
uncharge_batch+0xb5/0x5f0
mem_cgroup_uncharge_list+0x102/0x170
release_pages+0x814/0xcc0
tlb_flush_mmu_free+0xa9/0x140
arch_tlb_finish_mmu+0xa4/0x140
tlb_finish_mmu+0x90/0xf0
exit_mmap+0x264/0x4b0
>
> totally confused,
>
> greg k-h
* Re: [PATCH stable 4.14,4.19 1/1] mm: Fix page counter mismatch in shmem_mfill_atomic_pte
From: Greg KH @ 2022-08-16 7:15 UTC (permalink / raw)
To: mawupeng
Cc: rppt, hughd, aarcange, hannes, linux-mm, linux-kernel,
wangkefeng.wang, willy
On Tue, Aug 16, 2022 at 03:04:08PM +0800, mawupeng wrote:
>
>
> On 2022/8/16 13:31, Greg KH wrote:
> > On Tue, Aug 16, 2022 at 11:27:08AM +0800, mawupeng wrote:
> >> Cc Greg
> >
> > Cc Greg for what? I have no context here at all as to what you want me
> > to do..
>
> We found a bug related to the memory cgroup counter in stable 4.14/4.19.
>
> shmem_mfill_atomic_pte() wrongly calls mem_cgroup_cancel_charge() in the
> "success" path; it should call mem_cgroup_uncharge() to decrement the memory
> counter instead. mem_cgroup_cancel_charge() should only be used when the
> transaction is unsuccessful, while mem_cgroup_uncharge() is the right call
> when the transaction succeeds.
>
> Commit 3fea5a499d57 ("mm: memcontrol: convert page cache to a new mem_cgroup_charge() API")
> in v5.8-rc1 changed the charge/uncharge/cancel logic, so mainline does not
> have this problem.
>
> The counter underflows to the most negative value and triggers the OOM
> killer, which kills every process (including sshd) and leaves the system
> inaccessible.
>
> The reason for cc'ing you is that we want this bugfix merged into stable
> 4.14/4.19.
<formletter>
This is not the correct way to submit patches for inclusion in the
stable kernel tree. Please read:
https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.
</formletter>