* Re: [bug report] mm/zswap: memory corruption after zswap_load().
@ 2024-03-28 15:45 Trisha Busch
0 siblings, 0 replies; 3+ messages in thread
From: Trisha Busch @ 2024-03-28 15:45 UTC (permalink / raw)
To: hezhongkun.hzk
Cc: akpm, hannes, linux-mm, nphamcs, wuyun.abel, yosryahmed, zhouchengming
^ permalink raw reply [flat|nested] 3+ messages in thread
* Re: [bug report] mm/zswap: memory corruption after zswap_load().
2024-03-21 4:34 Zhongkun He
@ 2024-03-21 4:42 ` Chengming Zhou
0 siblings, 0 replies; 3+ messages in thread
From: Chengming Zhou @ 2024-03-21 4:42 UTC (permalink / raw)
To: Zhongkun He, Johannes Weiner, Yosry Ahmed, Andrew Morton
Cc: linux-mm, wuyun.abel, zhouchengming, Nhat Pham
On 2024/3/21 12:34, Zhongkun He wrote:
> Hey folks,
>
> Recently, I tested zswap with memory reclaim on mainline (6.8)
> and found a memory corruption issue related to exclusive loads.
Is this fix included: 13ddaf26be32 ("mm/swap: fix race when skipping swapcache")?
It avoids concurrent swapin using the same swap entry.
Thanks.
>
>
> root@**:/sys/fs/cgroup/zz# stress --vm 5 --vm-bytes 1g --vm-hang 3 --vm-keep
> stress: info: [31753] dispatching hogs: 0 cpu, 0 io, 5 vm, 0 hdd
> stress: FAIL: [31758] (522) memory corruption at: 0x7f347ed1a010
> stress: FAIL: [31753] (394) <-- worker 31758 returned error 1
> stress: WARN: [31753] (396) now reaping child worker processes
> stress: FAIL: [31753] (451) failed run completed in 14s
>
>
> 1. Test steps (the frequency of memory reclaiming has been accelerated):
> -------------------------
> a. set up zswap, zram, and cgroup v2
> b. echo 0 > /sys/kernel/mm/lru_gen/enabled
> (increases the probability of hitting the problem)
> c. mkdir /sys/fs/cgroup/zz
> echo $$ > /sys/fs/cgroup/zz/cgroup.procs
> cd /sys/fs/cgroup/zz/
> stress --vm 5 --vm-bytes 1g --vm-hang 3 --vm-keep
>
> d. in another shell:
> while :;do for i in {1..5};do echo 20g >
> /sys/fs/cgroup/zz/memory.reclaim & done;sleep 1;done
>
> 2. Root cause:
> --------------------------
> With a small probability, the page fault can occur twice for the same
> original pte, even after a new pte has been set successfully.
> Unfortunately, with exclusive loads the zswap_entry was already freed
> during the first page fault, so the second zswap_load() fails; since the
> data is not in swap space either, memory corruption occurs.
>
> bpftrace --include linux/mm_types.h \
>     -e 'k:zswap_load { printf("%lld, %lld\n", ((struct page *)arg0)->private, nsecs) }' > a.txt
>
> looking for the same index appearing twice:
>
> index nsecs
> 1318876, 8976040736819
> 1318876, 8976040746078
>
> 4123110, 8976234682970
> 4123110, 8976234689736
>
> 2268896, 8976660124792
> 2268896, 8976660130607
>
> 4634105, 8976662117938
> 4634105, 8976662127596
>
> 3. Solution
>
> Should we free zswap_entry in batches, so that the zswap_entry is still
> valid when the second page fault occurs with the original pte? It would
> be great if there are better solutions.
>
* [bug report] mm/zswap: memory corruption after zswap_load().
@ 2024-03-21 4:34 Zhongkun He
2024-03-21 4:42 ` Chengming Zhou
0 siblings, 1 reply; 3+ messages in thread
From: Zhongkun He @ 2024-03-21 4:34 UTC (permalink / raw)
To: Johannes Weiner, Yosry Ahmed, Andrew Morton
Cc: linux-mm, wuyun.abel, zhouchengming, Nhat Pham
Hey folks,
Recently, I tested zswap with memory reclaim on mainline (6.8)
and found a memory corruption issue related to exclusive loads.
root@**:/sys/fs/cgroup/zz# stress --vm 5 --vm-bytes 1g --vm-hang 3 --vm-keep
stress: info: [31753] dispatching hogs: 0 cpu, 0 io, 5 vm, 0 hdd
stress: FAIL: [31758] (522) memory corruption at: 0x7f347ed1a010
stress: FAIL: [31753] (394) <-- worker 31758 returned error 1
stress: WARN: [31753] (396) now reaping child worker processes
stress: FAIL: [31753] (451) failed run completed in 14s
1. Test steps (the frequency of memory reclaiming has been accelerated):
-------------------------
a. set up zswap, zram, and cgroup v2
b. echo 0 > /sys/kernel/mm/lru_gen/enabled
(increases the probability of hitting the problem)
c. mkdir /sys/fs/cgroup/zz
echo $$ > /sys/fs/cgroup/zz/cgroup.procs
cd /sys/fs/cgroup/zz/
stress --vm 5 --vm-bytes 1g --vm-hang 3 --vm-keep
d. in another shell:
while :;do for i in {1..5};do echo 20g >
/sys/fs/cgroup/zz/memory.reclaim & done;sleep 1;done
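The steps above can be collected into a single script. This is only a sketch: it assumes root, cgroup v2 mounted at /sys/fs/cgroup, zswap already configured with a zram backing device, and stress(1) installed; by default (DRY_RUN=1) it just prints the commands instead of running them.

```shell
# Sketch of the reproducer above as one script. Assumptions: root, cgroup v2
# at /sys/fs/cgroup, zswap enabled with zram backing, stress(1) installed.
# DRY_RUN=1 (the default) only prints the commands; set DRY_RUN=0 on a
# disposable test machine to actually run them.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else eval "$*"; fi; }

run 'echo 0 > /sys/kernel/mm/lru_gen/enabled'             # step b: disable MGLRU
run 'mkdir -p /sys/fs/cgroup/zz'                          # step c: create the cgroup
run 'echo $$ > /sys/fs/cgroup/zz/cgroup.procs'
run 'stress --vm 5 --vm-bytes 1g --vm-hang 3 --vm-keep &'
# last step: hammer proactive reclaim, normally from another shell
run 'while :; do for i in 1 2 3 4 5; do echo 20g > /sys/fs/cgroup/zz/memory.reclaim & done; sleep 1; done'
```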
2. Root cause:
--------------------------
With a small probability, the page fault can occur twice for the same
original pte, even after a new pte has been set successfully.
Unfortunately, with exclusive loads the zswap_entry was already freed
during the first page fault, so the second zswap_load() fails; since the
data is not in swap space either, memory corruption occurs.
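The exclusive-load failure mode can be modeled with a toy sketch, where a temp directory stands in for the zswap pool with one file per entry. None of this is kernel code; all names are made up for illustration.

```shell
# Toy model of the race: an exclusive load frees the entry, so a second
# fault on the same (stale) swap entry finds nothing.
pool=$(mktemp -d)
echo 'DATA' > "$pool/entry-1318876"              # zswap_store(): entry present

zswap_load() {                                   # exclusive load: read, then free
    cat "$pool/entry-$1" 2>/dev/null && rm -f "$pool/entry-$1"
}

zswap_load 1318876                               # first fault: data returned, entry freed
zswap_load 1318876 || echo 'second fault: entry gone -> corruption'
rm -rf "$pool"
```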
bpftrace --include linux/mm_types.h \
    -e 'k:zswap_load { printf("%lld, %lld\n", ((struct page *)arg0)->private, nsecs) }' > a.txt
looking for the same index appearing twice:
index nsecs
1318876, 8976040736819
1318876, 8976040746078
4123110, 8976234682970
4123110, 8976234689736
2268896, 8976660124792
2268896, 8976660130607
4634105, 8976662117938
4634105, 8976662127596
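The duplicated indexes can also be extracted mechanically with a small awk sketch over the "index, nsecs" format of a.txt (sample input is inlined here for illustration):

```shell
# Find swap indexes that appear more than once in the bpftrace output.
# The printf creates a sample a.txt mimicking the lines above.
printf '%s\n' '1318876, 8976040736819' \
              '1318876, 8976040746078' \
              '4123110, 8976234682970' > a.txt
awk -F', ' '{ count[$1]++ } END { for (i in count) if (count[i] > 1) print i }' a.txt
```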
3. Solution
Should we free zswap_entry in batches, so that the zswap_entry is still
valid when the second page fault occurs with the original pte? It would
be great if there are better solutions.
end of thread, other threads:[~2024-03-28 15:45 UTC | newest]
Thread overview: 3+ messages
2024-03-28 15:45 [bug report] mm/zswap: memory corruption after zswap_load() Trisha Busch
2024-03-21 4:34 Zhongkun He
2024-03-21 4:42 ` Chengming Zhou