linux-kernel.vger.kernel.org archive mirror
From: Martin Doucha <mdoucha@suse.cz>
To: Minchan Kim <minchan@kernel.org>, Petr Vorel <pvorel@suse.cz>
Cc: ltp@lists.linux.it, linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
	Nitin Gupta <ngupta@vflare.org>,
	Sergey Senozhatsky <senozhatsky@chromium.org>,
	Jens Axboe <axboe@kernel.dk>,
	OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>,
	Yang Xu <xuyang2018.jy@fujitsu.com>
Subject: Re: [PATCH 0/1] Possible bug in zram on ppc64le on vfat
Date: Thu, 10 Nov 2022 15:29:58 +0100	[thread overview]
Message-ID: <3ac740c0-954b-5e68-b413-0adc7bc5a2b5@suse.cz> (raw)
In-Reply-To: <Y2l3vJb1y2Jynf50@google.com>

On 07. 11. 22 22:25, Minchan Kim wrote:
> On Mon, Nov 07, 2022 at 08:11:35PM +0100, Petr Vorel wrote:
>> Hi all,
>>
>> the following patch tries to work around an error on ppc64le, where the
>> zram01.sh LTP test (there is also a kernel selftest,
>> tools/testing/selftests/zram/zram01.sh, but the LTP test has received
>> further updates) often has mem_used_total 0 although zram is already filled.
> 
> Hi, Petr,
> 
> Is it happening on only ppc64le?
> 
> Is it a new regression? What kernel version did you use?

Hi,
I've reported the same issue on kernels 4.12.14 and 5.3.18 internally to 
our kernel developers at SUSE. The bug report is not public, but I'll copy 
the bug description here:

A new version of LTP test zram01 found a sysfs reporting issue with zram 
devices mounted using the VFAT filesystem. When all available space is 
filled, e.g. by `dd if=/dev/zero of=/mnt/zram0/file`, the corresponding 
sysfs file /sys/block/zram0/mm_stat reports that both the compressed data 
size on the device and the total memory usage are 0. LTP test zram01 uses 
these values to calculate the compression ratio, which results in division 
by zero.
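
For illustration, the failing calculation boils down to something like the
following (a simplified sketch, not the exact zram01.sh code; the device
path and variable names are placeholders):

    # mm_stat columns 1-3: orig_data_size, compr_data_size, mem_used_total
    read -r orig compr mem_used rest < /sys/block/zram0/mm_stat

    # compression ratio in percent; in the buggy state mem_used_total is
    # reported as 0 and the shell aborts with "division by 0" here
    ratio=$((orig * 100 / mem_used))
    echo "compression ratio: ${ratio}%"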

The issue is specific to the PPC64LE architecture and the VFAT filesystem. 
No other tested filesystem shows this issue and I could not reproduce it 
on other architectures (s390 not tested). The issue appears randomly in 
roughly one out of every 3 test runs on SLE-15SP2 and 15SP3 (kernel 5.3), 
and less frequently on SLE-12SP5 (kernel 4.12). Other SLE versions have 
not been tested with the new test version yet. The previous version of the 
test did not cover the VFAT filesystem on zram devices.

I've tried to debug the issue and collected some interesting data (all 
values come from a zram device with a 25M size limit and the zstd 
compression algorithm):
- mm_stat values are correct after mkfs.vfat:
65536      220    65536 26214400    65536        0        0        0

- mm_stat values stay correct after mount:
65536      220    65536 26214400    65536        0        0        0

- the bug is triggered by filling the filesystem to capacity (using dd):
4194304        0        0 26214400   327680       64        0        0

- adding `sleep 1` between `dd` and reading mm_stat does not help
- adding `sync` between `dd` and reading mm_stat appears to fix the issue:
26214400     2404   262144 26214400   327680      399        0        0
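
For anyone who wants to reproduce this by hand, the whole sequence boils
down to roughly the following (a sketch of my manual steps, not the LTP
test itself; it assumes /dev/zram0 is free and the zstd compressor is
available, and the column annotation follows
Documentation/admin-guide/blockdev/zram.rst):

    # the compression algorithm must be set before the disk size
    echo zstd > /sys/block/zram0/comp_algorithm
    echo 25M  > /sys/block/zram0/disksize
    echo 25M  > /sys/block/zram0/mem_limit

    mkfs.vfat /dev/zram0
    mkdir -p /mnt/zram0
    mount /dev/zram0 /mnt/zram0

    # fill the filesystem to capacity; dd stops with ENOSPC
    dd if=/dev/zero of=/mnt/zram0/file

    # columns: orig_data_size compr_data_size mem_used_total mem_limit
    #          mem_used_max same_pages pages_compacted huge_pages
    cat /sys/block/zram0/mm_stat   # occasionally shows columns 2 and 3 as 0

    sync
    cat /sys/block/zram0/mm_stat   # values look sane again after sync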

-- 
Martin Doucha   mdoucha@suse.cz
QA Engineer for Software Maintenance
SUSE LINUX, s.r.o.
CORSO IIa
Krizikova 148/34
186 00 Prague 8
Czech Republic



Thread overview: 21+ messages
2022-11-07 19:11 [PATCH 0/1] Possible bug in zram on ppc64le on vfat Petr Vorel
2022-11-07 19:11 ` [PATCH 1/1] zram01.sh: Workaround division by 0 on vfat on ppc64le Petr Vorel
     [not found]   ` <CAEemH2fYv_=9UWdWB7VDiFOd8EC89qdCbxnPcTPAtGnkwLOYFg@mail.gmail.com>
2022-11-21  8:59     ` [LTP] " Petr Vorel
2022-11-07 21:25 ` [PATCH 0/1] Possible bug in zram on ppc64le on vfat Minchan Kim
2022-11-07 21:47   ` Petr Vorel
2022-11-07 22:42     ` Petr Vorel
2022-11-08  1:05       ` Sergey Senozhatsky
2022-11-09 22:08         ` Petr Vorel
2022-11-10 23:04     ` Minchan Kim
2022-11-11  9:29       ` Petr Vorel
2022-11-10 14:29   ` Martin Doucha [this message]
2022-11-11  0:48     ` Sergey Senozhatsky
2022-11-21  9:41       ` Petr Vorel
2022-11-22 14:56       ` Martin Doucha
2022-11-22 15:07         ` Petr Vorel
2022-11-29  4:38           ` Sergey Senozhatsky
2023-05-02 15:23             ` Martin Doucha
2022-11-11  9:18     ` Petr Vorel
2023-08-04  6:37   ` Ian Wienand
2023-08-07  4:44     ` Ian Wienand
2023-08-07  5:19       ` Sergey Senozhatsky
