From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from prv-mh.provo.novell.com ([137.65.248.74]:58878 "EHLO prv-mh.provo.novell.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752273AbdEDElY (ORCPT ); Thu, 4 May 2017 00:41:24 -0400
Message-Id: <590B1185020000F900073C66@prv-mh.provo.novell.com>
Date: Wed, 03 May 2017 21:33:25 -0600
From: "Gang He"
To: ,
Subject: GFS2 file system does not invalidate page cache after direct IO write
References: <590B1185020000F900073C66@prv-mh.provo.novell.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 8BIT
Content-Disposition: inline
Sender: linux-fsdevel-owner@vger.kernel.org
List-ID:

Hello Guys,

I found an interesting thing on the GFS2 file system. After I did a direct IO write covering a whole file, I still saw some cached pages for that inode. It looks like this GFS2 behavior does not follow POSIX file system semantics. I just want to know whether this is a known issue, or whether we can fix it. By the way, I ran the same test on the EXT4 and OCFS2 file systems, and their results look OK.
I will paste my test command lines and their output below.

For the EXT4 file system:

tb-nd1:/mnt/ext4 # rm -rf f3
tb-nd1:/mnt/ext4 # dd if=/dev/urandom of=./f3 bs=1M count=4 oflag=direct
4+0 records in
4+0 records out
4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0393563 s, 107 MB/s
tb-nd1:/mnt/ext4 # vmtouch -v f3
f3
[                                ] 0/1024

         Files: 1
   Directories: 0
Resident Pages: 0/1024  0/4M  0%
       Elapsed: 0.000424 seconds
tb-nd1:/mnt/ext4 #

For the OCFS2 file system:

tb-nd1:/mnt/ocfs2 # rm -rf f3
tb-nd1:/mnt/ocfs2 # dd if=/dev/urandom of=./f3 bs=1M count=4 oflag=direct
4+0 records in
4+0 records out
4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0592058 s, 70.8 MB/s
tb-nd1:/mnt/ocfs2 # vmtouch -v f3
f3
[                                ] 0/1024

         Files: 1
   Directories: 0
Resident Pages: 0/1024  0/4M  0%
       Elapsed: 0.000226 seconds

For the GFS2 file system:

tb-nd1:/mnt/gfs2 # rm -rf f3
tb-nd1:/mnt/gfs2 # dd if=/dev/urandom of=./f3 bs=1M count=4 oflag=direct
4+0 records in
4+0 records out
4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0579509 s, 72.4 MB/s
tb-nd1:/mnt/gfs2 # vmtouch -v f3
f3
[ oo oOo ] 48/1024

         Files: 1
   Directories: 0
Resident Pages: 48/1024  192K/4M  4.69%
       Elapsed: 0.000287 seconds

You can download the vmtouch tool's source code from https://github.com/hoytech/vmtouch

I also printed (via printk) the inode's address_space in kernel space after a full-file direct IO write; its nrpages value is always greater than zero.
Thanks
Gang

From mboxrd@z Thu Jan 1 00:00:00 1970
From: Gang He
Date: Wed, 03 May 2017 21:33:25 -0600
Subject: [Cluster-devel] GFS2 file system does not invalidate page cache after direct IO write
References: <590B1185020000F900073C66@prv-mh.provo.novell.com>
Message-ID: <590B1185020000F900073C66@prv-mh.provo.novell.com>
List-Id:
To: cluster-devel.redhat.com
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

[Message body is identical to the linux-fsdevel copy above.]