From: Ritesh Harjani <riteshh@linux.ibm.com>
To: Damien Le Moal <Damien.LeMoal@wdc.com>,
	"linux-f2fs-devel@lists.sourceforge.net" 
	<linux-f2fs-devel@lists.sourceforge.net>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Jaegeuk Kim <jaegeuk@kernel.org>, Chao Yu <yuchao0@huawei.com>
Cc: "linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	Javier Gonzalez <javier@javigon.com>,
	Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Subject: Re: [PATCH] f2fs: Fix direct IO handling
Date: Thu, 28 Nov 2019 15:50:30 +0530
Message-ID: <20191128102033.6085952057@d06av21.portsmouth.uk.ibm.com>
In-Reply-To: <BYAPR04MB5816C82F708612381216895BE7470@BYAPR04MB5816.namprd04.prod.outlook.com>



On 11/28/19 7:40 AM, Damien Le Moal wrote:
> On 2019/11/26 17:34, Ritesh Harjani wrote:
>> Hello Damien,
>>
>> IIUC, you are trying to fix a stale data read by a DIO read for the
>> case you explained in your patch, i.e. a DIO write that is forced to
>> be executed as buffered IO.
>>
>> Coincidentally I was just looking at the same code path just now.
>> So I do have a query for you/the f2fs group. The below could be a
>> silly one, as I don't understand f2fs in great detail.
>>
>> How is a DIO read protected against reading stale data when racing
>> with mmap writes via f2fs_vm_page_mkwrite?
>>
>> f2fs_vm_page_mkwrite()		 f2fs_direct_IO (read)
>> 					filemap_write_and_wait_range()
>> 	-> f2fs_get_blocks()
>> 					 -> submit_bio()
>>
>> 	-> set_page_dirty()
>>
>> Is the above race possible with the current f2fs code, i.e. could
>> f2fs_direct_IO read stale data from blocks which were allocated due
>> to the mmap fault?
> 
> The faulted page is locked until the fault is fully processed so direct
> IO has to wait for that to complete first.

How about the below parallelism?

  f2fs_vm_page_mkwrite()		 f2fs_direct_IO (read)
  					filemap_write_and_wait_range()
	-> down_read(->i_mmap_sem);
	-> lock_page()
	-> f2fs_get_blocks()
  					 -> submit_bio()

  	-> set_page_dirty()

Can the above DIO read not expose stale data from the block which was
allocated in the f2fs_vm_page_mkwrite path?
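
To spell out the window I am worried about, here is a reduced sketch of
the mkwrite side, based on my reading of f2fs_vm_page_mkwrite() in
fs/f2fs/file.c. The function name here is made up, and error handling
and unrelated steps are dropped, so please treat it as an illustration
only, not the actual code:

  static vm_fault_t mkwrite_sketch(struct vm_fault *vmf)
  {
  	struct page *page = vmf->page;
  	struct inode *inode = file_inode(vmf->vma->vm_file);
  	struct dnode_of_data dn;

  	down_read(&F2FS_I(inode)->i_mmap_sem);
  	lock_page(page);

  	/* Allocate the block backing this faulted page. */
  	set_new_dnode(&dn, inode, NULL, NULL, 0);
  	f2fs_get_block(&dn, page->index);

  	/*
  	 * Window: the page is not dirty yet, so a concurrent
  	 * f2fs_direct_IO() read may already have returned from
  	 * filemap_write_and_wait_range() (nothing was dirty) and
  	 * can submit_bio() against the block allocated above,
  	 * reading whatever stale data the device holds there.
  	 */

  	set_page_dirty(page);	/* too late for that DIO read */

  	up_read(&F2FS_I(inode)->i_mmap_sem);
  	return VM_FAULT_LOCKED;	/* page stays locked; mm unlocks it */
  }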


> 
>>
>> Am I missing something here?
>>
>> -ritesh
>>
>> On 11/26/19 1:27 PM, Damien Le Moal wrote:
>>> f2fs_preallocate_blocks() identifies direct IOs using the IOCB_DIRECT
>>> flag for a kiocb structure. However, the file system direct IO handler
>>> function f2fs_direct_IO() may have decided that a direct IO has to be
>>> executed as a buffered IO using the function f2fs_force_buffered_io().
>>> This is the case, for instance, for volumes including a zoned block
>>> device and for unaligned write IOs with LFS mode enabled.
>>>
>>> These two different methods of identifying direct IOs can result in
>>> inconsistencies, generating stale data accesses for direct reads issued
>>> after a direct IO write that is treated as a buffered write. Fix this
>>> inconsistency by combining the IOCB_DIRECT flag test with the result
>>> of f2fs_force_buffered_io().
>>>
>>> Reported-by: Javier Gonzalez <javier@javigon.com>
>>> Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
>>> ---
>>>    fs/f2fs/data.c | 4 +++-
>>>    1 file changed, 3 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
>>> index 5755e897a5f0..8ac2d3b70022 100644
>>> --- a/fs/f2fs/data.c
>>> +++ b/fs/f2fs/data.c
>>> @@ -1073,6 +1073,8 @@ int f2fs_preallocate_blocks(struct kiocb *iocb, struct iov_iter *from)
>>>    	int flag;
>>>    	int err = 0;
>>>    	bool direct_io = iocb->ki_flags & IOCB_DIRECT;
>>> +	bool do_direct_io = direct_io &&
>>> +		!f2fs_force_buffered_io(inode, iocb, from);
>>>    
>>>    	/* convert inline data for Direct I/O*/
>>>    	if (direct_io) {
>>> @@ -1081,7 +1083,7 @@ int f2fs_preallocate_blocks(struct kiocb *iocb, struct iov_iter *from)
>>>    			return err;
>>>    	}
>>>    
>>> -	if (direct_io && allow_outplace_dio(inode, iocb, from))
>>> +	if (do_direct_io && allow_outplace_dio(inode, iocb, from))
>>>    		return 0;
>>>    
>>>    	if (is_inode_flag_set(inode, FI_NO_PREALLOC))
>>>
>>
>>
> 
> 
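
For context, this is my understanding of when f2fs forces a direct IO
to be executed as buffered IO: a simplified sketch of the
f2fs_force_buffered_io() helper from fs/f2fs/f2fs.h around this kernel
version. The helper names are the real ones, but the body is trimmed
(multi-device and checkpoint-disabled checks omitted), so treat it as
an approximation rather than the exact code:

  static inline bool f2fs_force_buffered_io(struct inode *inode,
  				struct kiocb *iocb, struct iov_iter *iter)
  {
  	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);

  	/* Encryption/verity need post-read processing. */
  	if (f2fs_post_read_required(inode))
  		return true;

  	/*
  	 * On zoned block devices, all IOs must be serialized by the
  	 * log-structured writing path, so direct IO is not allowed.
  	 */
  	if (f2fs_sb_has_blkzoned(sbi))
  		return true;

  	/* In LFS mode, unaligned writes cannot be done in place. */
  	if (test_opt(sbi, LFS) && iov_iter_rw(iter) == WRITE &&
  			block_unaligned_IO(inode, iocb, iter))
  		return true;

  	return false;
  }

This is exactly the inconsistency the patch addresses:
f2fs_preallocate_blocks() only looked at IOCB_DIRECT, while
f2fs_direct_IO() additionally consults these conditions.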

