From: "ruansy.fnst@fujitsu.com" <ruansy.fnst@fujitsu.com>
To: Ritesh Harjani <ritesh.list@gmail.com>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-xfs@vger.kernel.org" <linux-xfs@vger.kernel.org>,
	"linux-nvdimm@lists.01.org" <linux-nvdimm@lists.01.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"darrick.wong@oracle.com" <darrick.wong@oracle.com>,
	"willy@infradead.org" <willy@infradead.org>,
	"jack@suse.cz" <jack@suse.cz>,
	"viro@zeniv.linux.org.uk" <viro@zeniv.linux.org.uk>,
	"linux-btrfs@vger.kernel.org" <linux-btrfs@vger.kernel.org>,
	"ocfs2-devel@oss.oracle.com" <ocfs2-devel@oss.oracle.com>,
	"david@fromorbit.com" <david@fromorbit.com>,
	"hch@lst.de" <hch@lst.de>, "rgoldwyn@suse.de" <rgoldwyn@suse.de>,
	Goldwyn Rodrigues <rgoldwyn@suse.com>
Subject: RE: [PATCH v3 05/10] fsdax: Replace mmap entry in case of CoW
Date: Thu, 1 Apr 2021 07:03:19 +0000	[thread overview]
Message-ID: <OSBPR01MB2920C6CE58C98BF5A42E1700F47B9@OSBPR01MB2920.jpnprd01.prod.outlook.com> (raw)
In-Reply-To: <20210401063932.tro7a4hhy25zdmho@riteshh-domain>



> -----Original Message-----
> From: Ritesh Harjani <ritesh.list@gmail.com>
> Sent: Thursday, April 1, 2021 2:40 PM
> Subject: Re: [PATCH v3 05/10] fsdax: Replace mmap entry in case of CoW
> 
> On 21/03/19 09:52AM, Shiyang Ruan wrote:
> > We replace the existing entry with the newly allocated one in case of CoW.
> > Also, we mark the entry as PAGECACHE_TAG_TOWRITE so that writeback marks
> > this entry as write-protected.  This helps with snapshots, so that new write
> > page faults after a snapshot trigger a CoW.
> >
> 
> Please correct me if I am wrong here. So the flow is like this:
> 1. In case of CoW on a reflinked file, when a write is attempted on an mmapped
>    file, the DAX fault handler calls the filesystem's ->iomap_begin(), which
>    populates iomap and srcmap: srcmap describes where the existing data should
>    be read from, and iomap describes where the new write should go.
> 2. dax_insert_entry(), as part of the fault handling, then takes care of
>    removing the old entry, inserting the new pfn entry into the xarray, and
>    marking it with PAGECACHE_TAG_TOWRITE so that dax writeback can mark the
>    entry as write-protected.
> Is my above understanding correct?

Yes, you are right.
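
To spell out step 1 a bit more: for a write fault over a shared (reflinked)
extent, the filesystem's ->iomap_begin() is expected to hand back something
like the following.  This is only an illustrative sketch -- the block
addresses are placeholders, and the real filesystem side is outside this
patch:

	/* sketch: ->iomap_begin() result for a write fault on a shared extent */
	srcmap->addr  = old_block_addr;		/* existing data: copy source */
	srcmap->type  = IOMAP_MAPPED;
	iomap->addr   = new_block_addr;		/* newly allocated blocks: write target */
	iomap->type   = IOMAP_MAPPED;
	iomap->flags |= IOMAP_F_SHARED;		/* lets the fault path set DAX_IF_COW */

And the reason the PAGECACHE_TAG_TOWRITE tag in step 2 leads to
write-protection is the existing dax writeback path, which walks the tagged
entries roughly like this (simplified, from memory):

	/* simplified: dax_writeback_mapping_range() visits TOWRITE-tagged entries */
	xas_for_each_marked(&xas, entry, end_index, PAGECACHE_TAG_TOWRITE)
		dax_writeback_one(&xas, dax_dev, mapping, entry);
	/*
	 * dax_writeback_one() write-protects all userspace mappings of the
	 * pfn and flushes its cachelines, so the next mmap write faults
	 * again and, with IOMAP_F_SHARED set, goes through CoW.
	 */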

> 
> > Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
> > Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
> > Reviewed-by: Christoph Hellwig <hch@lst.de>
> > ---
> >  fs/dax.c | 37 ++++++++++++++++++++++++++-----------
> >  1 file changed, 26 insertions(+), 11 deletions(-)
> >
> > diff --git a/fs/dax.c b/fs/dax.c
> > index 181aad97136a..cfe513eb111e 100644
> > --- a/fs/dax.c
> > +++ b/fs/dax.c
> > @@ -722,6 +722,9 @@ static int copy_cow_page_dax(struct block_device *bdev, struct dax_device *dax_d
> >  	return 0;
> >  }
> >
> > +#define DAX_IF_DIRTY		(1 << 0)
> > +#define DAX_IF_COW		(1 << 1)
> > +
> >
> A small comment explaining that these are the DAX insert flags used in
> dax_insert_entry() would help.

OK.  I'll add it.
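
Something like this, perhaps (wording is open to suggestion):

/* Flags for dax_insert_entry(), passed in via @insert_flags */
#define DAX_IF_DIRTY		(1 << 0)	/* mark the entry (and inode) dirty */
#define DAX_IF_COW		(1 << 1)	/* CoW: replace the entry and tag it TOWRITE */
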
> 
> >
> >  /*
> >   * By this point grab_mapping_entry() has ensured that we have a locked entry
> >   * of the appropriate size so we don't have to worry about downgrading PMDs to
> > @@ -729,16 +732,19 @@ static int copy_cow_page_dax(struct block_device *bdev, struct dax_device *dax_d
> >   * already in the tree, we will skip the insertion and just dirty the PMD as
> >   * appropriate.
> >   */
> > -static void *dax_insert_entry(struct xa_state *xas,
> > -		struct address_space *mapping, struct vm_fault *vmf,
> > -		void *entry, pfn_t pfn, unsigned long flags, bool dirty)
> > +static void *dax_insert_entry(struct xa_state *xas, struct vm_fault *vmf,
> > +		void *entry, pfn_t pfn, unsigned long flags,
> > +		unsigned int insert_flags)
> >  {
> > +	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
> >  	void *new_entry = dax_make_entry(pfn, flags);
> > +	bool dirty = insert_flags & DAX_IF_DIRTY;
> > +	bool cow = insert_flags & DAX_IF_COW;
> >
> >  	if (dirty)
> >  		__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
> >
> > -	if (dax_is_zero_entry(entry) && !(flags & DAX_ZERO_PAGE)) {
> > +	if (cow || (dax_is_zero_entry(entry) && !(flags & DAX_ZERO_PAGE))) {
> >  		unsigned long index = xas->xa_index;
> >  		/* we are replacing a zero page with block mapping */
> >  		if (dax_is_pmd_entry(entry))
> > @@ -750,7 +756,7 @@ static void *dax_insert_entry(struct xa_state *xas,
> >
> >  	xas_reset(xas);
> >  	xas_lock_irq(xas);
> > -	if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
> > +	if (cow || dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
> >  		void *old;
> >
> >  		dax_disassociate_entry(entry, mapping, false);
> > @@ -774,6 +780,9 @@ static void *dax_insert_entry(struct xa_state *xas,
> >  	if (dirty)
> >  		xas_set_mark(xas, PAGECACHE_TAG_DIRTY);
> >
> > +	if (cow)
> > +		xas_set_mark(xas, PAGECACHE_TAG_TOWRITE);
> > +
> >  	xas_unlock_irq(xas);
> >  	return entry;
> >  }
> > @@ -1098,8 +1107,7 @@ static vm_fault_t dax_load_hole(struct xa_state *xas,
> >  	pfn_t pfn = pfn_to_pfn_t(my_zero_pfn(vaddr));
> >  	vm_fault_t ret;
> >
> > -	*entry = dax_insert_entry(xas, mapping, vmf, *entry, pfn,
> > -			DAX_ZERO_PAGE, false);
> > +	*entry = dax_insert_entry(xas, vmf, *entry, pfn, DAX_ZERO_PAGE, 0);
> >
> >  	ret = vmf_insert_mixed(vmf->vma, vaddr, pfn);
> >  	trace_dax_load_hole(inode, vmf, ret);
> > @@ -1126,8 +1134,8 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
> >  		goto fallback;
> >
> >  	pfn = page_to_pfn_t(zero_page);
> > -	*entry = dax_insert_entry(xas, mapping, vmf, *entry, pfn,
> > -			DAX_PMD | DAX_ZERO_PAGE, false);
> > +	*entry = dax_insert_entry(xas, vmf, *entry, pfn,
> > +				  DAX_PMD | DAX_ZERO_PAGE, 0);
> >
> >  	if (arch_needs_pgtable_deposit()) {
> >  		pgtable = pte_alloc_one(vma->vm_mm);
> > @@ -1431,6 +1439,7 @@ static vm_fault_t dax_fault_actor(struct vm_fault *vmf, pfn_t *pfnp,
> >  	loff_t pos = (loff_t)xas->xa_offset << PAGE_SHIFT;
> >  	bool write = vmf->flags & FAULT_FLAG_WRITE;
> >  	bool sync = dax_fault_is_synchronous(flags, vmf->vma, iomap);
> > +	unsigned int insert_flags = 0;
> >  	int err = 0;
> >  	pfn_t pfn;
> >  	void *kaddr;
> > @@ -1453,8 +1462,14 @@ static vm_fault_t dax_fault_actor(struct vm_fault *vmf, pfn_t *pfnp,
> >  	if (err)
> >  		return dax_fault_return(err);
> >
> > -	entry = dax_insert_entry(xas, mapping, vmf, entry, pfn, 0,
> > -				 write && !sync);
> > +	if (write) {
> > +		if (!sync)
> > +			insert_flags |= DAX_IF_DIRTY;
> > +		if (iomap->flags & IOMAP_F_SHARED)
> > +			insert_flags |= DAX_IF_COW;
> > +	}
> > +
> > +	entry = dax_insert_entry(xas, vmf, entry, pfn, 0, insert_flags);
> >
> >  	if (write && srcmap->addr != iomap->addr) {
> >  		err = dax_iomap_cow_copy(pos, size, size, srcmap, kaddr, false);
> >
> 
> Rest looks good to me. Please feel free to add
> Reviewed-by: Ritesh Harjani <riteshh@gmail.com>
> 
> Sorry about changing my email address in the middle of this code review.
> I am planning to use the above gmail id as my primary account for all
> upstream work from now on.
> 

--
Thanks,
Ruan Shiyang.

> > --
> > 2.30.1
> >
> >
> >