From: Dave Chinner <david@fromorbit.com>
To: Christoph Hellwig <hch@lst.de>
Cc: "J. Bruce Fields" <bfields@fieldses.org>,
	Jeff Layton <jlayton@primarydata.com>,
	linux-nfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	xfs@oss.sgi.com
Subject: Re: [PATCH 17/18] xfs: implement pnfs export operations
Date: Thu, 8 Jan 2015 08:11:40 +1100
Message-ID: <20150107211140.GC25000@dastard>
In-Reply-To: <20150107104010.GD28783@lst.de>

On Wed, Jan 07, 2015 at 11:40:10AM +0100, Christoph Hellwig wrote:
> On Wed, Jan 07, 2015 at 11:24:34AM +1100, Dave Chinner wrote:
> > > diff --git a/fs/xfs/xfs_fsops.c b/fs/xfs/xfs_fsops.c
> > > index fdc6422..2b86be8 100644
> > > --- a/fs/xfs/xfs_fsops.c
> > > +++ b/fs/xfs/xfs_fsops.c
> > > @@ -601,6 +601,8 @@ xfs_growfs_data(
> > >  	if (!mutex_trylock(&mp->m_growlock))
> > >  		return -EWOULDBLOCK;
> > >  	error = xfs_growfs_data_private(mp, in);
> > > +	if (!error)
> > > +		mp->m_generation++;
> > >  	mutex_unlock(&mp->m_growlock);
> > >  	return error;
> > >  }
> > 
> > I couldn't find an explanation of what this generation number is
> > for. What are its semantics w.r.t. server crashes?
> 
> The generation is incremented when we grow the filesystem, so that
> a new layout (block mapping) returned to the client refers to the
> new NFS device ID, which will make the client aware of the new size.
> 
> The device IDs aren't persistent, so after a server crash / reboot
> we'll start at zero again.

So what happens if a grow occurs, then the server crashes, and the
client on reboot sees the same generation as before the grow
occurred?

Perhaps it would be better to just initialise the generation with a
random number?
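
Something like this at mount time would do it - an untested sketch,
assuming the generation is set up wherever the rest of the pNFS
export state is initialised:

	/*
	 * Untested: seed the device ID generation with a random value
	 * so that after a crash/reboot the server can't hand out a
	 * generation the client already saw for the pre-grow geometry.
	 */
	mp->m_generation = prandom_u32();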

> I'll add comments explaining this to the code.
> 
> > Why does this function get passed an offset that is not actually used?
> 
> Historic reasons..
> 
> > > +static int
> > > +xfs_fs_update_flags(
> > > +	struct xfs_inode	*ip)
> > > +{
> > > +	struct xfs_mount	*mp = ip->i_mount;
> > > +	struct xfs_trans	*tp;
> > > +	int			error;
> > > +
> > > +	/*
> > > +	 * Update the mode, and prealloc flag bits.
> > > +	 */
> > > +	tp = xfs_trans_alloc(mp, XFS_TRANS_WRITEID);
> > > +	error = xfs_trans_reserve(tp, &M_RES(mp)->tr_writeid, 0, 0);
> > > +	if (error) {
> > > +		xfs_trans_cancel(tp, 0);
> > > +		return error;
> > > +	}
> > > +
> > > +	xfs_ilock(ip, XFS_ILOCK_EXCL);
> > > +	xfs_trans_ijoin(tp, ip, XFS_ILOCK_EXCL);
> > > +	ip->i_d.di_mode &= ~S_ISUID;
> > > +	if (ip->i_d.di_mode & S_IXGRP)
> > > +		ip->i_d.di_mode &= ~S_ISGID;
> > > +
> > > +	ip->i_d.di_flags |= XFS_DIFLAG_PREALLOC;
> > > +
> > > +	xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
> > > +	return xfs_trans_commit(tp, 0);
> > > +}
> > 
> > That needs timestamp changes as well. i.e.:
> > 
> > 	xfs_trans_ichgtime(tp, ip, XFS_ICHGTIME_MOD | XFS_ICHGTIME_CHG);
> 
> The time stamps are only updated when we actually commit the data.
> Updating them here might be harmless, but I'll have to dig into the
> protocol specification and tests a bit more to check whether the
> additional timestamp update here would cause any problems.
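
If it does turn out to be OK, I'd expect it to slot in just before
the inode is logged, i.e. something like this (untested sketch):

	ip->i_d.di_flags |= XFS_DIFLAG_PREALLOC;

	/* untested: bump mtime/ctime along with the flag change */
	xfs_trans_ichgtime(tp, ip, XFS_ICHGTIME_MOD | XFS_ICHGTIME_CHG);
	xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
	return xfs_trans_commit(tp, 0);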
> 
> > > +
> > > +/*
> > > + * Get a layout for the pNFS client.
> > > + *
> > > + * Note that in the allocation case we do not force out the transaction here.
> > > + * There is no metadata update that is required to be stable for NFS
> > > + * semantics, and layouts are not valid over a server crash.  Instead
> > > + * we'll have to be careful in the commit routine as it might pass us
> > > + * blocks for an allocation that never made it to disk in the recovery
> > > + * case.
> > 
> > I think you are saying that because block allocation is an async
> > transaction, we have to deal with the possibility that we crash
> > before the transaction hits the disk.
> > 
> > How often do we have to allocate
> > new blocks like this? Do we need to use async transactions for this
> > case, or should we simply do the brute force thing (by making the
> > allocation transaction synchronous) initially and then, if
> > performance problems arise, optimise from there?
> 
> Every block allocation from a pNFS client goes through this path, so
> yes it is performance critical.

Sure, but how many allocations per second are we expecting to have
to support? We can do tens of thousands of synchronous transactions
per second on luns with non-volatile write caches, so I'm really
wondering how much of a limitation this is going to be in the real
world. Do you have any numbers?
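
The brute force version is only a couple of lines in the allocation
transaction, something like this (untested sketch):

	/*
	 * Untested: commit the allocation synchronously so the blocks
	 * handed out in the layout are on disk before the client can
	 * see them, and recovery can't pass us blocks that were never
	 * allocated on disk.
	 */
	xfs_trans_set_sync(tp);
	error = xfs_trans_commit(tp, 0);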

> > So whenever the server first starts up the generation number in a
> > map is going to be zero - what purpose does this actually serve?
> 
> So that we can communicate if a device was grown to the client, which
> in this case needs to re-read the device information.

Why does it need to re-read the device information? The layouts that
are handed to it are still going to be valid from the server POV...

> > > +	/*
> > > +	 * Make sure reads through the pagecache see the new data.
> > > +	 */
> > > +	invalidate_inode_pages2(inode->i_mapping);
> > 
> > Probably should do that first. Also, what happens if there is local
> > dirty data on the file at this point? Doesn't this just toss them
> > away?
> 
> If there was local data it will be tossed.  For regular writes this can't
> happen because we recall outstanding layouts in the write path.  For
> mmap we ignore this problem for now, as a pNFS server should generally
> not be used locally.

Comments, please. ;)
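
FWIW, if local dirty data ever does matter here (mmap or anything
else), the usual idiom would be to write it back before tossing the
page cache rather than discarding it silently - untested sketch:

	/*
	 * Untested: flush local dirty pages before invalidating so we
	 * don't silently throw away locally written data.
	 */
	error = filemap_write_and_wait(inode->i_mapping);
	if (error)
		return error;
	error = invalidate_inode_pages2(inode->i_mapping);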

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
