* How to control the order of different export options for different client formats?
@ 2011-05-17 16:21 James Pearson
  2011-05-17 22:01 ` NeilBrown
  2011-05-18  0:46 ` Max Matveev
  0 siblings, 2 replies; 26+ messages in thread
From: James Pearson @ 2011-05-17 16:21 UTC (permalink / raw)
  To: linux-nfs

I'm using CentOS 5.x (nfs-utils based on v1.0.9) - and have been using 
the following in /etc/exports:

/export *(rw,async) @backup(rw,no_root_squash,async)

which works fine - hosts in the backup NIS netgroup mount the file 
system with no_root_squash, and other clients with root_squash.

However, I now want to restrict the export to all clients in a single 
subnet - so I now have /etc/exports as:

/export 172.16.0.0/20(rw,async) @backup(rw,no_root_squash,async)

Unfortunately, hosts in the backup NIS netgroup (which are also in the 
172.16.0.0/20 subnet) no longer mount with no_root_squash.

It appears that the subnet export takes precedence over the netgroup 
export (it doesn't matter in what order the subnet/netgroup entries 
are listed in /etc/exports) - so the netgroup client options are ignored 
because a match has already been found in the subnet export.

Is there any way to control the order in which clients are checked for 
export options?

i.e. I would like netgroups to take precedence over subnets

Thanks

James Pearson

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: How to control the order of different export options for different client formats?
  2011-05-17 16:21 How to control the order of different export options for different client formats? James Pearson
@ 2011-05-17 22:01 ` NeilBrown
  2011-05-18 10:19   ` James Pearson
  2011-05-18  0:46 ` Max Matveev
  1 sibling, 1 reply; 26+ messages in thread
From: NeilBrown @ 2011-05-17 22:01 UTC (permalink / raw)
  To: James Pearson; +Cc: linux-nfs

On Tue, 17 May 2011 17:21:24 +0100 James Pearson <james-p@moving-picture.com>
wrote:

> I'm using CentOS 5.x (nfs-utils based on v1.0.9) - and have been using 
> the following in /etc/exports:
> 
> /export *(rw,async) @backup(rw,no_root_squash,async)
> 
> which works fine - hosts in the backup NIS netgroup mount the file 
> system with no_root_squash and other clients with root_squash
> 
> However, I now want to restrict the export to all clients in a single 
> subnet - so I now have /etc/exports as:
> 
> /export 172.16.0.0/20(rw,async) @backup(rw,no_root_squash,async)
> 
> Unfortunately, hosts in the backup NIS netgroup (which are also in the 
> 172.16.0.0/20 subnet) no longer mount with no_root_squash
> 
> It appears that the subnet export takes precedence over the netgroup 
> export (it doesn't matter in what order the subnets/netgroups exports 
> are listed in /etc/exports) - so the netgroup client options are ignored 
> as a match has already been found in the subnet export.
> 
> Is there any way to control the order in which clients are checked for 
> export options?
> 
> i.e. I would like netgroups to take precedence over subnets

Unfortunately you cannot do that.

The place in the code where this is determined is towards the end of
'lookup_export' in utils/mountd/cache.c

Were I to try to 'fix' this I would probably define a new field in 'struct
exportent' which holds a 'priority'.

Then allow a setting like "priority=4" in /etc/exports

Then change the code in lookup_export to choose the one with the higher
priority, rather than the 'first' one.

NeilBrown


* Re: How to control the order of different export options for different client formats?
  2011-05-17 16:21 How to control the order of different export options for different client formats? James Pearson
  2011-05-17 22:01 ` NeilBrown
@ 2011-05-18  0:46 ` Max Matveev
  1 sibling, 0 replies; 26+ messages in thread
From: Max Matveev @ 2011-05-18  0:46 UTC (permalink / raw)
  To: James Pearson; +Cc: linux-nfs

On Tue, 17 May 2011 17:21:24 +0100, James Pearson wrote:

 james-p> Is there any way to control the order in which clients are
 james-p> checked for export options?

 james-p> i.e. I would like netgroups to take precedence over subnets

You're out of luck here - the entries are checked in the following
order: FQDN, subnet, wildcard, netgroup, anonymous and finally gss.
Here 'wildcard' means anything except the bare '*', which is considered
anonymous.  Any entries on the same "level", i.e. two netgroups or two
FQDNs, are checked in the same order in which they appear in
/etc/exports.

max


* Re: How to control the order of different export options for different client formats?
  2011-05-17 22:01 ` NeilBrown
@ 2011-05-18 10:19   ` James Pearson
  2011-05-18 11:54     ` Performance Issue with multiple dataserver Taousif_Ansari
  2011-05-18 16:20     ` How to control the order of different export options for different client formats? J. Bruce Fields
  0 siblings, 2 replies; 26+ messages in thread
From: James Pearson @ 2011-05-18 10:19 UTC (permalink / raw)
  To: linux-nfs

NeilBrown wrote:
> 
> Unfortunately you cannot do that.
> 
> The place in the code where this is determined is towards the end of
> 'lookup_export' in utils/mountd/cache.c
> 
> Were I to try to 'fix' this I would probably define a new field in 'struct
> exportent' which holds a 'priority'.
> 
> Then allow a setting like "priority=4" in /etc/exports
> 
> Then change the code in lookup_export to choose the one with the higher
> priority, rather than the 'first' one.
> 
> NeilBrown

I've hacked the source to make netgroups take precedence over subnets 
by moving MCL_NETGROUP before MCL_SUBNETWORK in the enum in 
support/include/exportfs.h - which works for me, as I only use 
netgroups, subnets and anonymous (in that priority order).

IMHO the priority of exports should really be as they appear on the line 
in /etc/exports, but I guess if that were to change, it would break 
existing /etc/exports that use the current priority ordering (either by 
design or accident!).

Having a priority option would be a very good idea - and maybe, in the 
meantime, the exports man page should be updated with info about the 
current priority ordering?

Thanks

James Pearson


* Performance Issue with multiple dataserver
  2011-05-18 10:19   ` James Pearson
@ 2011-05-18 11:54     ` Taousif_Ansari
  2011-05-18 16:12       ` J. Bruce Fields
  2011-05-18 16:20     ` How to control the order of different export options for different client formats? J. Bruce Fields
  1 sibling, 1 reply; 26+ messages in thread
From: Taousif_Ansari @ 2011-05-18 11:54 UTC (permalink / raw)
  To: linux-nfs

Hi,

I have set up pNFS with a single dataserver and with two dataservers and run the IOzone tool on both; I found that the performance with multiple dataservers is lower than the performance with a single dataserver.

Here are some numbers, which were captured by the IOzone tool.


							  4	  8	 16	 32	 64	 128	 256	 512	1024	<== Record Length in KB
With Single Dataserver:
Read operation for file size 1 MB-		66415	66359	63630	70358	86223	70256	66047	66068	68489	<== IO kB/sec
Write operation for file size 1 MB-		18827	16920	18846	17039	18896	17009	17173	19206	17947	<== IO kB/sec

With Two Dataservers :
Read operation for file size 1 MB-		36882	381198	38150	38084	38749	33663	34398	37313	37847	<== IO kB/sec
Write operation for file size 1 MB-		5461	4661	5586	4870	5227	4922	4214	5572	4658	<== IO kB/sec


Can somebody tell me what the issue could be?


* Re: Performance Issue with multiple dataserver
  2011-05-18 11:54     ` Performance Issue with multiple dataserver Taousif_Ansari
@ 2011-05-18 16:12       ` J. Bruce Fields
  2011-05-19  5:26         ` Taousif_Ansari
  0 siblings, 1 reply; 26+ messages in thread
From: J. Bruce Fields @ 2011-05-18 16:12 UTC (permalink / raw)
  To: Taousif_Ansari; +Cc: linux-nfs

You sent this message as a reply to an unrelated message, which is
confusing to those of us with threaded mail readers.

On Wed, May 18, 2011 at 05:24:45PM +0530, Taousif_Ansari@DELLTEAM.com wrote:
> I have done pNFS setup with single Dataserver and Two Dataserver and ran the IOzone tool on both, I found that the performance with multiple dataservers is less than the performance with single dataservers.

What are you using as the server, and what as the client?

--b.

> 
> Here are some numbers, which were captured by the IOzone tool.
> 
> 
> 							  4	  8	 16	 32	 64	 128	 256	 512	1024	<== Record Length in KB
> With Single Dataserver:
> Read operation for file size 1 MB-		66415	66359	63630	70358	86223	70256	66047	66068	68489	<== IO kB/sec
> Write operation for file size 1 MB-		18827	16920	18846	17039	18896	17009	17173	19206	17947	<== IO kB/sec
> 
> With Two Dataservers :
> Read operation for file size 1 MB-		36882	381198	38150	38084	38749	33663	34398	37313	37847	<== IO kB/sec
> Write operation for file size 1 MB-		5461	4661	5586	4870	5227	4922	4214	5572	4658	<== IO kB/sec
> 
> 
> Can somebody tell me What could be the issue....
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: How to control the order of different export options for different client formats?
  2011-05-18 10:19   ` James Pearson
  2011-05-18 11:54     ` Performance Issue with multiple dataserver Taousif_Ansari
@ 2011-05-18 16:20     ` J. Bruce Fields
  2011-05-20 13:38       ` James Pearson
  1 sibling, 1 reply; 26+ messages in thread
From: J. Bruce Fields @ 2011-05-18 16:20 UTC (permalink / raw)
  To: James Pearson; +Cc: linux-nfs

On Wed, May 18, 2011 at 11:19:37AM +0100, James Pearson wrote:
> NeilBrown wrote:
> >
> >Unfortunately you cannot do that.
> >
> >The place in the code where this is determined is towards the end of
> >'lookup_export' in utils/mountd/cache.c
> >
> >Were I to try to 'fix' this I would probably define a new field in 'struct
> >exportent' which holds a 'priority'.
> >
> >Then allow a setting like "priority=4" in /etc/exports
> >
> >Then change the code in lookup_export to choose the one with the higher
> >priority, rather than the 'first' one.
> >
> >NeilBrown
> 
> I've hacked the source to make netgroups take precedence over
> subnets by  moving MCL_NETGROUP before MCL_SUBNETWORK in the enum in
> support/include/exportfs.h - which works for me, as I only use
> netgroups, subnets and anonymous (in that priority order).
> 
> IMHO the priority of exports should really be as they appear on the
> line in /etc/exports,

Sounds reasonable to me.

> but I guess if that were to change, it would
> break existing /etc/exports that use the current priority ordering
> (either by design or accident!).

Maybe some new /etc/exports syntax could allow the administrator to opt
into a new priority ordering.
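
Purely as a strawman (this syntax is not implemented anywhere), such an opted-in /etc/exports could combine Neil's suggested option with the existing format:

```
/export 172.16.0.0/20(rw,async,priority=1) @backup(rw,no_root_squash,async,priority=4)
```

The higher number would win, so hosts in @backup would get no_root_squash even though they also match the subnet entry.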

> Having a priority option would be a very good idea - and may be in
> the meantime the exports man page should be updated with info about
> the current priority ordering?

Sounds good.  Could you send in a patch?

--b.


* RE: Performance Issue with multiple dataserver
  2011-05-18 16:12       ` J. Bruce Fields
@ 2011-05-19  5:26         ` Taousif_Ansari
  2011-05-19 11:50           ` J. Bruce Fields
  0 siblings, 1 reply; 26+ messages in thread
From: Taousif_Ansari @ 2011-05-19  5:26 UTC (permalink / raw)
  To: bfields; +Cc: linux-nfs

Hi,

I am using linux-pnfs-2.6.38 (linux-pnfs-ae7441f.tar) on both the server and the client, downloaded from http://git.linux-nfs.org/?p=bhalevy/linux-pnfs.git;a=summary on Fedora 14.

Extremely sorry for causing confusion.

-----Original Message-----
From: J. Bruce Fields [mailto:bfields@fieldses.org] 
Sent: Wednesday, May 18, 2011 9:43 PM
To: Ansari, Taousif - Dell Team
Cc: linux-nfs@vger.kernel.org
Subject: Re: Performance Issue with multiple dataserver

You sent this message as a reply to an unrelated message, which is
confusing to those of us with threaded mail readers.

On Wed, May 18, 2011 at 05:24:45PM +0530, Taousif_Ansari@DELLTEAM.com wrote:
> I have done pNFS setup with single Dataserver and Two Dataserver and ran the IOzone tool on both, I found that the performance with multiple dataservers is less than the performance with single dataservers.

What are you using as the server, and what as the client?

--b.

> 
> Here are some numbers, which were captured by the IOzone tool.
> 
> 
> 							  4	  8	 16	 32	 64	 128	 256	 512	1024	<== Record Length in KB
> With Single Dataserver:
> Read operation for file size 1 MB-		66415	66359	63630	70358	86223	70256	66047	66068	68489	<== IO kB/sec
> Write operation for file size 1 MB-		18827	16920	18846	17039	18896	17009	17173	19206	17947	<== IO kB/sec
> 
> With Two Dataservers :
> Read operation for file size 1 MB-		36882	381198	38150	38084	38749	33663	34398	37313	37847	<== IO kB/sec
> Write operation for file size 1 MB-		5461	4661	5586	4870	5227	4922	4214	5572	4658	<== IO kB/sec
> 
> 
> Can somebody tell me What could be the issue....
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: Performance Issue with multiple dataserver
  2011-05-19  5:26         ` Taousif_Ansari
@ 2011-05-19 11:50           ` J. Bruce Fields
  2011-05-19 12:39             ` Taousif_Ansari
  0 siblings, 1 reply; 26+ messages in thread
From: J. Bruce Fields @ 2011-05-19 11:50 UTC (permalink / raw)
  To: Taousif_Ansari; +Cc: linux-nfs

On Thu, May 19, 2011 at 10:56:44AM +0530, Taousif_Ansari@DELLTEAM.com wrote:
> Hi,
> 
> I am using on Server linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) and on client also linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) downloaded from http://git.linux-nfs.org/?p=bhalevy/linux-pnfs.git;a=summary on Fedora 14.

So you're using GFS2 on the server?  With what sort of storage?

--b.

> 
> Extremely sorry for causing confusing .
> -----Original Message-----
> From: J. Bruce Fields [mailto:bfields@fieldses.org] 
> Sent: Wednesday, May 18, 2011 9:43 PM
> To: Ansari, Taousif - Dell Team
> Cc: linux-nfs@vger.kernel.org
> Subject: Re: Performance Issue with multiple dataserver
> 
> You sent this message as a reply to an unrelated message, which is
> confusing to those of us with threaded mail readers.
> 
> On Wed, May 18, 2011 at 05:24:45PM +0530, Taousif_Ansari@DELLTEAM.com wrote:
> > I have done pNFS setup with single Dataserver and Two Dataserver and ran the IOzone tool on both, I found that the performance with multiple dataservers is less than the performance with single dataservers.
> 
> What are you using as the server, and what as the client?
> 
> --b.
> 
> > 
> > Here are some numbers, which were captured by the IOzone tool.
> > 
> > 
> > 							  4	  8	 16	 32	 64	 128	 256	 512	1024	<== Record Length in KB
> > With Single Dataserver:
> > Read operation for file size 1 MB-		66415	66359	63630	70358	86223	70256	66047	66068	68489	<== IO kB/sec
> > Write operation for file size 1 MB-		18827	16920	18846	17039	18896	17009	17173	19206	17947	<== IO kB/sec
> > 
> > With Two Dataservers :
> > Read operation for file size 1 MB-		36882	381198	38150	38084	38749	33663	34398	37313	37847	<== IO kB/sec
> > Write operation for file size 1 MB-		5461	4661	5586	4870	5227	4922	4214	5572	4658	<== IO kB/sec
> > 
> > 
> > Can somebody tell me What could be the issue....
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html


* RE: Performance Issue with multiple dataserver
  2011-05-19 11:50           ` J. Bruce Fields
@ 2011-05-19 12:39             ` Taousif_Ansari
  2011-05-19 13:12               ` J. Bruce Fields
  0 siblings, 1 reply; 26+ messages in thread
From: Taousif_Ansari @ 2011-05-19 12:39 UTC (permalink / raw)
  To: bfields; +Cc: linux-nfs

I have followed the steps given at http://wiki.linux-nfs.org/wiki/index.php/Configuring_pNFS/spnfsd .

-Taousif

-----Original Message-----
From: J. Bruce Fields [mailto:bfields@fieldses.org] 
Sent: Thursday, May 19, 2011 5:20 PM
To: Ansari, Taousif - Dell Team
Cc: linux-nfs@vger.kernel.org
Subject: Re: Performance Issue with multiple dataserver

On Thu, May 19, 2011 at 10:56:44AM +0530, Taousif_Ansari@DELLTEAM.com wrote:
> Hi,
> 
> I am using on Server linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) and on client also linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) downloaded from http://git.linux-nfs.org/?p=bhalevy/linux-pnfs.git;a=summary on Fedora 14.

So you're using GFS2 on the server?  With what sort of storage?

--b.

> 
> Extremely sorry for causing confusing .
> -----Original Message-----
> From: J. Bruce Fields [mailto:bfields@fieldses.org] 
> Sent: Wednesday, May 18, 2011 9:43 PM
> To: Ansari, Taousif - Dell Team
> Cc: linux-nfs@vger.kernel.org
> Subject: Re: Performance Issue with multiple dataserver
> 
> You sent this message as a reply to an unrelated message, which is
> confusing to those of us with threaded mail readers.
> 
> On Wed, May 18, 2011 at 05:24:45PM +0530, Taousif_Ansari@DELLTEAM.com wrote:
> > I have done pNFS setup with single Dataserver and Two Dataserver and ran the IOzone tool on both, I found that the performance with multiple dataservers is less than the performance with single dataservers.
> 
> What are you using as the server, and what as the client?
> 
> --b.
> 
> > 
> > Here are some numbers, which were captured by the IOzone tool.
> > 
> > 
> > 							  4	  8	 16	 32	 64	 128	 256	 512	1024	<== Record Length in KB
> > With Single Dataserver:
> > Read operation for file size 1 MB-		66415	66359	63630	70358	86223	70256	66047	66068	68489	<== IO kB/sec
> > Write operation for file size 1 MB-		18827	16920	18846	17039	18896	17009	17173	19206	17947	<== IO kB/sec
> > 
> > With Two Dataservers :
> > Read operation for file size 1 MB-		36882	381198	38150	38084	38749	33663	34398	37313	37847	<== IO kB/sec
> > Write operation for file size 1 MB-		5461	4661	5586	4870	5227	4922	4214	5572	4658	<== IO kB/sec
> > 
> > 
> > Can somebody tell me What could be the issue....
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: Performance Issue with multiple dataserver
  2011-05-19 12:39             ` Taousif_Ansari
@ 2011-05-19 13:12               ` J. Bruce Fields
  2011-05-19 13:14                 ` Taousif_Ansari
  0 siblings, 1 reply; 26+ messages in thread
From: J. Bruce Fields @ 2011-05-19 13:12 UTC (permalink / raw)
  To: Taousif_Ansari; +Cc: linux-nfs

On Thu, May 19, 2011 at 06:09:21PM +0530, Taousif_Ansari@DELLTEAM.com wrote:
> I have followed the way given on http://wiki.linux-nfs.org/wiki/index.php/Configuring_pNFS/spnfsd .

Oh.  As noted there, spnfs is unmaintained.

And, in any case, we'd need many more details about your setup.

--b.

> 
> -Taousif
> 
> -----Original Message-----
> From: J. Bruce Fields [mailto:bfields@fieldses.org] 
> Sent: Thursday, May 19, 2011 5:20 PM
> To: Ansari, Taousif - Dell Team
> Cc: linux-nfs@vger.kernel.org
> Subject: Re: Performance Issue with multiple dataserver
> 
> On Thu, May 19, 2011 at 10:56:44AM +0530, Taousif_Ansari@DELLTEAM.com wrote:
> > Hi,
> > 
> > I am using on Server linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) and on client also linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) downloaded from http://git.linux-nfs.org/?p=bhalevy/linux-pnfs.git;a=summary on Fedora 14.
> 
> So you're using GFS2 on the server?  With what sort of storage?
> 
> --b.
> 
> > 
> > Extremely sorry for causing confusing .
> > -----Original Message-----
> > From: J. Bruce Fields [mailto:bfields@fieldses.org] 
> > Sent: Wednesday, May 18, 2011 9:43 PM
> > To: Ansari, Taousif - Dell Team
> > Cc: linux-nfs@vger.kernel.org
> > Subject: Re: Performance Issue with multiple dataserver
> > 
> > You sent this message as a reply to an unrelated message, which is
> > confusing to those of us with threaded mail readers.
> > 
> > On Wed, May 18, 2011 at 05:24:45PM +0530, Taousif_Ansari@DELLTEAM.com wrote:
> > > I have done pNFS setup with single Dataserver and Two Dataserver and ran the IOzone tool on both, I found that the performance with multiple dataservers is less than the performance with single dataservers.
> > 
> > What are you using as the server, and what as the client?
> > 
> > --b.
> > 
> > > 
> > > Here are some numbers, which were captured by the IOzone tool.
> > > 
> > > 
> > > 							  4	  8	 16	 32	 64	 128	 256	 512	1024	<== Record Length in KB
> > > With Single Dataserver:
> > > Read operation for file size 1 MB-		66415	66359	63630	70358	86223	70256	66047	66068	68489	<== IO kB/sec
> > > Write operation for file size 1 MB-		18827	16920	18846	17039	18896	17009	17173	19206	17947	<== IO kB/sec
> > > 
> > > With Two Dataservers :
> > > Read operation for file size 1 MB-		36882	381198	38150	38084	38749	33663	34398	37313	37847	<== IO kB/sec
> > > Write operation for file size 1 MB-		5461	4661	5586	4870	5227	4922	4214	5572	4658	<== IO kB/sec
> > > 
> > > 
> > > Can somebody tell me What could be the issue....
> > > --
> > > To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> > > the body of a message to majordomo@vger.kernel.org
> > > More majordomo info at  http://vger.kernel.org/majordomo-info.html


* RE: Performance Issue with multiple dataserver
  2011-05-19 13:12               ` J. Bruce Fields
@ 2011-05-19 13:14                 ` Taousif_Ansari
  2011-05-19 13:43                   ` J. Bruce Fields
  0 siblings, 1 reply; 26+ messages in thread
From: Taousif_Ansari @ 2011-05-19 13:14 UTC (permalink / raw)
  To: bfields; +Cc: linux-nfs

Then what should I follow, and what details are needed?

-----Original Message-----
From: linux-nfs-owner@vger.kernel.org [mailto:linux-nfs-owner@vger.kernel.org] On Behalf Of J. Bruce Fields
Sent: Thursday, May 19, 2011 6:43 PM
To: Ansari, Taousif - Dell Team
Cc: linux-nfs@vger.kernel.org
Subject: Re: Performance Issue with multiple dataserver

On Thu, May 19, 2011 at 06:09:21PM +0530, Taousif_Ansari@DELLTEAM.com wrote:
> I have followed the way given on http://wiki.linux-nfs.org/wiki/index.php/Configuring_pNFS/spnfsd .

Oh.  As noted there, spnfs is unmaintained.

And, in any case, we'd need many more details about your setup.

--b.

> 
> -Taousif
> 
> -----Original Message-----
> From: J. Bruce Fields [mailto:bfields@fieldses.org] 
> Sent: Thursday, May 19, 2011 5:20 PM
> To: Ansari, Taousif - Dell Team
> Cc: linux-nfs@vger.kernel.org
> Subject: Re: Performance Issue with multiple dataserver
> 
> On Thu, May 19, 2011 at 10:56:44AM +0530, Taousif_Ansari@DELLTEAM.com wrote:
> > Hi,
> > 
> > I am using on Server linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) and on client also linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) downloaded from http://git.linux-nfs.org/?p=bhalevy/linux-pnfs.git;a=summary on Fedora 14.
> 
> So you're using GFS2 on the server?  With what sort of storage?
> 
> --b.
> 
> > 
> > Extremely sorry for causing confusing .
> > -----Original Message-----
> > From: J. Bruce Fields [mailto:bfields@fieldses.org] 
> > Sent: Wednesday, May 18, 2011 9:43 PM
> > To: Ansari, Taousif - Dell Team
> > Cc: linux-nfs@vger.kernel.org
> > Subject: Re: Performance Issue with multiple dataserver
> > 
> > You sent this message as a reply to an unrelated message, which is
> > confusing to those of us with threaded mail readers.
> > 
> > On Wed, May 18, 2011 at 05:24:45PM +0530, Taousif_Ansari@DELLTEAM.com wrote:
> > > I have done pNFS setup with single Dataserver and Two Dataserver and ran the IOzone tool on both, I found that the performance with multiple dataservers is less than the performance with single dataservers.
> > 
> > What are you using as the server, and what as the client?
> > 
> > --b.
> > 
> > > 
> > > Here are some numbers, which were captured by the IOzone tool.
> > > 
> > > 
> > > 							  4	  8	 16	 32	 64	 128	 256	 512	1024	<== Record Length in KB
> > > With Single Dataserver:
> > > Read operation for file size 1 MB-		66415	66359	63630	70358	86223	70256	66047	66068	68489	<== IO kB/sec
> > > Write operation for file size 1 MB-		18827	16920	18846	17039	18896	17009	17173	19206	17947	<== IO kB/sec
> > > 
> > > With Two Dataservers :
> > > Read operation for file size 1 MB-		36882	381198	38150	38084	38749	33663	34398	37313	37847	<== IO kB/sec
> > > Write operation for file size 1 MB-		5461	4661	5586	4870	5227	4922	4214	5572	4658	<== IO kB/sec
> > > 
> > > 
> > > Can somebody tell me What could be the issue....
> > > --
> > > To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> > > the body of a message to majordomo@vger.kernel.org
> > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: Performance Issue with multiple dataserver
  2011-05-19 13:14                 ` Taousif_Ansari
@ 2011-05-19 13:43                   ` J. Bruce Fields
  2011-05-19 14:09                     ` Taousif_Ansari-G5Y5guI6XLZWk0Htik3J/w
  0 siblings, 1 reply; 26+ messages in thread
From: J. Bruce Fields @ 2011-05-19 13:43 UTC (permalink / raw)
  To: Taousif_Ansari; +Cc: linux-nfs

On Thu, May 19, 2011 at 06:44:59PM +0530, Taousif_Ansari@DELLTEAM.com wrote:
> Then what should I follow, and what details are needed....

There isn't really any supported server-side pNFS.

The closest is the GFS2-based code, for which you need to install
Benny's latest tree, configure a shared block device, create a GFS2
filesystem on it, mount it across all DS's and the MDS, and export it
from all of them--but I don't believe anyone has written step-by-step
instructions for that.

--b.

> 
> -----Original Message-----
> From: linux-nfs-owner@vger.kernel.org [mailto:linux-nfs-owner@vger.kernel.org] On Behalf Of J. Bruce Fields
> Sent: Thursday, May 19, 2011 6:43 PM
> To: Ansari, Taousif - Dell Team
> Cc: linux-nfs@vger.kernel.org
> Subject: Re: Performance Issue with multiple dataserver
> 
> On Thu, May 19, 2011 at 06:09:21PM +0530, Taousif_Ansari@DELLTEAM.com wrote:
> > I have followed the way given on http://wiki.linux-nfs.org/wiki/index.php/Configuring_pNFS/spnfsd .
> 
> Oh.  As noted there, spnfs is unmaintained.
> 
> And, in any case, we'd need many more details about your setup.
> 
> --b.
> 
> > 
> > -Taousif
> > 
> > -----Original Message-----
> > From: J. Bruce Fields [mailto:bfields@fieldses.org] 
> > Sent: Thursday, May 19, 2011 5:20 PM
> > To: Ansari, Taousif - Dell Team
> > Cc: linux-nfs@vger.kernel.org
> > Subject: Re: Performance Issue with multiple dataserver
> > 
> > On Thu, May 19, 2011 at 10:56:44AM +0530, Taousif_Ansari@DELLTEAM.com wrote:
> > > Hi,
> > > 
> > > I am using on Server linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) and on client also linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) downloaded from http://git.linux-nfs.org/?p=bhalevy/linux-pnfs.git;a=summary on Fedora 14.
> > 
> > So you're using GFS2 on the server?  With what sort of storage?
> > 
> > --b.
> > 
> > > 
> > > Extremely sorry for causing confusing .
> > > -----Original Message-----
> > > From: J. Bruce Fields [mailto:bfields@fieldses.org] 
> > > Sent: Wednesday, May 18, 2011 9:43 PM
> > > To: Ansari, Taousif - Dell Team
> > > Cc: linux-nfs@vger.kernel.org
> > > Subject: Re: Performance Issue with multiple dataserver
> > > 
> > > You sent this message as a reply to an unrelated message, which is
> > > confusing to those of us with threaded mail readers.
> > > 
> > > On Wed, May 18, 2011 at 05:24:45PM +0530, Taousif_Ansari@DELLTEAM.com wrote:
> > > > I have done pNFS setup with single Dataserver and Two Dataserver and ran the IOzone tool on both, I found that the performance with multiple dataservers is less than the performance with single dataservers.
> > > 
> > > What are you using as the server, and what as the client?
> > > 
> > > --b.
> > > 
> > > > 
> > > > Here are some numbers, which were captured by the IOzone tool.
> > > > 
> > > > 
> > > > 							  4	  8	 16	 32	 64	 128	 256	 512	1024	<== Record Length in KB
> > > > With Single Dataserver:
> > > > Read operation for file size 1 MB-		66415	66359	63630	70358	86223	70256	66047	66068	68489	<== IO kB/sec
> > > > Write operation for file size 1 MB-		18827	16920	18846	17039	18896	17009	17173	19206	17947	<== IO kB/sec
> > > > 
> > > > With Two Dataservers :
> > > > Read operation for file size 1 MB-		36882	381198	38150	38084	38749	33663	34398	37313	37847	<== IO kB/sec
> > > > Write operation for file size 1 MB-		5461	4661	5586	4870	5227	4922	4214	5572	4658	<== IO kB/sec
> > > > 
> > > > 
> > > > Can somebody tell me What could be the issue....
> > > > --
> > > > To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> > > > the body of a message to majordomo@vger.kernel.org
> > > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


* RE: Performance Issue with multiple dataserver
  2011-05-19 13:43                   ` J. Bruce Fields
@ 2011-05-19 14:09                     ` Taousif_Ansari-G5Y5guI6XLZWk0Htik3J/w
  2011-05-19 14:37                       ` Shyam_Iyer
  0 siblings, 1 reply; 26+ messages in thread
From: Taousif_Ansari-G5Y5guI6XLZWk0Htik3J/w @ 2011-05-19 14:09 UTC (permalink / raw)
  To: bfields; +Cc: linux-nfs

Can you please elaborate on the GFS2 setup a bit more?

-----Original Message-----
From: J. Bruce Fields [mailto:bfields@fieldses.org] 
Sent: Thursday, May 19, 2011 7:14 PM
To: Ansari, Taousif - Dell Team
Cc: linux-nfs@vger.kernel.org
Subject: Re: Performance Issue with multiple dataserver

On Thu, May 19, 2011 at 06:44:59PM +0530, Taousif_Ansari-G5Y5guI6XLZWk0Htik3J/w@public.gmane.org wrote:
> Then what should I follow, and what details are needed....

There isn't really any supported server-side pNFS.

The closest is the GFS2-based code, for which you need to install
Benny's latest tree, configure a shared block device, create a GFS2
filesystem on it, mount it across all DS's and the MDS, and export it
from all of them--but I don't believe anyone has written step-by-step
instructions for that.

--b.

> 
> -----Original Message-----
> From: linux-nfs-owner@vger.kernel.org [mailto:linux-nfs-owner@vger.kernel.org] On Behalf Of J. Bruce Fields
> Sent: Thursday, May 19, 2011 6:43 PM
> To: Ansari, Taousif - Dell Team
> Cc: linux-nfs@vger.kernel.org
> Subject: Re: Performance Issue with multiple dataserver
> 
> On Thu, May 19, 2011 at 06:09:21PM +0530, Taousif_Ansari-G5Y5guI6XLZWk0Htik3J/w@public.gmane.org wrote:
> > I have followed the way given on http://wiki.linux-nfs.org/wiki/index.php/Configuring_pNFS/spnfsd .
> 
> Oh.  As noted there, spnfs is unmaintained.
> 
> And, in any case, we'd need many more details about your setup.
> 
> --b.
> 
> > 
> > -Taousif
> > 
> > -----Original Message-----
> > From: J. Bruce Fields [mailto:bfields@fieldses.org] 
> > Sent: Thursday, May 19, 2011 5:20 PM
> > To: Ansari, Taousif - Dell Team
> > Cc: linux-nfs@vger.kernel.org
> > Subject: Re: Performance Issue with multiple dataserver
> > 
> > On Thu, May 19, 2011 at 10:56:44AM +0530, Taousif_Ansari-G5Y5guI6XLZWk0Htik3J/w@public.gmane.org wrote:
> > > Hi,
> > > 
> > > I am using on Server linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) and on client also linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) downloaded from http://git.linux-nfs.org/?p=bhalevy/linux-pnfs.git;a=summary on Fedora 14.
> > 
> > So you're using GFS2 on the server?  With what sort of storage?
> > 
> > --b.
> > 
> > > 
> > > Extremely sorry for causing confusing .
> > > -----Original Message-----
> > > From: J. Bruce Fields [mailto:bfields@fieldses.org] 
> > > Sent: Wednesday, May 18, 2011 9:43 PM
> > > To: Ansari, Taousif - Dell Team
> > > Cc: linux-nfs@vger.kernel.org
> > > Subject: Re: Performance Issue with multiple dataserver
> > > 
> > > You sent this message as a reply to an unrelated message, which is
> > > confusing to those of us with threaded mail readers.
> > > 
> > > On Wed, May 18, 2011 at 05:24:45PM +0530, Taousif_Ansari-G5Y5guI6XLZWk0Htik3J/w@public.gmane.org wrote:
> > > > I have done pNFS setup with single Dataserver and Two Dataserver and ran the IOzone tool on both, I found that the performance with multiple dataservers is less than the performance with single dataservers.
> > > 
> > > What are you using as the server, and what as the client?
> > > 
> > > --b.
> > > 
> > > > 
> > > > Here are some numbers, which were captured by the IOzone tool.
> > > > 
> > > > 
> > > > 							  4	  8	 16	 32	 64	 128	 256	 512	1024	<== Record Length in KB
> > > > With Single Dataserver:
> > > > Read operation for file size 1 MB-		66415	66359	63630	70358	86223	70256	66047	66068	68489	<== IO kB/sec
> > > > Write operation for file size 1 MB-		18827	16920	18846	17039	18896	17009	17173	19206	17947	<== IO kB/sec
> > > > 
> > > > With Two Dataservers :
> > > > Read operation for file size 1 MB-		36882	381198	38150	38084	38749	33663	34398	37313	37847	<== IO kB/sec
> > > > Write operation for file size 1 MB-		5461	4661	5586	4870	5227	4922	4214	5572	4658	<== IO kB/sec
> > > > 
> > > > 
> > > > Can somebody tell me What could be the issue....
> > > > --
> > > > To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> > > > the body of a message to majordomo@vger.kernel.org
> > > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* RE: Performance Issue with multiple dataserver
  2011-05-19 14:09                     ` Taousif_Ansari-G5Y5guI6XLZWk0Htik3J/w
@ 2011-05-19 14:37                       ` Shyam_Iyer
  2011-05-24 11:39                         ` Taousif_Ansari
  0 siblings, 1 reply; 26+ messages in thread
From: Shyam_Iyer @ 2011-05-19 14:37 UTC (permalink / raw)
  To: Taousif_Ansari, bfields; +Cc: linux-nfs



> -----Original Message-----
> From: linux-nfs-owner@vger.kernel.org [mailto:linux-nfs-
> owner@vger.kernel.org] On Behalf Of Ansari, Taousif - Dell Team
> 
> Can you please elaborate GFS2-setup a bit more...


I guess Bruce is saying the step-by-step procedure is not written up...

Create a Red Hat cluster using the shared block storage (iSCSI in your case, I guess). You can find documentation on creating a RH cluster in many places.

All the MDSs and the DSs need to be part of the cluster.

Format GFS2 on the shared iSCSI storage.

Mount the GFS2 formatted iSCSI storage on all the MDSs and DSs and export them via NFS. Use Benny's tree for NFS.

The GFS2 cluster backend is your glue to scale the MDSes and DSes.
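
The steps above can be sketched as a handful of commands (a rough, hedged sketch only: the cluster name "mycluster", filesystem name "pnfsdata", shared device /dev/sdb, mount point /export, and journal count are all invented for illustration, and a working cluster configuration with fencing is a prerequisite):

```shell
# Rough sketch of the GFS2-over-iSCSI steps above -- names and device
# paths are assumptions, not taken from this thread.
# Run the mkfs once, on one node (3 journals: one each for MDS + 2 DSs):
mkfs.gfs2 -p lock_dlm -t mycluster:pnfsdata -j 3 /dev/sdb

# On every MDS and DS node (cluster services must already be running):
mkdir -p /export
mount -t gfs2 /dev/sdb /export

# Then export the mounted filesystem over NFS from each node, e.g.:
exportfs -o rw,async '*:/export'
```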

> 
> -----Original Message-----
> From: J. Bruce Fields [mailto:bfields@fieldses.org]
> Sent: Thursday, May 19, 2011 7:14 PM
> To: Ansari, Taousif - Dell Team
> Cc: linux-nfs@vger.kernel.org
> Subject: Re: Performance Issue with multiple dataserver
> 
> On Thu, May 19, 2011 at 06:44:59PM +0530, Taousif_Ansari@DELLTEAM.com
> wrote:
> > Then what should I follow, and what details are needed....
> 
> There isn't really any supported server-side pNFS.
> 
> The closest is the GFS2-based code, for which you need to install
> Benny's latest tree, configure a shared block device, create a GFS2
> filesystem on it, mount it across all DS's and the MDS, and export it
> from all of them--but I don't believe anyone has written step-by-step
> instructions for that.
> 
> --b.
> 
> >
> > -----Original Message-----
> > From: linux-nfs-owner@vger.kernel.org [mailto:linux-nfs-
> owner@vger.kernel.org] On Behalf Of J. Bruce Fields
> > Sent: Thursday, May 19, 2011 6:43 PM
> > To: Ansari, Taousif - Dell Team
> > Cc: linux-nfs@vger.kernel.org
> > Subject: Re: Performance Issue with multiple dataserver
> >
> > On Thu, May 19, 2011 at 06:09:21PM +0530, Taousif_Ansari@DELLTEAM.com
> wrote:
> > > I have followed the way given on http://wiki.linux-
> nfs.org/wiki/index.php/Configuring_pNFS/spnfsd .
> >
> > Oh.  As noted there, spnfs is unmaintained.
> >
> > And, in any case, we'd need many more details about your setup.
> >
> > --b.
> >
> > >
> > > -Taousif
> > >
> > > -----Original Message-----
> > > From: J. Bruce Fields [mailto:bfields@fieldses.org]
> > > Sent: Thursday, May 19, 2011 5:20 PM
> > > To: Ansari, Taousif - Dell Team
> > > Cc: linux-nfs@vger.kernel.org
> > > Subject: Re: Performance Issue with multiple dataserver
> > >
> > > On Thu, May 19, 2011 at 10:56:44AM +0530,
> Taousif_Ansari@DELLTEAM.com wrote:
> > > > Hi,
> > > >
> > > > I am using on Server linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar)
> and on client also linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) downloaded
> from http://git.linux-nfs.org/?p=bhalevy/linux-pnfs.git;a=summary on
> Fedora 14.
> > >
> > > So you're using GFS2 on the server?  With what sort of storage?
> > >
> > > --b.
> > >
> > > >
> > > > Extremely sorry for causing confusing .
> > > > -----Original Message-----
> > > > From: J. Bruce Fields [mailto:bfields@fieldses.org]
> > > > Sent: Wednesday, May 18, 2011 9:43 PM
> > > > To: Ansari, Taousif - Dell Team
> > > > Cc: linux-nfs@vger.kernel.org
> > > > Subject: Re: Performance Issue with multiple dataserver
> > > >
> > > > You sent this message as a reply to an unrelated message, which
> is
> > > > confusing to those of us with threaded mail readers.
> > > >
> > > > On Wed, May 18, 2011 at 05:24:45PM +0530,
> Taousif_Ansari@DELLTEAM.com wrote:
> > > > > I have done pNFS setup with single Dataserver and Two
> Dataserver and ran the IOzone tool on both, I found that the
> performance with multiple dataservers is less than the performance with
> single dataservers.
> > > >
> > > > What are you using as the server, and what as the client?
> > > >
> > > > --b.
> > > >
> > > > >
> > > > > Here are some numbers, which were captured by the IOzone tool.
> > > > >
> > > > >
> > > > > 							  4	  8	 16	 32	 64	 128	 256	 512	1024	<== Record Length in KB
> > > > > With Single Dataserver:
> > > > > Read operation for file size 1 MB-		66415	66359	63630	70358	86223	70256	66047	66068	68489	<== IO kB/sec
> > > > > Write operation for file size 1 MB-		18827	16920	18846	17039	18896	17009	17173	19206	17947	<== IO kB/sec
> > > > >
> > > > > With Two Dataservers :
> > > > > Read operation for file size 1 MB-		36882	381198	38150	38084	38749	33663	34398	37313	37847	<== IO kB/sec
> > > > > Write operation for file size 1 MB-		5461	4661	5586	4870	5227	4922	4214	5572	4658	<== IO kB/sec
> > > > >
> > > > >
> > > > > Can somebody tell me What could be the issue....
> > > > > --
> > > > > To unsubscribe from this list: send the line "unsubscribe
> linux-nfs" in
> > > > > the body of a message to majordomo@vger.kernel.org
> > > > > More majordomo info at  http://vger.kernel.org/majordomo-
> info.html
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-nfs"
> in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: How to control the order of different export options for different client formats?
  2011-05-18 16:20     ` How to control the order of different export options for different client formats? J. Bruce Fields
@ 2011-05-20 13:38       ` James Pearson
  2011-05-20 16:41         ` J. Bruce Fields
  0 siblings, 1 reply; 26+ messages in thread
From: James Pearson @ 2011-05-20 13:38 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: linux-nfs

J. Bruce Fields wrote:
>>Having a priority option would be a very good idea - and may be in
>>the meantime the exports man page should be updated with info about
>>the current priority ordering?
> 
> 
> Sounds good.  Could you send in a patch?

Here's an attempt - based on the info from Max Matveev <makc@redhat.com> 
earlier in this thread

James Pearson

--- exports.man.dist    2010-09-28 13:24:16.000000000 +0100
+++ exports.man 2011-05-20 14:29:45.555314605 +0100
@@ -92,6 +92,11 @@
  '''.B \-\-public\-root
  '''option. Multiple specifications of a public root will be ignored.
  .PP
+.SS Matched Client Priories
+The order in which the different \fIMachine Name Formats\fR are matched
+against clients is in the priority order: \fIhostname, IP address or networks,
+wildcards, netgroup and anonymous\fR. Entries at the same level are matched
+in the same order in which they appear in \fI/etc/exports\fR.
  .SS RPCSEC_GSS security
  You may use the special strings "gss/krb5", "gss/krb5i", or "gss/krb5p"
  to restrict access to clients using rpcsec_gss security.  However, this
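
As a concrete illustration of the ordering the patch describes, consider a hypothetical /etc/exports (the hosts, subnet and netgroup are invented for the example):

```shell
# Priority when several entries match one client:
# hostname > IP network > wildcard > netgroup > anonymous.
/export  nas1.example.com(rw,no_root_squash)  172.16.0.0/20(rw,async)  @backup(rw,no_root_squash,async)  *(ro)
#
# A host that is both in the @backup netgroup and in 172.16.0.0/20 gets
# the subnet's options (i.e. with root_squash) -- the netgroup entry never wins.
```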




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: How to control the order of different export options for different client formats?
  2011-05-20 13:38       ` James Pearson
@ 2011-05-20 16:41         ` J. Bruce Fields
  2011-06-02 13:37           ` James Pearson
  0 siblings, 1 reply; 26+ messages in thread
From: J. Bruce Fields @ 2011-05-20 16:41 UTC (permalink / raw)
  To: James Pearson; +Cc: linux-nfs

On Fri, May 20, 2011 at 02:38:14PM +0100, James Pearson wrote:
> J. Bruce Fields wrote:
> >>Having a priority option would be a very good idea - and may be in
> >>the meantime the exports man page should be updated with info about
> >>the current priority ordering?
> >
> >
> >Sounds good.  Could you send in a patch?
> 
> Here's an attempt - based on the info from Max Matveev
> <makc@redhat.com> earlier in this thread

> 
> James Pearson
> 
> --- exports.man.dist    2010-09-28 13:24:16.000000000 +0100
> +++ exports.man 2011-05-20 14:29:45.555314605 +0100
> @@ -92,6 +92,11 @@
>  '''.B \-\-public\-root
>  '''option. Multiple specifications of a public root will be ignored.
>  .PP
> +.SS Matched Client Priories

Priorities?

But could we just combine this with the previous section--and make sure
the different possibilities are listed there in the correct priority
order to start off with.

That'd also mean adding a new subsection for the "anonymous" case.

--b.

> +The order in which the different \fIMachine Name Formats\fR are matched
> +against clients is in the priority order: \fIhostname, IP address or networks,
> +wildcards, netgroup and anonymous\fR. Entries at the same level are matched
> +in the same order in which they appear in \fI/etc/exports\fR.
>  .SS RPCSEC_GSS security
>  You may use the special strings "gss/krb5", "gss/krb5i", or "gss/krb5p"
>  to restrict access to clients using rpcsec_gss security.  However, this
> 
> 
> 

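The priority ordering under discussion can be illustrated with a small toy matcher (a hedged sketch only — this is not nfs-utils code, and the entries, host names and netgroup below are invented for the example):

```python
# Toy illustration (NOT nfs-utils source) of the documented match priority:
# hostname > network > wildcard > netgroup > anonymous.  Entries at the
# same priority level keep their /etc/exports order.
from fnmatch import fnmatch
from ipaddress import ip_address, ip_network

RANK = {"hostname": 0, "network": 1, "wildcard": 2, "netgroup": 3, "anonymous": 4}

def kind(entry):
    """Classify an export client spec in the way exports(5) describes."""
    if entry == "*":
        return "anonymous"
    if entry.startswith("@"):
        return "netgroup"
    if "/" in entry:
        return "network"
    if "*" in entry or "?" in entry:
        return "wildcard"
    return "hostname"

def matches(entry, host, addr, netgroups):
    k = kind(entry)
    if k == "anonymous":
        return True
    if k == "netgroup":
        return host in netgroups.get(entry[1:], ())
    if k == "network":
        return ip_address(addr) in ip_network(entry)
    if k == "wildcard":
        return fnmatch(host, entry)
    return entry == host

def pick(entries, host, addr, netgroups):
    """Return the options of the highest-priority matching entry, else None."""
    candidates = [(RANK[kind(e)], i, opts)
                  for i, (e, opts) in enumerate(entries)
                  if matches(e, host, addr, netgroups)]
    return min(candidates)[2] if candidates else None
```

With the entries from the /etc/exports at the top of this thread, the subnet entry outranks the @backup netgroup, matching the behaviour James observed.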
^ permalink raw reply	[flat|nested] 26+ messages in thread

* RE: Performance Issue with multiple dataserver
  2011-05-19 14:37                       ` Shyam_Iyer
@ 2011-05-24 11:39                         ` Taousif_Ansari
  2011-05-24 11:44                             ` [Cluster-devel] " Steven Whitehouse
  0 siblings, 1 reply; 26+ messages in thread
From: Taousif_Ansari @ 2011-05-24 11:39 UTC (permalink / raw)
  To: Shyam_Iyer, bfields; +Cc: linux-nfs, Ashokan_Vellimalai

Hi Bruce, Shyam

 As mentioned here http://wiki.linux-nfs.org/wiki/index.php/PNFS_server_projects, gfs2 is also having issues (crashes, performance), so instead of going for gfs2 can we debug spNFS itself to get high performance?


-Taousif

-----Original Message-----
From: Iyer, Shyam 
Sent: Thursday, May 19, 2011 8:08 PM
To: Ansari, Taousif - Dell Team; bfields@fieldses.org
Cc: linux-nfs@vger.kernel.org
Subject: RE: Performance Issue with multiple dataserver



> -----Original Message-----
> From: linux-nfs-owner@vger.kernel.org [mailto:linux-nfs-
> owner@vger.kernel.org] On Behalf Of Ansari, Taousif - Dell Team
> 
> Can you please elaborate GFS2-setup a bit more...


I guess Bruce is saying the step-by-step procedure is not written up...

Create a Redhat cluster using the shared block storage(iSCSI in your case I guess). You would get documentation on creating a RH cluster in many places..

All the MDSs and the DSs need to be part of the cluster.

Format GFS2 on the shared iSCSI storage.

Mount the GFS2 formatted iSCSI storage on all the MDSs and DSs and export them via NFS. Use Benny's tree for NFS.

The GFS2 cluster backend is your glue to scale the MDSes and DSes.

> 
> -----Original Message-----
> From: J. Bruce Fields [mailto:bfields@fieldses.org]
> Sent: Thursday, May 19, 2011 7:14 PM
> To: Ansari, Taousif - Dell Team
> Cc: linux-nfs@vger.kernel.org
> Subject: Re: Performance Issue with multiple dataserver
> 
> On Thu, May 19, 2011 at 06:44:59PM +0530, Taousif_Ansari@DELLTEAM.com
> wrote:
> > Then what should I follow, and what details are needed....
> 
> There isn't really any supported server-side pNFS.
> 
> The closest is the GFS2-based code, for which you need to install
> Benny's latest tree, configure a shared block device, create a GFS2
> filesystem on it, mount it across all DS's and the MDS, and export it
> from all of them--but I don't believe anyone has written step-by-step
> instructions for that.
> 
> --b.
> 
> >
> > -----Original Message-----
> > From: linux-nfs-owner@vger.kernel.org [mailto:linux-nfs-
> owner@vger.kernel.org] On Behalf Of J. Bruce Fields
> > Sent: Thursday, May 19, 2011 6:43 PM
> > To: Ansari, Taousif - Dell Team
> > Cc: linux-nfs@vger.kernel.org
> > Subject: Re: Performance Issue with multiple dataserver
> >
> > On Thu, May 19, 2011 at 06:09:21PM +0530, Taousif_Ansari@DELLTEAM.com
> wrote:
> > > I have followed the way given on http://wiki.linux-
> nfs.org/wiki/index.php/Configuring_pNFS/spnfsd .
> >
> > Oh.  As noted there, spnfs is unmaintained.
> >
> > And, in any case, we'd need many more details about your setup.
> >
> > --b.
> >
> > >
> > > -Taousif
> > >
> > > -----Original Message-----
> > > From: J. Bruce Fields [mailto:bfields@fieldses.org]
> > > Sent: Thursday, May 19, 2011 5:20 PM
> > > To: Ansari, Taousif - Dell Team
> > > Cc: linux-nfs@vger.kernel.org
> > > Subject: Re: Performance Issue with multiple dataserver
> > >
> > > On Thu, May 19, 2011 at 10:56:44AM +0530,
> Taousif_Ansari@DELLTEAM.com wrote:
> > > > Hi,
> > > >
> > > > I am using on Server linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar)
> and on client also linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) downloaded
> from http://git.linux-nfs.org/?p=bhalevy/linux-pnfs.git;a=summary on
> Fedora 14.
> > >
> > > So you're using GFS2 on the server?  With what sort of storage?
> > >
> > > --b.
> > >
> > > >
> > > > Extremely sorry for causing confusing .
> > > > -----Original Message-----
> > > > From: J. Bruce Fields [mailto:bfields@fieldses.org]
> > > > Sent: Wednesday, May 18, 2011 9:43 PM
> > > > To: Ansari, Taousif - Dell Team
> > > > Cc: linux-nfs@vger.kernel.org
> > > > Subject: Re: Performance Issue with multiple dataserver
> > > >
> > > > You sent this message as a reply to an unrelated message, which
> is
> > > > confusing to those of us with threaded mail readers.
> > > >
> > > > On Wed, May 18, 2011 at 05:24:45PM +0530,
> Taousif_Ansari@DELLTEAM.com wrote:
> > > > > I have done pNFS setup with single Dataserver and Two
> Dataserver and ran the IOzone tool on both, I found that the
> performance with multiple dataservers is less than the performance with
> single dataservers.
> > > >
> > > > What are you using as the server, and what as the client?
> > > >
> > > > --b.
> > > >
> > > > >
> > > > > Here are some numbers, which were captured by the IOzone tool.
> > > > >
> > > > >
> > > > > 							  4	  8	 16	 32	 64	 128	 256	 512	1024	<== Record Length in KB
> > > > > With Single Dataserver:
> > > > > Read operation for file size 1 MB-		66415	66359	63630	70358	86223	70256	66047	66068	68489	<== IO kB/sec
> > > > > Write operation for file size 1 MB-		18827	16920	18846	17039	18896	17009	17173	19206	17947	<== IO kB/sec
> > > > >
> > > > > With Two Dataservers :
> > > > > Read operation for file size 1 MB-		36882	381198	38150	38084	38749	33663	34398	37313	37847	<== IO kB/sec
> > > > > Write operation for file size 1 MB-		5461	4661	5586	4870	5227	4922	4214	5572	4658	<== IO kB/sec
> > > > >
> > > > >
> > > > > Can somebody tell me What could be the issue....
> > > > > --
> > > > > To unsubscribe from this list: send the line "unsubscribe
> linux-nfs" in
> > > > > the body of a message to majordomo@vger.kernel.org
> > > > > More majordomo info at  http://vger.kernel.org/majordomo-
> info.html
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-nfs"
> in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* RE: Performance Issue with multiple dataserver
  2011-05-24 11:39                         ` Taousif_Ansari
@ 2011-05-24 11:44                             ` Steven Whitehouse
  0 siblings, 0 replies; 26+ messages in thread
From: Steven Whitehouse @ 2011-05-24 11:44 UTC (permalink / raw)
  To: Taousif_Ansari
  Cc: Shyam_Iyer, bfields, linux-nfs, Ashokan_Vellimalai, cluster-devel

Hi,

On Tue, 2011-05-24 at 17:09 +0530, Taousif_Ansari@DELLTEAM.com wrote:
> Hi Bruce, Shyam
> 
>  As mentioned here http://wiki.linux-nfs.org/wiki/index.php/PNFS_server_projects gfs2 is also having issues(crashes, performance), so instead of going for gfs2 can we debug spNFS itself to get high performance?
> 
> 
> -Taousif
> 
As far as I'm aware that is historical information. If there are still
problems with GFS2, then please report them so we can work on them,

Steve.

> -----Original Message-----
> From: Iyer, Shyam 
> Sent: Thursday, May 19, 2011 8:08 PM
> To: Ansari, Taousif - Dell Team; bfields@fieldses.org
> Cc: linux-nfs@vger.kernel.org
> Subject: RE: Performance Issue with multiple dataserver
> 
> 
> 
> > -----Original Message-----
> > From: linux-nfs-owner@vger.kernel.org [mailto:linux-nfs-
> > owner@vger.kernel.org] On Behalf Of Ansari, Taousif - Dell Team
> > 
> > Can you please elaborate GFS2-setup a bit more...
> 
> 
> I guess Bruce is saying the step-by-step procedure is not written up...
> 
> Create a Redhat cluster using the shared block storage(iSCSI in your case I guess). You would get documentation on creating a RH cluster in many places..
> 
> All the MDSs and the DSs need to be part of the cluster.
> 
> Format GFS2 on the shared iSCSI storage.
> 
> Mount the GFS2 formatted iSCSI storage on all the MDSs and DSs and export them via NFS. Use Benny's tree for NFS.
> 
> The GFS2 cluster backend is your glue to scale the MDSes and DSes.
> 
> > 
> > -----Original Message-----
> > From: J. Bruce Fields [mailto:bfields@fieldses.org]
> > Sent: Thursday, May 19, 2011 7:14 PM
> > To: Ansari, Taousif - Dell Team
> > Cc: linux-nfs@vger.kernel.org
> > Subject: Re: Performance Issue with multiple dataserver
> > 
> > On Thu, May 19, 2011 at 06:44:59PM +0530, Taousif_Ansari@DELLTEAM.com
> > wrote:
> > > Then what should I follow, and what details are needed....
> > 
> > There isn't really any supported server-side pNFS.
> > 
> > The closest is the GFS2-based code, for which you need to install
> > Benny's latest tree, configure a shared block device, create a GFS2
> > filesystem on it, mount it across all DS's and the MDS, and export it
> > from all of them--but I don't believe anyone has written step-by-step
> > instructions for that.
> > 
> > --b.
> > 
> > >
> > > -----Original Message-----
> > > From: linux-nfs-owner@vger.kernel.org [mailto:linux-nfs-
> > owner@vger.kernel.org] On Behalf Of J. Bruce Fields
> > > Sent: Thursday, May 19, 2011 6:43 PM
> > > To: Ansari, Taousif - Dell Team
> > > Cc: linux-nfs@vger.kernel.org
> > > Subject: Re: Performance Issue with multiple dataserver
> > >
> > > On Thu, May 19, 2011 at 06:09:21PM +0530, Taousif_Ansari@DELLTEAM.com
> > wrote:
> > > > I have followed the way given on http://wiki.linux-
> > nfs.org/wiki/index.php/Configuring_pNFS/spnfsd .
> > >
> > > Oh.  As noted there, spnfs is unmaintained.
> > >
> > > And, in any case, we'd need many more details about your setup.
> > >
> > > --b.
> > >
> > > >
> > > > -Taousif
> > > >
> > > > -----Original Message-----
> > > > From: J. Bruce Fields [mailto:bfields@fieldses.org]
> > > > Sent: Thursday, May 19, 2011 5:20 PM
> > > > To: Ansari, Taousif - Dell Team
> > > > Cc: linux-nfs@vger.kernel.org
> > > > Subject: Re: Performance Issue with multiple dataserver
> > > >
> > > > On Thu, May 19, 2011 at 10:56:44AM +0530,
> > Taousif_Ansari@DELLTEAM.com wrote:
> > > > > Hi,
> > > > >
> > > > > I am using on Server linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar)
> > and on client also linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) downloaded
> > from http://git.linux-nfs.org/?p=bhalevy/linux-pnfs.git;a=summary on
> > Fedora 14.
> > > >
> > > > So you're using GFS2 on the server?  With what sort of storage?
> > > >
> > > > --b.
> > > >
> > > > >
> > > > > Extremely sorry for causing confusing .
> > > > > -----Original Message-----
> > > > > From: J. Bruce Fields [mailto:bfields@fieldses.org]
> > > > > Sent: Wednesday, May 18, 2011 9:43 PM
> > > > > To: Ansari, Taousif - Dell Team
> > > > > Cc: linux-nfs@vger.kernel.org
> > > > > Subject: Re: Performance Issue with multiple dataserver
> > > > >
> > > > > You sent this message as a reply to an unrelated message, which
> > is
> > > > > confusing to those of us with threaded mail readers.
> > > > >
> > > > > On Wed, May 18, 2011 at 05:24:45PM +0530,
> > Taousif_Ansari@DELLTEAM.com wrote:
> > > > > > I have done pNFS setup with single Dataserver and Two
> > Dataserver and ran the IOzone tool on both, I found that the
> > performance with multiple dataservers is less than the performance with
> > single dataservers.
> > > > >
> > > > > What are you using as the server, and what as the client?
> > > > >
> > > > > --b.
> > > > >
> > > > > >
> > > > > > Here are some numbers, which were captured by the IOzone tool.
> > > > > >
> > > > > >
> > > > > > 							  4	  8	 16	 32	 64	 128	 256	 512	1024	<== Record Length in KB
> > > > > > With Single Dataserver:
> > > > > > Read operation for file size 1 MB-		66415	66359	63630	70358	86223	70256	66047	66068	68489	<== IO kB/sec
> > > > > > Write operation for file size 1 MB-		18827	16920	18846	17039	18896	17009	17173	19206	17947	<== IO kB/sec
> > > > > >
> > > > > > With Two Dataservers :
> > > > > > Read operation for file size 1 MB-		36882	381198	38150	38084	38749	33663	34398	37313	37847	<== IO kB/sec
> > > > > > Write operation for file size 1 MB-		5461	4661	5586	4870	5227	4922	4214	5572	4658	<== IO kB/sec
> > > > > >
> > > > > >
> > > > > > Can somebody tell me What could be the issue....
> > > > > > --
> > > > > > To unsubscribe from this list: send the line "unsubscribe
> > linux-nfs" in
> > > > > > the body of a message to majordomo@vger.kernel.org
> > > > > > More majordomo info at  http://vger.kernel.org/majordomo-
> > info.html
> > > --
> > > To unsubscribe from this list: send the line "unsubscribe linux-nfs"
> > in
> > > the body of a message to majordomo@vger.kernel.org
> > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html



^ permalink raw reply	[flat|nested] 26+ messages in thread

* [Cluster-devel] Performance Issue with multiple dataserver
@ 2011-05-24 11:44                             ` Steven Whitehouse
  0 siblings, 0 replies; 26+ messages in thread
From: Steven Whitehouse @ 2011-05-24 11:44 UTC (permalink / raw)
  To: cluster-devel.redhat.com

Hi,

On Tue, 2011-05-24 at 17:09 +0530, Taousif_Ansari at DELLTEAM.com wrote:
> Hi Bruce, Shyam
> 
>  As mentioned here http://wiki.linux-nfs.org/wiki/index.php/PNFS_server_projects gfs2 is also having issues(crashes, performance), so instead of going for gfs2 can we debug spNFS itself to get high performance?
> 
> 
> -Taousif
> 
As far as I'm aware that is historical information. If there are still
problems with GFS2, then please report them so we can work on them,

Steve.

> -----Original Message-----
> From: Iyer, Shyam 
> Sent: Thursday, May 19, 2011 8:08 PM
> To: Ansari, Taousif - Dell Team; bfields at fieldses.org
> Cc: linux-nfs at vger.kernel.org
> Subject: RE: Performance Issue with multiple dataserver
> 
> 
> 
> > -----Original Message-----
> > From: linux-nfs-owner at vger.kernel.org [mailto:linux-nfs-
> > owner at vger.kernel.org] On Behalf Of Ansari, Taousif - Dell Team
> > 
> > Can you please elaborate GFS2-setup a bit more...
> 
> 
> I guess Bruce is saying the step-by-step procedure is not written up...
> 
> Create a Redhat cluster using the shared block storage(iSCSI in your case I guess). You would get documentation on creating a RH cluster in many places..
> 
> All the MDSs and the DSs need to be part of the cluster.
> 
> Format GFS2 on the shared iSCSI storage.
> 
> Mount the GFS2 formatted iSCSI storage on all the MDSs and DSs and export them via NFS. Use Benny's tree for NFS.
> 
> The GFS2 cluster backend is your glue to scale the MDSes and DSes.
> 
> > 
> > -----Original Message-----
> > From: J. Bruce Fields [mailto:bfields at fieldses.org]
> > Sent: Thursday, May 19, 2011 7:14 PM
> > To: Ansari, Taousif - Dell Team
> > Cc: linux-nfs at vger.kernel.org
> > Subject: Re: Performance Issue with multiple dataserver
> > 
> > On Thu, May 19, 2011 at 06:44:59PM +0530, Taousif_Ansari at DELLTEAM.com
> > wrote:
> > > Then what should I follow, and what details are needed....
> > 
> > There isn't really any supported server-side pNFS.
> > 
> > The closest is the GFS2-based code, for which you need to install
> > Benny's latest tree, configure a shared block device, create a GFS2
> > filesystem on it, mount it across all DS's and the MDS, and export it
> > from all of them--but I don't believe anyone has written step-by-step
> > instructions for that.
> > 
> > --b.
> > 
> > >
> > > -----Original Message-----
> > > From: linux-nfs-owner at vger.kernel.org [mailto:linux-nfs-
> > owner at vger.kernel.org] On Behalf Of J. Bruce Fields
> > > Sent: Thursday, May 19, 2011 6:43 PM
> > > To: Ansari, Taousif - Dell Team
> > > Cc: linux-nfs at vger.kernel.org
> > > Subject: Re: Performance Issue with multiple dataserver
> > >
> > > On Thu, May 19, 2011 at 06:09:21PM +0530, Taousif_Ansari at DELLTEAM.com
> > wrote:
> > > > I have followed the way given on http://wiki.linux-
> > nfs.org/wiki/index.php/Configuring_pNFS/spnfsd .
> > >
> > > Oh.  As noted there, spnfs is unmaintained.
> > >
> > > And, in any case, we'd need many more details about your setup.
> > >
> > > --b.
> > >
> > > >
> > > > -Taousif
> > > >
> > > > -----Original Message-----
> > > > From: J. Bruce Fields [mailto:bfields at fieldses.org]
> > > > Sent: Thursday, May 19, 2011 5:20 PM
> > > > To: Ansari, Taousif - Dell Team
> > > > Cc: linux-nfs at vger.kernel.org
> > > > Subject: Re: Performance Issue with multiple dataserver
> > > >
> > > > On Thu, May 19, 2011 at 10:56:44AM +0530,
> > Taousif_Ansari at DELLTEAM.com wrote:
> > > > > Hi,
> > > > >
> > > > > I am using on Server linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar)
> > and on client also linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) downloaded
> > from http://git.linux-nfs.org/?p=bhalevy/linux-pnfs.git;a=summary on
> > Fedora 14.
> > > >
> > > > So you're using GFS2 on the server?  With what sort of storage?
> > > >
> > > > --b.
> > > >
> > > > >
> > > > > Extremely sorry for causing confusing .
> > > > > -----Original Message-----
> > > > > From: J. Bruce Fields [mailto:bfields at fieldses.org]
> > > > > Sent: Wednesday, May 18, 2011 9:43 PM
> > > > > To: Ansari, Taousif - Dell Team
> > > > > Cc: linux-nfs at vger.kernel.org
> > > > > Subject: Re: Performance Issue with multiple dataserver
> > > > >
> > > > > You sent this message as a reply to an unrelated message, which
> > is
> > > > > confusing to those of us with threaded mail readers.
> > > > >
> > > > > On Wed, May 18, 2011 at 05:24:45PM +0530,
> > Taousif_Ansari at DELLTEAM.com wrote:
> > > > > > I have done pNFS setup with single Dataserver and Two
> > Dataserver and ran the IOzone tool on both, I found that the
> > performance with multiple dataservers is less than the performance with
> > single dataservers.
> > > > >
> > > > > What are you using as the server, and what as the client?
> > > > >
> > > > > --b.
> > > > >
> > > > > >
> > > > > > Here are some numbers, which were captured by the IOzone tool.
> > > > > >
> > > > > >
> > > > > > 							  4	  8	 16	 32	 64	 128	 256	 512	1024	<== Record Length in KB
> > > > > > With Single Dataserver:
> > > > > > Read operation for file size 1 MB-		66415	66359	63630	70358	86223	70256	66047	66068	68489	<== IO kB/sec
> > > > > > Write operation for file size 1 MB-		18827	16920	18846	17039	18896	17009	17173	19206	17947	<== IO kB/sec
> > > > > >
> > > > > > With Two Dataservers :
> > > > > > Read operation for file size 1 MB-		36882	381198	38150	38084	38749	33663	34398	37313	37847	<== IO kB/sec
> > > > > > Write operation for file size 1 MB-		5461	4661	5586	4870	5227	4922	4214	5572	4658	<== IO kB/sec
> > > > > >
> > > > > >
> > > > > > Can somebody tell me What could be the issue....
> > > > > > --
> > > > > > To unsubscribe from this list: send the line "unsubscribe
> > linux-nfs" in
> > > > > > the body of a message to majordomo at vger.kernel.org
> > > > > > More majordomo info at  http://vger.kernel.org/majordomo-
> > info.html
> > > --
> > > To unsubscribe from this list: send the line "unsubscribe linux-nfs"
> > in
> > > the body of a message to majordomo at vger.kernel.org
> > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> > the body of a message to majordomo at vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to majordomo at vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: Performance Issue with multiple dataserver
  2011-05-24 11:44                             ` [Cluster-devel] " Steven Whitehouse
@ 2011-05-24 13:17                               ` J. Bruce Fields
  -1 siblings, 0 replies; 26+ messages in thread
From: J. Bruce Fields @ 2011-05-24 13:17 UTC (permalink / raw)
  To: Steven Whitehouse
  Cc: Taousif_Ansari, Shyam_Iyer, linux-nfs, Ashokan_Vellimalai, cluster-devel

On Tue, May 24, 2011 at 12:44:19PM +0100, Steven Whitehouse wrote:
> Hi,
> 
> On Tue, 2011-05-24 at 17:09 +0530, Taousif_Ansari@DELLTEAM.com wrote:
> > Hi Bruce, Shyam
> > 
> >  As mentioned here http://wiki.linux-nfs.org/wiki/index.php/PNFS_server_projects, gfs2 also has known issues (crashes, performance), so instead of going with gfs2, can we debug spNFS itself to get better performance?
> > 
> > 
> > -Taousif
> > 
> As far as I'm aware, that is historical information. If there are still
> problems with GFS2, then please report them so we can work on them.

Well, they may be nfs problems rather than gfs2 problems.

In either case, neither pnfs/gfs2 nor spnfs is a particularly mature
project; you will find bugs and performance problems in both.

I think a cluster-filesystem-based approach probably has the better
chance of getting merged earlier, as it solves a number of thorny
problems (such as how to do IO through the MDS) for you.  But it all
depends on what your goals are.  Either will require significant
development work to get into acceptable shape.

--b.

^ permalink raw reply	[flat|nested] 26+ messages in thread


* Re: How to control the order of different export options for different client formats?
  2011-05-20 16:41         ` J. Bruce Fields
@ 2011-06-02 13:37           ` James Pearson
  2011-06-04 18:20             ` J. Bruce Fields
       [not found]             ` <4DE79236.1080808-5Ol4pYTxKWu0ML75eksnrtBPR1lH4CV8@public.gmane.org>
  0 siblings, 2 replies; 26+ messages in thread
From: James Pearson @ 2011-06-02 13:37 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: linux-nfs

[-- Attachment #1: Type: text/plain, Size: 321 bytes --]

J. Bruce Fields wrote:
> 
> But could we just combine this with the previous section--and make sure
> the different possibilities are listed there in the correct priority
> order to start off with.
> 
> That'd also mean adding a new subsection for the "anonymous" case.

OK - how about the attached patch?

James Pearson

[-- Attachment #2: exports.man.patch --]
[-- Type: text/plain, Size: 2804 bytes --]

--- exports.man.dist	2010-09-28 13:24:16.000000000 +0100
+++ exports.man	2011-06-02 14:19:26.434486000 +0100
@@ -48,19 +48,6 @@
 This is the most common format. You may specify a host either by an
 abbreviated name recognized be the resolver, the fully qualified domain
 name, or an IP address.
-.IP "netgroups
-NIS netgroups may be given as
-.IR @group .
-Only the host part of each
-netgroup members is consider in checking for membership.  Empty host
-parts or those containing a single dash (\-) are ignored.
-.IP "wildcards
-Machine names may contain the wildcard characters \fI*\fR and \fI?\fR.
-This can be used to make the \fIexports\fR file more compact; for instance,
-\fI*.cs.foo.edu\fR matches all hosts in the domain
-\fIcs.foo.edu\fR.  As these characters also match the dots in a domain
-name, the given pattern will also match all hosts within any subdomain
-of \fIcs.foo.edu\fR.
 .IP "IP networks
 You can also export directories to all hosts on an IP (sub-) network
 simultaneously. This is done by specifying an IP address and netmask pair
@@ -72,6 +59,25 @@
 to the network base IPv4 address results in identical subnetworks with 10 bits of
 host. Wildcard characters generally do not work on IP addresses, though they
 may work by accident when reverse DNS lookups fail.
+.IP "wildcards
+Machine names may contain the wildcard characters \fI*\fR and \fI?\fR.
+This can be used to make the \fIexports\fR file more compact; for instance,
+\fI*.cs.foo.edu\fR matches all hosts in the domain
+\fIcs.foo.edu\fR.  As these characters also match the dots in a domain
+name, the given pattern will also match all hosts within any subdomain
+of \fIcs.foo.edu\fR.
+.IP "netgroups
+NIS netgroups may be given as
+.IR @group .
+Only the host part of each
+netgroup member is considered when checking for membership.  Empty host
+parts or those containing a single dash (\-) are ignored.
+.IP "anonymous
+This is specified by a single
+.I *
+character (not to be confused with the
+.I wildcard
+entry above) and will match all clients.
 '''.TP
 '''.B =public
 '''This is a special ``hostname'' that identifies the given directory name
@@ -92,6 +98,12 @@
 '''.B \-\-public\-root
 '''option. Multiple specifications of a public root will be ignored.
 .PP
+If a client matches more than one of the specifications above, then
+the first match in the order listed above takes precedence, regardless
+of the order in which they appear on the export line. However, if a
+client matches more than one specification of the same type (e.g. two
+netgroups), then the first match in the order they appear on the export
+line takes precedence.
 .SS RPCSEC_GSS security
 You may use the special strings "gss/krb5", "gss/krb5i", or "gss/krb5p"
 to restrict access to clients using rpcsec_gss security.  However, this
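The matching order the patch documents can be sketched roughly as follows (a hypothetical illustration only: the function names and data structures are made up, and real mountd/exportfs code is quite different; the type priority and the within-type line-order tiebreak are the rules the patch describes):

```python
import fnmatch
import ipaddress

# Priority order documented by the patch: single host, IP network,
# wildcard, netgroup, anonymous.
PRIORITY = ["host", "network", "wildcard", "netgroup", "anonymous"]

def classify(spec):
    """Classify one client specification from an exports(5) line."""
    if spec == "*":
        return "anonymous"
    if spec.startswith("@"):
        return "netgroup"
    if "/" in spec:
        return "network"
    if "*" in spec or "?" in spec:
        return "wildcard"
    return "host"

def matches(spec, hostname, addr, netgroups):
    """Does this client (hostname/address) match one specification?"""
    kind = classify(spec)
    if kind == "anonymous":
        return True
    if kind == "netgroup":
        return hostname in netgroups.get(spec[1:], ())
    if kind == "network":
        return ipaddress.ip_address(addr) in ipaddress.ip_network(spec)
    if kind == "wildcard":
        return fnmatch.fnmatch(hostname, spec)
    return spec in (hostname, addr)

def best_match(specs, hostname, addr, netgroups):
    """First matching spec: ordered by type priority, then by position
    on the export line within the same type (min() keeps the first
    element among equal keys)."""
    candidates = [s for s in specs if matches(s, hostname, addr, netgroups)]
    return min(candidates,
               key=lambda s: PRIORITY.index(classify(s)),
               default=None)
```

With the export line from the start of this thread, a backup host inside the subnet picks up the subnet's options (`best_match(["172.16.0.0/20", "@backup"], "bak1", "172.16.1.5", ...)` returns the network entry), which is exactly the behaviour James reported and the patch now documents.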

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: How to control the order of different export options for different client formats?
  2011-06-02 13:37           ` James Pearson
@ 2011-06-04 18:20             ` J. Bruce Fields
  2011-06-06 12:14               ` James Pearson
       [not found]             ` <4DE79236.1080808-5Ol4pYTxKWu0ML75eksnrtBPR1lH4CV8@public.gmane.org>
  1 sibling, 1 reply; 26+ messages in thread
From: J. Bruce Fields @ 2011-06-04 18:20 UTC (permalink / raw)
  To: James Pearson; +Cc: linux-nfs, steved

On Thu, Jun 02, 2011 at 02:37:58PM +0100, James Pearson wrote:
> J. Bruce Fields wrote:
> >
> >But could we just combine this with the previous section--and make sure
> >the different possibilities are listed there in the correct priority
> >order to start off with.
> >
> >That'd also mean adding a new subsection for the "anonymous" case.
> 
> OK - how about the attached patch?

Looks good to me, thanks.

My one quibble is with the statement that "single host" "is the most
common format".  (I don't think we know that.)

Fix that, and just resend with a brief changelog comment and a

	Signed-off-by: James Pearson <etc...>

and steved should get around to applying it eventually....

--b.

> 
> James Pearson

> --- exports.man.dist	2010-09-28 13:24:16.000000000 +0100
> +++ exports.man	2011-06-02 14:19:26.434486000 +0100
> @@ -48,19 +48,6 @@
>  This is the most common format. You may specify a host either by an
>  abbreviated name recognized be the resolver, the fully qualified domain
>  name, or an IP address.
> -.IP "netgroups
> -NIS netgroups may be given as
> -.IR @group .
> -Only the host part of each
> -netgroup members is consider in checking for membership.  Empty host
> -parts or those containing a single dash (\-) are ignored.
> -.IP "wildcards
> -Machine names may contain the wildcard characters \fI*\fR and \fI?\fR.
> -This can be used to make the \fIexports\fR file more compact; for instance,
> -\fI*.cs.foo.edu\fR matches all hosts in the domain
> -\fIcs.foo.edu\fR.  As these characters also match the dots in a domain
> -name, the given pattern will also match all hosts within any subdomain
> -of \fIcs.foo.edu\fR.
>  .IP "IP networks
>  You can also export directories to all hosts on an IP (sub-) network
>  simultaneously. This is done by specifying an IP address and netmask pair
> @@ -72,6 +59,25 @@
>  to the network base IPv4 address results in identical subnetworks with 10 bits of
>  host. Wildcard characters generally do not work on IP addresses, though they
>  may work by accident when reverse DNS lookups fail.
> +.IP "wildcards
> +Machine names may contain the wildcard characters \fI*\fR and \fI?\fR.
> +This can be used to make the \fIexports\fR file more compact; for instance,
> +\fI*.cs.foo.edu\fR matches all hosts in the domain
> +\fIcs.foo.edu\fR.  As these characters also match the dots in a domain
> +name, the given pattern will also match all hosts within any subdomain
> +of \fIcs.foo.edu\fR.
> +.IP "netgroups
> +NIS netgroups may be given as
> +.IR @group .
> +Only the host part of each
> +netgroup member is considered when checking for membership.  Empty host
> +parts or those containing a single dash (\-) are ignored.
> +.IP "anonymous
> +This is specified by a single
> +.I *
> +character (not to be confused with the
> +.I wildcard
> +entry above) and will match all clients.
>  '''.TP
>  '''.B =public
>  '''This is a special ``hostname'' that identifies the given directory name
> @@ -92,6 +98,12 @@
>  '''.B \-\-public\-root
>  '''option. Multiple specifications of a public root will be ignored.
>  .PP
> +If a client matches more than one of the specifications above, then
> +the first match in the order listed above takes precedence, regardless
> +of the order in which they appear on the export line. However, if a
> +client matches more than one specification of the same type (e.g. two
> +netgroups), then the first match in the order they appear on the export
> +line takes precedence.
>  .SS RPCSEC_GSS security
>  You may use the special strings "gss/krb5", "gss/krb5i", or "gss/krb5p"
>  to restrict access to clients using rpcsec_gss security.  However, this
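The wildcard caveat in the quoted patch, that `*` and `?` also match the dots in a domain name, is the same shell-glob behaviour Python's stdlib `fnmatch` implements, so it makes for a quick demonstration (a sketch for illustration; the hostnames are made up):

```python
import fnmatch

# exports(5) wildcards behave like shell globs where '*' also matches
# the dots in a hostname, so a pattern written for one domain level
# also matches hosts in deeper subdomains.
pattern = "*.cs.foo.edu"

print(fnmatch.fnmatch("alpha.cs.foo.edu", pattern))      # True: host in the domain
print(fnmatch.fnmatch("node1.lab.cs.foo.edu", pattern))  # True: subdomain host matches too
print(fnmatch.fnmatch("alpha.cs.bar.edu", pattern))      # False: different domain
```

This is why the patch warns that `*.cs.foo.edu` matches all hosts within any subdomain of `cs.foo.edu`, not just direct members of that domain.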


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: How to control the order of different export options for different client formats?
  2011-06-04 18:20             ` J. Bruce Fields
@ 2011-06-06 12:14               ` James Pearson
  0 siblings, 0 replies; 26+ messages in thread
From: James Pearson @ 2011-06-06 12:14 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: linux-nfs, steved

J. Bruce Fields wrote:
> Looks good to me, thanks.
> 
> My one quibble is with the statement that "single host" "is the most
> common format".  (I don't think we know that.)
> 
> Fix that, and just resend with a brief changelog comment and a
> 
> 	Signed-off-by: James Pearson <etc...>
> 
> and steved should get around to applying it eventually....

The "This is the most common format" statement is in the existing 
exports man page - i.e. nothing to do with my patch ...

However, I'll remove that statement as well and submit the patch

James Pearson

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: How to control the order of different export options for different client formats?
       [not found]             ` <4DE79236.1080808-5Ol4pYTxKWu0ML75eksnrtBPR1lH4CV8@public.gmane.org>
@ 2011-06-07 20:33               ` Steve Dickson
  0 siblings, 0 replies; 26+ messages in thread
From: Steve Dickson @ 2011-06-07 20:33 UTC (permalink / raw)
  To: James Pearson; +Cc: J. Bruce Fields, linux-nfs



On 06/02/2011 09:37 AM, James Pearson wrote:
> J. Bruce Fields wrote:
>>
>> But could we just combine this with the previous section--and make sure
>> the different possibilities are listed there in the correct priority
>> order to start off with.
>>
>> That'd also mean adding a new subsection for the "anonymous" case.
> 
> OK - how about the attached patch?
> 
> James Pearson
Committed....

steved.

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2011-06-07 20:33 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-05-17 16:21 How to control the order of different export options for different client formats? James Pearson
2011-05-17 22:01 ` NeilBrown
2011-05-18 10:19   ` James Pearson
2011-05-18 11:54     ` Performance Issue with multiple dataserver Taousif_Ansari
2011-05-18 16:12       ` J. Bruce Fields
2011-05-19  5:26         ` Taousif_Ansari
2011-05-19 11:50           ` J. Bruce Fields
2011-05-19 12:39             ` Taousif_Ansari
2011-05-19 13:12               ` J. Bruce Fields
2011-05-19 13:14                 ` Taousif_Ansari
2011-05-19 13:43                   ` J. Bruce Fields
2011-05-19 14:09                     ` Taousif_Ansari-G5Y5guI6XLZWk0Htik3J/w
2011-05-19 14:37                       ` Shyam_Iyer
2011-05-24 11:39                         ` Taousif_Ansari
2011-05-24 11:44                           ` Steven Whitehouse
2011-05-24 11:44                             ` [Cluster-devel] " Steven Whitehouse
2011-05-24 13:17                             ` J. Bruce Fields
2011-05-24 13:17                               ` [Cluster-devel] " J. Bruce Fields
2011-05-18 16:20     ` How to control the order of different export options for different client formats? J. Bruce Fields
2011-05-20 13:38       ` James Pearson
2011-05-20 16:41         ` J. Bruce Fields
2011-06-02 13:37           ` James Pearson
2011-06-04 18:20             ` J. Bruce Fields
2011-06-06 12:14               ` James Pearson
     [not found]             ` <4DE79236.1080808-5Ol4pYTxKWu0ML75eksnrtBPR1lH4CV8@public.gmane.org>
2011-06-07 20:33               ` Steve Dickson
2011-05-18  0:46 ` Max Matveev

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.