linux-kernel.vger.kernel.org archive mirror
* Re: cifs large write performance improvements to Samba
@ 2004-12-13 16:56 Steve French
  2004-12-13 17:20 ` cliff white
  0 siblings, 1 reply; 11+ messages in thread
From: Steve French @ 2004-12-13 16:56 UTC (permalink / raw)
  To: linux-kernel

The current mainline (very recent 2.6.10-rc Linux tree) should be fine 
from a memory leak perspective.  No such leaks have been reported AFAIK 
against current cifs code, and certainly none that I have detected in 
heavy stress testing. 

I do need to add some additional filesystem tests to the regression test 
mix soon, and I have asked one of the guys here with a ppc64 box to 
run some more big-endian tests on it too.  The LTP has filesystem tests 
scattered across multiple directories, which I need to track down and 
set up scripts to automate (e.g. the sendfile tests are not in the same 
directory as the other tests in testcases/kernel/fs, nor is the very 
useful "connectathon nfs" posix filesystem test suite).

The oops (referenced in your post) does need to be fixed of course, but 
since the code that would cause it is disabled (and is only broken 
against certain servers, and is noted as broken in the TODO list, 
implying that it should not be turned on in /proc/fs/cifs) I was 
considering it lower priority than the other issues recently fixed which 
have been consuming my time.  Fixing/adding support for extended 
security is getting closer to the top of the priority list now though.  
If I can at least prevent the oops with a small change (even if it does 
not fully fix the spnego code) I will push that changeset in soon.  The 
userspace piece should be -- much -- easier to communicate with now that 
the kevents stuff is in.  Very high on the list as well is getting 
NTLMv2 tested and working, as many environments require it.  A slightly 
higher priority is figuring out the mysterious byte range lock failure 
which sometimes occurs on an unlock in locktest 7 (I noticed it starting 
after the nfs changes for vfs posix locking a few months ago, and have 
posted about it before: "Kernel panic - not syncing: Attempting to free 
lock with active blocklist").  Basically I need to figure out what is 
going on with this line in fs/locks.c:

	if (!list_empty(&fl->fl_block))
		panic("Attempting to free lock with active block list");


Since I am not adding anything to the fl_block list intentionally, I 
need to find out what causes items to be added to it (i.e. when the 
locks_insert_block and locks_delete_block calls are made, and why they 
sometimes happen differently in locktest 7 than in the other byte range 
lock testcases in which unlock obviously works fine).
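
For reference, the helpers that populate and drain fl_block look roughly 
like the following in 2.6-era fs/locks.c (paraphrased from memory, so 
details vary by release).  A waiter sleeps chained to its blocker, so a 
lock freed while its fl_block list is non-empty still has waiters 
pointing at it - which is exactly what the panic catches:

	/* Paraphrased from 2.6-era fs/locks.c; details vary by release */
	static void locks_insert_block(struct file_lock *blocker,
				       struct file_lock *waiter)
	{
		/* chain the waiter onto the list of locks blocked
		   on "blocker" */
		list_add_tail(&waiter->fl_block, &blocker->fl_block);
		waiter->fl_next = blocker;
		if (IS_POSIX(blocker))
			/* global list used for posix deadlock detection */
			list_add(&waiter->fl_link, &blocked_list);
	}

	static void __locks_delete_block(struct file_lock *waiter)
	{
		/* unhook the waiter; every waiter must be unhooked
		   before the blocker can be freed without the panic */
		list_del_init(&waiter->fl_block);
		list_del_init(&waiter->fl_link);
		waiter->fl_next = NULL;
	}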

On the issue of regressing back to smbfs :)  There are a few things 
which can be done that would help.

1) Post an updated summary of when smbfs is still needed for an 
occasional mount (there are only a couple of cases in which smbfs has 
to be used now, but they are important to some users - such as users who 
have to access an occasional old OS/2 or DOS server, or who need 
Kerberos), and add that chart to fs/cifs/README, the project page etc.
2) Give a public view of the status of testing - the raw data needs to 
be posted regularly as the kernel is updated (and against five or six 
different server types) so users can see what is broken in smbfs (and 
what posix issues are still being worked on in cifs, and any known 
problems).  smbfs fails about half of the filesystem tests that I have 
tried, due to stress issues, because the tests require better posix 
compliance, or because of smbfs stability problems.

  If only someone could roll all of the key fs tests into a set of 
scripts which could generate one regularly updated test status chart ... 
one for each of XFS, JFS, ext3, Reiser3, CIFS (against various servers, 
Samba versions etc.), NFSv2, NFSv3, NFSv4 (against various servers), and 
AFS.  That would be a lot of work - not to run, but the first time, in 
writing and setting up the scripts to launch the tests in the right 
order - since some failures may be expected (at least for the network 
filesystems) due to hard to implement features (missing fcntls, dnotify, 
get/setlease, differences in byte range lock semantics, lack of flock 
etc.), and also since the most sensible NFS, AFS and CIFS tests would 
involve more than one client (to test caching/oplock/token management 
semantics better), but no such fs tests AFAIK exist for Linux.



* Re: cifs large write performance improvements to Samba
  2004-12-13 16:56 cifs large write performance improvements to Samba Steve French
@ 2004-12-13 17:20 ` cliff white
  2004-12-13 18:34   ` Steve French
  0 siblings, 1 reply; 11+ messages in thread
From: cliff white @ 2004-12-13 17:20 UTC (permalink / raw)
  To: Steve French; +Cc: linux-kernel

On Mon, 13 Dec 2004 10:56:45 -0600
Steve French <smfrench@austin.rr.com> wrote:

>   If only someone could roll all of the key fs tests into a set of 
> scripts which could generate one regularly updated test status chart ... 
> one for each of XFS, JFS, ext3, Reiser3, CIFS (against various servers, 
> Samba versions etc.), NFSv2, NFSv3, NFSv4 (against various servers), and 
> AFS.  That would be a lot of work - not to run, but the first time, in 
> writing and setting up the scripts to launch the tests in the right 
> order - since some failures may be expected (at least for the network 
> filesystems) due to hard to implement features (missing fcntls, dnotify, 
> get/setlease, differences in byte range lock semantics, lack of flock 
> etc.), and also since the most sensible NFS, AFS and CIFS tests would 
> involve more than one client (to test caching/oplock/token management 
> semantics better), but no such fs tests AFAIK exist for Linux.

We (OSDL) would be very interested in this sort of testing.  We have 
some fs tests wrapped in scripts currently.
cliffw
OSDL


-- 
The church is near, but the road is icy.
The bar is far, but i will walk carefully. - Russian proverb


* Re: cifs large write performance improvements to Samba
  2004-12-13 17:20 ` cliff white
@ 2004-12-13 18:34   ` Steve French
  2004-12-13 18:43     ` Steve French
  0 siblings, 1 reply; 11+ messages in thread
From: Steve French @ 2004-12-13 18:34 UTC (permalink / raw)
  To: cliff white; +Cc: linux-kernel

cliff white wrote:

> We (OSDL) would be very interested in this sort of testing.  We have 
> some fs tests wrapped in scripts currently.

Generally what I wanted to see was:
1) at least one little endian and one big endian client to run the tests 
on (and for the network filesystems, at least one server).
2) execution of a similar set of tests on each of the filesystems (for 
our Samba purposes e.g. ext3, xfs and jfs are particularly important, 
plus cifs, and we might as well run on nfsv3 and nfsv4).  A minimal 
launcher for this mix is sketched after the list:
- fsx on each (an operation count of n=10000 should be sufficient)
- fsstress with at least 100 processes on each, with at least 2 loops, 
200 ops
- the "connectathon nfs" tests on each; the local filesystems should 
all be able to pass these
- for cifs to Windows servers, one of the "special" subcategory of 
tests has to be skipped (since Windows servers can not support the 
operation)
- for cifs, locktests 7 and 10 are expected to fail at this time
- dbench
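
A minimal launcher for that mix might look like this (a sketch only: the 
mount point and file names are hypothetical and the connectathon 
invocation is a placeholder; the fsx -N and fsstress -d/-p/-l/-n flags 
are the usual ones):

	/* Sketch of a launcher for the test mix above; the mount
	 * point, file names and cthon invocation are hypothetical */
	#include <stdio.h>
	#include <stdlib.h>

	static int run(const char *cmd)
	{
		int rc = system(cmd);
		printf("%-55s %s\n", cmd, rc == 0 ? "PASS" : "FAIL");
		return rc;
	}

	int main(void)
	{
		run("fsx -N 10000 /mnt/test/fsx_file");    /* 10000 ops */
		run("fsstress -d /mnt/test/stress -p 100 -l 2 -n 200");
		run("./runtests -b /mnt/test/cthon");      /* cthon basic */
		run("dbench 16");                          /* 16 clients */
		return 0;
	}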

There are others that can be run but they don't seem to broaden the 
coverage much.  What is very important to add into the mix - based on 
defects I have seen in various network and cluster filesystems over the 
past few years - is something similar to the following:
- Add a test which hits the three Linux specific fcntls (dnotify, set 
and get lease) - a minimal sketch follows this list
- Add a test which uses the O_DIRECT open flag (it could also simply use 
the mount flag for those filesystems which support that); the NFS guys 
made a trivial mod to fsx for this and ran it with fsx -W -R (to disable 
mmapping when running with direct i/o) - see the second sketch below
- Add a test which exercises flock to the mix
- Add a maximum and minimum path name and path component test
- Add a test of creating directories with a very large number of entries 
(the cthon "basic" subtests can be easily modified for this)
- Add a test which does sendfile with data integrity checking from one 
process, and a mix of normal and mmapped writes and reads from another 
set of processes
- Add a test of data integrity to the same network fs from multiple 
clients
- Add a test for stable nanosecond (or 100 nanosecond, which is good 
enough for the network case) timestamps (ext2 and ext3 will fail this 
since they round timestamps down to the second when inode metadata is 
written out)
- A better test for the various O_ flags and file modes (especially 
important to run from multiple clients)
- A better byte range locking functional test
- xattr testing (maximum and minimum sizes, illegal requests, bad user 
buffers etc.); there is an xattr testcase in the ltp but I have not 
analyzed it enough to see if it will do
- POSIX ACL testing (getfacl/setfacl)
- Trusted and Security attribute testing (to make sure the FS properly 
handles long attributes and/or values, short attributes and/or values, 
bad buffers etc.)
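
A minimal sketch of the first item - hitting dnotify and the lease 
fcntls - could look like the following (the file name is hypothetical, 
and a real testcase would fork a second process to actually trigger the 
notification and the lease break):

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		int dir_fd = open(".", O_RDONLY);  /* dir for dnotify */
		int file_fd = open("lease_testfile",
				   O_RDWR | O_CREAT, 0644);
		if (dir_fd < 0 || file_fd < 0) {
			perror("open");
			return 1;
		}

		/* dnotify: request notification of creates/modifies */
		if (fcntl(dir_fd, F_NOTIFY, DN_CREATE | DN_MODIFY) < 0)
			perror("F_NOTIFY");    /* may fail on network fs */

		/* leases: take a write lease (ours is the only open fd,
		   so this is permitted), then read it back */
		if (fcntl(file_fd, F_SETLEASE, F_WRLCK) < 0)
			perror("F_SETLEASE");
		else if (fcntl(file_fd, F_GETLEASE) != F_WRLCK)
			fprintf(stderr, "F_GETLEASE mismatch\n");

		fcntl(file_fd, F_SETLEASE, F_UNLCK);  /* drop the lease */
		close(file_fd);
		close(dir_fd);
		return 0;
	}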
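
And a sketch of the O_DIRECT item; the point is that direct i/o needs 
aligned buffers, sizes and offsets (512-byte alignment is assumed here - 
the real requirement depends on the fs and device - and the file name is 
hypothetical):

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		void *buf;
		int fd = open("odirect_testfile",
			      O_RDWR | O_CREAT | O_DIRECT, 0644);
		if (fd < 0) {
			perror("open(O_DIRECT)"); /* not all fs allow it */
			return 1;
		}
		/* O_DIRECT requires an aligned buffer */
		if (posix_memalign(&buf, 512, 4096) != 0) {
			fprintf(stderr, "posix_memalign failed\n");
			return 1;
		}
		memset(buf, 0xab, 4096);
		if (write(fd, buf, 4096) != 4096) /* aligned size/offset */
			perror("write");
		free(buf);
		close(fd);
		return 0;
	}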



* Re: cifs large write performance improvements to Samba
  2004-12-13 18:34   ` Steve French
@ 2004-12-13 18:43     ` Steve French
  2004-12-16 12:11       ` Marcelo Tosatti
  0 siblings, 1 reply; 11+ messages in thread
From: Steve French @ 2004-12-13 18:43 UTC (permalink / raw)
  To: Steve French; +Cc: cliff white, linux-kernel

cliff white wrote:

> We (OSDL) would be very interested in this sort of testing.  We have 
> some fs tests wrapped in scripts currently.
The other thing I forgot to mention ... we used to have a concept of 
"performance regression testing" (to make sure that we had not gotten a 
lot slower on the latest rc) - not just runs on every release candidate 
of a few complex benchmark tests (like SpecWeb or Netbench or some 
enterprise Java perf test), but the idea was to run on every rc an fs 
microbenchmark (more like iozone) to ensure that some small functional 
problem in an fs or mm subsystem was not causing a big, noticeable 
degradation in performance (large read or small read, or large write or 
small write, random or sequential etc.).  I have not seen anyone doing 
that on Linux in an automated fashion (e.g. running iozone automatically 
every time a new 2.6.x-rc appears, on half a dozen of the fs, simply to 
verify that things have not gotten drastically worse on a particular fs 
due to a bug or side effect of a global VFS change).



* Re: cifs large write performance improvements to Samba
  2004-12-13 18:43     ` Steve French
@ 2004-12-16 12:11       ` Marcelo Tosatti
  2004-12-16 16:39         ` automated filesystem testing for multiple Linux fs Steve French
  2004-12-16 18:58         ` cifs large write performance improvements to Samba Hans Reiser
  0 siblings, 2 replies; 11+ messages in thread
From: Marcelo Tosatti @ 2004-12-16 12:11 UTC (permalink / raw)
  To: Steve French; +Cc: cliff white, linux-kernel

On Mon, Dec 13, 2004 at 12:43:27PM -0600, Steve French wrote:
> The other thing I forgot to mention ... we used to have a concept of 
> "performance regression testing" (to make sure that we had not gotten a 
> lot slower on the latest rc) - not just runs on every release candidate 
> of a few complex benchmark tests (like SpecWeb or Netbench or some 
> enterprise Java perf test), but the idea was to run on every rc an fs 
> microbenchmark (more like iozone) to ensure that some small functional 
> problem in an fs or mm subsystem was not causing a big, noticeable 
> degradation in performance (large read or small read, or large write or 
> small write, random or sequential etc.).  I have not seen anyone doing 
> that on Linux in an automated fashion (e.g. running iozone automatically 
> every time a new 2.6.x-rc appears, on half a dozen of the fs, simply to 
> verify that things have not gotten drastically worse on a particular fs 
> due to a bug or side effect of a global VFS change).

Yes, we definitely need that.

The STP framework is already running different benchmarks on each new 
v2.6 -mm and -mainline release.  Take a look at 

http://www.osdl.org/lab_activities/kernel_testing/stp/search.lnk/search_test_requests

There are two things I consider important:

- We need a wide set of benchmarks to simulate a wide set of workloads.  
Having one microbenchmark limits the usefulness of the tests.  This is 
very important.

- We need to parse the results of such tests in a way that they are 
easily visualized.

The second item of course depends on the number of variations that are 
run.  For example, you mention several filesystems; that would increase 
the amount of results dramatically.

We also need the manpower to carefully interpret the results (which is 
not trivial).

I would like to see different variations of memory size and CPU (1-2-4-8 
CPU variation is already being done by STP); I have been asking Cliff 
for such functionality for the past few days.

One unfortunate thing is that the failure rate for STP is currently 
really high.  Cliff, the results I've received today and yesterday have 
roughly a 50% failure rate.  Is anyone investigating that?

But Steve, to sum up, the STP guys are working hard to make such 
IO/VM/FS performance tests automated and useful for the community.  We 
need to push'em harder! :) 



* Re: cifs large write performance improvements to Samba
  2004-12-16 18:58         ` cifs large write performance improvements to Samba Hans Reiser
@ 2004-12-16 16:30           ` Marcelo Tosatti
  0 siblings, 0 replies; 11+ messages in thread
From: Marcelo Tosatti @ 2004-12-16 16:30 UTC (permalink / raw)
  To: Hans Reiser; +Cc: Steve French, cliff white, linux-kernel

On Thu, Dec 16, 2004 at 10:58:07AM -0800, Hans Reiser wrote:
> Andrew Morton is saying that iozone does things real apps don't do, that 
> is, it dirties mmap'd pages enough to swamp the machine.
> 
> Do you guys agree or disagree with that?
> 
> Reiser4 needs iozone optimization work which we haven't bothered with yet.

I'm not really familiar with iozone's behaviour, sorry. 

Steve, Andrew?

PS: Yep, another important part (to me at least) of all this automated performance 
testing effort is a detailed understanding of the tests being performed.


* automated filesystem testing for multiple Linux fs
  2004-12-16 12:11       ` Marcelo Tosatti
@ 2004-12-16 16:39         ` Steve French
  2004-12-17  0:27           ` Michael Clark
  2004-12-16 18:58         ` cifs large write performance improvements to Samba Hans Reiser
  1 sibling, 1 reply; 11+ messages in thread
From: Steve French @ 2004-12-16 16:39 UTC (permalink / raw)
  To: Marcelo Tosatti, cliffw, linux-kernel

On Thu, 2004-12-16 at 06:11, Marcelo Tosatti wrote:
> you mention several filesystems; that would increase the amount of 
> results dramatically.

Selfishly, thinking about what I would like to see for Samba servers on 
Linux ... I could live with testing Samba well on a relatively small 
number of local filesystems, perhaps JFS or even XFS, but at least 
testing on one local filesystem and CIFS (and, to a lesser extent, NFS) 
is needed for us to feel more comfortable with Samba/Linux.  Since at 
present only XFS and JFS have the full combination of server features: 
better quotas, DMAPI, xattr support, ACL support and nanosecond file 
timestamps on disk (and only JFS has a case insensitive format option, 
which is useful for Samba benchmarking with Windows clients), and since 
those two perform quite well on the most quoted Netbench benchmark, 
those two local filesystems are of the most interest to me (at least 
until future versions of Reiser or ext4 ...).  For perf tests run from 
the Linux CIFS client to Samba/Linux, JFS on the server seems to perform 
best among the five major local filesystems so far.

The somewhat harder part of course is adding in CIFS and perhaps NFSv3 
and NFSv4 testing, since that is a little harder to script the setup 
for, and a few small pieces of a couple of the fs functional tests only 
run on a subset of the filesystems.  Part of the problem I run into is 
that LTP has lots of tests but the filesystem tests are, rather 
strangely, not all in the kernel/testcases/fs directory - they are 
spread across multiple directories, including an nfs one (for the posix 
connectathon fs test) and a networking directory (e.g. for sendfile).  
And try finding the ones that exercise flock, DNOTIFY, SETLEASE and 
GETLEASE, or that hit O_DIRECT or O_ASYNC or test O_SYNC - the ones that 
might be Linux specific or have slightly different Linux behavior, like 
those, are among the more interesting but are quite hard for me to find. 


> I would like to see different variations of memory size and CPU 
> (1-2-4-8 CPU variation is already being done by STP); I have been 
> asking Cliff for such functionality for the past few days.

That is useful, but we need to walk before we run and address the 
critical problems first.  I count at least four bad regressions in the 
two big network filesystems caused by side effects of other vfs changes 
- just this year.  We need many more basic functional tests and perf 
tests to prevent further regressions - e.g. to catch earlier, or avoid 
completely, cases like the mm writepage performance regression that hit 
NFS earlier this year, the nfs bug that caused the 2.6.8 -> 2.6.8.1 
regression, and the two vfs fcntl changes (dnotify and posix_lock_file) 
that regressed cifs.  Part of the problem is that AFAIK we don't have a 
good Linux fcntl test suite that is easy enough to run that everyone 
doing an fs runs it :) 

The other thing I noticed on the STP site is that the three types of 
tests are not easily distinguished there:
	1) filesystem functional tests (e.g. connectathon "basic" and 
	"special" subtests, fsx, and some of the ltp tests etc.)
vs.
	2) filesystem stress testing (fsstress etc.)
vs.
	3) filesystem perf testing 
		microbenchmarks (iozone, bonnie etc.)
		bigger benchmarks (dbench, specsfs, specweb etc.)





* Re: cifs large write performance improvements to Samba
  2004-12-16 12:11       ` Marcelo Tosatti
  2004-12-16 16:39         ` automated filesystem testing for multiple Linux fs Steve French
@ 2004-12-16 18:58         ` Hans Reiser
  2004-12-16 16:30           ` Marcelo Tosatti
  1 sibling, 1 reply; 11+ messages in thread
From: Hans Reiser @ 2004-12-16 18:58 UTC (permalink / raw)
  To: Marcelo Tosatti; +Cc: Steve French, cliff white, linux-kernel

Marcelo Tosatti wrote:

>> I have not seen anyone doing that on Linux in an automated fashion 
>> (e.g. running iozone automatically every time a new 2.6.x-rc appears, 
>> on half a dozen of the fs, simply to verify that things have not 
>> gotten drastically worse on a particular fs due to a bug or side 
>> effect of a global VFS change).
>
> Yes, we definitely need that.

Andrew Morton is saying that iozone does things real apps don't do, that 
is, it dirties mmap'd pages enough to swamp the machine.

Do you guys agree or disagree with that?

Reiser4 needs iozone optimization work which we haven't bothered with yet.


* Re: automated filesystem testing for multiple Linux fs
  2004-12-16 16:39         ` automated filesystem testing for multiple Linux fs Steve French
@ 2004-12-17  0:27           ` Michael Clark
  2004-12-17  0:51             ` Steve French
  0 siblings, 1 reply; 11+ messages in thread
From: Michael Clark @ 2004-12-17  0:27 UTC (permalink / raw)
  To: Steve French; +Cc: Marcelo Tosatti, cliffw, linux-kernel

Steve French wrote:

>...  Since
>at present only XFS and JFS have the full combination of server
>features: better quotas, DMAPI, xattr support, ACL support and
>nanosecond file timestamps on disk
>

Does JFS have quota support now?

Last I looked it was still on the To Do list.

~mc


* Re: automated filesystem testing for multiple Linux fs
  2004-12-17  0:27           ` Michael Clark
@ 2004-12-17  0:51             ` Steve French
  2004-12-17  2:23               ` Michael Clark
  0 siblings, 1 reply; 11+ messages in thread
From: Steve French @ 2004-12-17  0:51 UTC (permalink / raw)
  To: Michael Clark; +Cc: Marcelo Tosatti, cliffw, linux-kernel

Michael Clark wrote:

> Steve French wrote:
>
>> ...  Since
>> at present only XFS and JFS have the full combination of server
>> features: better quotas, DMAPI, xattr support, ACL support and
>> nanosecond file timestamps on disk
>>
>
> Does JFS have quota support now?
>
> Last I looked it was still on the To Do list.
>
> ~mc
>
I remember them adding it four months ago or so.  Looking at 
http://linux.bkbits.net/linux-2.5
it seems to be mostly in changeset 1.1803.133.1

Now if I could only figure out a way to get the quota tools to work with 
a network filesystem :)  (NFS bypasses the kernel for quotas by hooking 
directly into the userspace tools, which is no better than the hack we 
have to do with the samba client utilities for setting quotas out of 
kernel today.)

It would be fairly easy for me to hook cifs into getting called from 
do_quotactl (in fs/quota.c), but that interface only works with local 
filesystems that have real devices (not deviceless filesystems like 
network and some cluster filesystems).  I find it very strange that the 
quota interface takes a path, converts it to a local device, and then 
converts that to a superblock.  If only we could define a syscall to 
allow deviceless filesystems to hook into the kernel quota interface - 
it looks like a small change to create a one-off of sys_quotactl.
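
To illustrate, here is roughly what the current interface demands from 
userspace - the caller has to name a block special device, which is 
exactly what a cifs mount does not have (the device path and uid below 
are hypothetical, and dqb_curspace assumes the v2 quota format):

	#include <stdio.h>
	#include <sys/types.h>
	#include <sys/quota.h>

	int main(void)
	{
		struct dqblk dq;

		/* fine for a local fs backed by a real device ... */
		if (quotactl(QCMD(Q_GETQUOTA, USRQUOTA), "/dev/sda1",
			     1000, (caddr_t)&dq) < 0) {
			perror("quotactl");
			return 1;
		}
		printf("space used: %llu bytes\n",
		       (unsigned long long)dq.dqb_curspace);
		/* ... but a deviceless cifs mount has no device node
		   to name here, so it cannot reach this interface */
		return 0;
	}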



* Re: automated filesystem testing for multiple Linux fs
  2004-12-17  0:51             ` Steve French
@ 2004-12-17  2:23               ` Michael Clark
  0 siblings, 0 replies; 11+ messages in thread
From: Michael Clark @ 2004-12-17  2:23 UTC (permalink / raw)
  To: Steve French; +Cc: linux-kernel

Steve French wrote:

> Michael Clark wrote:
>
>> Steve French wrote:
>>
>>> ...  Since
>>> at present only XFS and JFS have the full combination of server
>>> features: better quotas, DMAPI, xattr support, ACL support and
>>> nanosecond file timestamps on disk
>>>
>>
>> Does JFS have quota support now?
>>
>> Last I looked it was still on the To Do list.
>>
>> ~mc
>>
> I remember them adding it four months ago or so.  Looking at 
> http://linux.bkbits.net/linux-2.5
> it seems to be mostly in changeset 1.1803.133.1


Oh, that's good news.  This was one reason you couldn't really consider 
using JFS on a /home fileserver (which sort of implies quotas).  Perhaps 
it needs a lot of testing as it's quite new.  Any experiences?  (i.e. 
does it survive a highly parallel load from a lot of threads with 
different uids?)

~mc

