From: Scott Lovenberg
Subject: Re: [PATCH 00/09] cifs: local caching support using FS-Cache
Date: Fri, 30 Jul 2010 19:08:39 -0400
Message-ID: <4C535B77.4040604@gmail.com>
In-Reply-To: <4C480F51.8070204-l3A5Bk7waGM@public.gmane.org>
References: <1278333663-30464-1-git-send-email-sjayaraman@suse.de> <4C3DF6BF.3070001@gmail.com> <4C3F35F7.8060408@suse.de> <4C480F51.8070204@suse.de>
To: Suresh Jayaraman
Cc: Steve French, linux-cifs-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, linux-fsdevel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, linux-cachefs-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org, David Howells

On 7/22/2010 5:28 AM, Suresh Jayaraman wrote:
> Here are some results from my benchmarking:
>
> Environment
> -----------
>
> I'm using my T60p laptop as the CIFS server (running Samba) and one of
> my test machines as the CIFS client, connected over an Ethernet link
> with a reported speed of 1000 Mb/s. The TCP bandwidth as seen by a pair
> of netcats between the client and the server is about 786.24 Mb/s.
>
> The client has a 2.8 GHz Pentium D CPU with 2 GB RAM.
> The server has a 2.33 GHz Core2 CPU (T7600) with 2 GB RAM.
>
> Test
> ----
> The benchmark involves pulling a 200 MB file over CIFS to the client,
> catting it to /dev/zero under `time', and recording the reported
> wall-clock time.
>
> Note
> ----
> - The client was rebooted after each test, but the server was not.
> - The entire file was loaded into RAM on the server before each test
>   to eliminate disk I/O latencies on that end.
> - A separate 4 GB partition was dedicated to the cache.
> - No other CIFS clients were accessing the server while the tests
>   were performed.
>
> First, the test was run on the server twice and the second result was
> recorded (noted as Server below).
>
> Second, the client was rebooted and the test was run with cachefilesd
> not running (noted as None below).
>
> Next, the client was rebooted, the cache contents (if any) were erased
> with mkfs.ext3, and the test was run again with cachefilesd running
> (noted as COLD).
>
> Next, the client was rebooted and the tests were run with cachefilesd
> running, this time with a populated cache (noted as HOT).
>
> Finally, the test was run again without unmounting, stopping
> cachefilesd, or rebooting, to ensure the page cache was still valid
> (noted as PGCACHE).
>
> The benchmark was repeated twice:
>
> Cache (state)   Run #1    Run #2
> =============   =======   =======
> Server          0.107 s   0.090 s
> None            6.497 s   6.440 s
> COLD            6.707 s   6.635 s
> HOT             5.286 s   5.078 s
> PGCACHE         0.090 s   0.091 s
>
> As can be seen, read performance when the data is cache-hot on the
> client's disk is not a big win, which is mostly expected: the link is
> Gigabit Ethernet and the server has the working set in memory, so the
> local disk cache has little room to beat the network. (I could not get
> access to a slower network, say 100 Mb/s, where the real performance
> boost would be evident.)
>
> Thanks,
>

Suresh, thanks for taking the time to run these benchmarks. :)
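
For anyone wanting to reproduce the raw TCP number from the environment
section above, the netcat measurement can be wired up roughly as below.
This is a sketch: the port, the server name, and the transfer size are
assumptions, not from the original post, and netcat flavors differ
(traditional netcat wants -l -p, OpenBSD nc takes just -l; -q 0 is the
GNU/traditional flag that makes nc exit once stdin hits EOF).

  # on the server: listen and discard whatever arrives
  nc -l -p 5001 > /dev/null

  # on the client: push 200 MB of zeroes through the link; putting
  # `time' in front of the pipeline (a bash keyword) times the whole
  # transfer, not just dd
  time dd if=/dev/zero bs=1M count=200 | nc -q 0 server 5001

Dividing the byte count by the elapsed time gives the usable link
throughput, which is how a figure like 786.24 Mb/s falls out of a
nominally 1000 Mb/s link.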
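The client-side cache setup described in the notes presumably amounts to
something like the following sketch. The device name, mount points, and
share path are assumptions; what is not assumed is that CacheFiles needs
extended attributes on the cache partition (hence user_xattr) and that
cachefilesd takes its cache directory from /etc/cachefilesd.conf.

  # one-time setup of the dedicated 4 GB cache partition; re-running
  # mkfs.ext3 is also how the cache would be erased for the COLD runs
  mkfs.ext3 /dev/sdb1
  mount -o user_xattr /dev/sdb1 /var/cache/fscache
  cachefilesd

  # mount the share with local caching enabled; fsc is the mount
  # option that switches on FS-Cache for a CIFS mount, as with NFS
  mount -t cifs //server/share /mnt/cifs -o fsc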
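And the measurement step itself, per the Test section (the mount point
and file name are assumptions; Suresh rebooted the client between runs,
and dropping the page cache via /proc is a lighter-weight, if less
thorough, alternative when iterating):

  # optional: approximate a freshly booted client page cache
  sync; echo 3 > /proc/sys/vm/drop_caches

  # pull the 200 MB file over CIFS and record the wall-clock time
  time cat /mnt/cifs/testfile > /dev/zero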