From: Richard Scobie
Subject: Re: Linux Raid performance
Date: Sat, 03 Apr 2010 14:14:24 +1300
Message-ID: <4BB69670.3040303@sauce.co.nz>
References: <20100331201539.GA19395@rap.rap.dk> <20100402110506.GA16294@rap.rap.dk>
To: Mark Knecht
Cc: Learner Study, linux-raid@vger.kernel.org, keld@dkuug.dk
List-Id: linux-raid.ids

Mark Knecht wrote:

> Once all of that is in place then possibly more cores will help, but I
> suspect even then it probably hard to use 4 billion CPU cycles/second
> doing nothing but disk I/O. SATA controllers are all doing DMA so CPU
> overhead is relatively *very* low.

There are the RAID5/6 parity calculations to consider on writes, and these appear to be single threaded. There is an experimental multicore kernel option I believe, but recent discussion indicates there may be some problems with it.

A very quick test on a box here: a Xeon E5440 (4 x 2.8GHz) with a SAS-attached 16 x 750GB SATA md RAID6. The array is 72% full and probably quite fragmented, and the system is currently otherwise idle.

dd if=/dev/zero of=/mnt/storage/dump bs=1M count=20000
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 87.2374 s, 240 MB/s

Looking at the output of vmstat 5 and mpstat -P ALL 5 during this run (see the P.S. below for the exact commands), one core (probably the one doing parity generation) was around 7.56% idle, and the other three were around 88.5%, 67.5% and 51.8% idle.

The same test, run when the system was commissioned and the array was empty, achieved 565MB/s writes.

Regards,

Richard
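
P.S. In case anyone wants to repeat the comparison, here is a minimal way to run the same write test with the per-CPU monitoring captured alongside it. The mount point and dd parameters are the ones used above; the log file names are just examples, and running vmstat/mpstat in separate terminals works just as well.

    # sample per-CPU and overall utilisation every 5 seconds, in the background
    mpstat -P ALL 5 > /tmp/mpstat.log &
    vmstat 5 > /tmp/vmstat.log &

    # the write test itself: 20000 x 1MiB of zeros to the md RAID6 filesystem
    dd if=/dev/zero of=/mnt/storage/dump bs=1M count=20000

    # stop the samplers, then look at the %idle column for each core
    kill %1 %2
    less /tmp/mpstat.log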