From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751360AbZIEM57 (ORCPT );
	Sat, 5 Sep 2009 08:57:59 -0400
Received: (majordomo@vger.kernel.org) by vger.kernel.org
	id S1751170AbZIEM56 (ORCPT );
	Sat, 5 Sep 2009 08:57:58 -0400
Received: from rtr.ca ([76.10.145.34]:60622 "EHLO mail.rtr.ca"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1750901AbZIEM55 (ORCPT );
	Sat, 5 Sep 2009 08:57:57 -0400
Message-ID: <4AA26055.2090400@rtr.ca>
Date: Sat, 05 Sep 2009 08:57:57 -0400
From: Mark Lord 
Organization: Real-Time Remedies Inc.
User-Agent: Thunderbird 2.0.0.23 (X11/20090817)
MIME-Version: 1.0
To: Ric Wheeler 
Cc: Krzysztof Halasa , Christoph Hellwig , Michael Tokarev ,
	david@lang.hm, Pavel Machek , Theodore Tso , NeilBrown ,
	Rob Landley , Florian Weimer , Goswin von Brederlow ,
	kernel list , Andrew Morton , mtk.manpages@gmail.com,
	rdunlap@xenotime.net, linux-doc@vger.kernel.org,
	linux-ext4@vger.kernel.org, corbet@lwn.net
Subject: Re: wishful thinking about atomic, multi-sector or full MD stripe
	width, writes in storage
References: <20090828064449.GA27528@elf.ucw.cz> <20090828120854.GA8153@mit.edu>
	<20090830075135.GA1874@ucw.cz> <4A9A88B6.9050902@redhat.com>
	<4A9A9034.8000703@msgid.tls.msk.ru> <20090830163513.GA25899@infradead.org>
	<4A9BCCEF.7010402@redhat.com> <20090831131626.GA17325@infradead.org>
	<4A9BCDFE.50008@rtr.ca> <20090831132139.GA5425@infradead.org>
	<4A9F230F.40707@redhat.com> <4A9FA5F2.9090704@redhat.com>
	<4A9FC9B3.1080809@redhat.com> <4A9FCF6B.1080704@redhat.com>
	<4AA184D7.1010502@rtr.ca> <4AA186B0.5090905@redhat.com>
In-Reply-To: <4AA186B0.5090905@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Ric Wheeler wrote:
> On 09/04/2009 05:21 PM, Mark Lord wrote:
..
>> How about instead, *fixing* the MD layer to properly support barriers?
>> That would be far more useful, productive, and better for end-users.
..
> Fixing MD would be great - not sure that it would end up still faster
> (compare md1 devices with working barriers to md1 devices with the
> write cache disabled).
..
There's no inherent reason for it to be slower, except possibly drives
with b0rked FUA support.

So the first step is to fix MD to pass barriers down to the LLDs for
most/all RAID types.  Then, if that turns out to have performance issues,
they can be addressed by further application of the little grey cells. :)

Cheers