Date: Wed, 27 Mar 2019 12:29:03 +1100
From: Dave Chinner <david@fromorbit.com>
To: Amir Goldstein
Cc: Matthew Wilcox, "Darrick J. Wong", linux-xfs, Christoph Hellwig,
        linux-fsdevel
Subject: Re: [QUESTION] Long read latencies on mixed rw buffered IO
Message-ID: <20190327012903.GH23020@dastard>
References: <20190325154731.GT1183@magnolia>
        <20190325164129.GH10344@bombadil.infradead.org>
        <20190325182239.GI10344@bombadil.infradead.org>
        <20190325194021.GJ10344@bombadil.infradead.org>
        <20190325234838.GC23020@dastard>

On Tue, Mar 26, 2019 at 05:44:34AM +0200, Amir Goldstein wrote:
> On Tue, Mar 26, 2019 at 1:48 AM Dave Chinner wrote:
> > On Mon, Mar 25, 2019 at 09:57:46PM +0200, Amir Goldstein wrote:
> > > Not as bad as v1. Only a little bit worse than master...
> > > The whole deal is with the read/write balance, and on SSD I imagine
> > > the balance really changes. That's why I was skeptical about a
> > > one-size-fits-all read/write balance.
> >
> > You're not testing your SSD. You're testing writes into cache vs
> > reads from disk. There is a massive latency difference between the two
> > operations, so unless you use O_DSYNC for the writes, you are going
> > to see this cached-vs-uncached performance imbalance. i.e. unless the
> > rwsem is truly fair, there is always going to be more writer
> > access to the lock because writers spend less time holding it and so
> > can put much more pressure on it.
> >
> Yeah, I know. An SSD makes the balance better because of faster reads
> from disk. I was pointing out that the worst case I am interested in is
> on spindles. That said, O_DSYNC certainly does improve the balance
> and gives shorter worst-case latencies. However, it does not make the
> problem go away. Taking i_rwsem (even for 4K reads) takes its toll
> on write latencies (compared to ext4).

Sure, that's because it serialises access across the entire file, not
just the page that is being read from or written to.
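
To make that concrete, the whole-file serialisation looks roughly like
the sketch below. It is a simplified userspace illustration, not the
real XFS code: pthread_rwlock_t stands in for the inode's i_rwsem, and
the inode_model/buffered_read/buffered_write names are made up for the
example.

        /*
         * Simplified userspace sketch of whole-file serialisation:
         * pthread_rwlock_t stands in for the inode's i_rwsem, and the
         * helpers below are illustrative, not the real kernel paths.
         */
        #include <pthread.h>
        #include <string.h>
        #include <sys/types.h>

        struct inode_model {
                pthread_rwlock_t i_rwsem;  /* one lock covering the whole file */
                char *data;                /* stand-in for the page cache */
        };

        /* Every buffered read takes the whole-file lock shared... */
        static ssize_t buffered_read(struct inode_model *ip, char *buf,
                                     size_t len, off_t pos)
        {
                pthread_rwlock_rdlock(&ip->i_rwsem);
                memcpy(buf, ip->data + pos, len);
                pthread_rwlock_unlock(&ip->i_rwsem);
                return len;
        }

        /*
         * ...and every buffered write takes it exclusive. Even a 4KB
         * write at offset 0 blocks a concurrent read at a completely
         * different offset, because the lock covers the entire file,
         * not just the range being modified.
         */
        static ssize_t buffered_write(struct inode_model *ip, const char *buf,
                                      size_t len, off_t pos)
        {
                pthread_rwlock_wrlock(&ip->i_rwsem);
                memcpy(ip->data + pos, buf, len);
                pthread_rwlock_unlock(&ip->i_rwsem);
                return len;
        }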
The solution here is to move to concurrent buffered writes, similar to
how we do direct IO. To do that we need to track IO ranges so that we
can serialise unaligned and/or overlapping IOs, and that requires fast,
efficient range locks.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
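
For what it's worth, the rough shape of the range-lock idea is something
like the sketch below. This is a naive userspace illustration with
pthreads, not a proposal for the kernel implementation; the
file_range_lock/range_lock/range_unlock names are invented for the
example, and the linked-list walk is exactly the kind of thing that
would need to become an interval tree or similar to be fast enough.

        /*
         * Naive range lock: writers lock only the byte range they
         * touch, so non-overlapping buffered writes can run
         * concurrently. O(n) per lock/unlock; illustration only.
         */
        #include <pthread.h>
        #include <stdlib.h>
        #include <sys/types.h>

        struct locked_range {
                off_t start, end;               /* inclusive byte range */
                struct locked_range *next;
        };

        struct file_range_lock {
                pthread_mutex_t lock;
                pthread_cond_t wait;
                struct locked_range *held;      /* currently locked ranges */
        };

        static int ranges_overlap(const struct locked_range *r,
                                  off_t start, off_t end)
        {
                return r->start <= end && start <= r->end;
        }

        /* Block until [start, end] overlaps no held range, then claim it. */
        static void range_lock(struct file_range_lock *frl, off_t start, off_t end)
        {
                struct locked_range *r, *new = malloc(sizeof(*new));

                new->start = start;
                new->end = end;

                pthread_mutex_lock(&frl->lock);
        retry:
                for (r = frl->held; r; r = r->next) {
                        if (ranges_overlap(r, start, end)) {
                                pthread_cond_wait(&frl->wait, &frl->lock);
                                goto retry;
                        }
                }
                new->next = frl->held;
                frl->held = new;
                pthread_mutex_unlock(&frl->lock);
        }

        /* Drop [start, end] and wake anyone waiting on an overlapping range. */
        static void range_unlock(struct file_range_lock *frl, off_t start, off_t end)
        {
                struct locked_range **rp, *r;

                pthread_mutex_lock(&frl->lock);
                for (rp = &frl->held; (r = *rp); rp = &r->next) {
                        if (r->start == start && r->end == end) {
                                *rp = r->next;
                                free(r);
                                break;
                        }
                }
                pthread_cond_broadcast(&frl->wait);
                pthread_mutex_unlock(&frl->lock);
        }

A buffered write would then take range_lock() over just the bytes it is
going to dirty instead of taking i_rwsem exclusively, so writes (and
reads) to different parts of the file no longer serialise against each
other.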