Date: Sun, 7 Apr 2019 09:32:13 +0200
From: Christoph Hellwig
To: Andreas Gruenbacher
Cc: Christoph Hellwig, cluster-devel, Dave Chinner, Ross Lagerwall,
    Mark Syms, Edwin Török, linux-fsdevel, Jan Kara, linux-mm@kvack.org
Subject: Re: gfs2 iomap deadlock, IOMAP_F_UNBALANCED
Message-ID: <20190407073213.GA9509@lst.de>
References: <20190321131304.21618-1-agruenba@redhat.com>
 <20190328165104.GA21552@lst.de>

[adding Jan and linux-mm]

On Fri, Mar 29, 2019 at 11:13:00PM +0100, Andreas Gruenbacher wrote:
> > But what is the requirement to do this in writeback context?  Can't
> > we move it out into another context instead?
>
> Indeed, this isn't for data integrity in this case but because the
> dirty limit is exceeded.  What other context would you suggest to move
> this to?
>
> (The iomap flag I've proposed would save us from getting into this
> situation in the first place.)

Your patch does two things:

 - it only calls balance_dirty_pages_ratelimited once per write
   operation instead of once per page (sketched at the end of this
   mail).  In the past btrfs did hacks like that, but IIRC they caused
   VM balancing issues, which is why everyone now calls
   balance_dirty_pages_ratelimited once per page.  If calling it at a
   coarser granularity were fine, we should do that everywhere instead
   of just in gfs2's journaled mode.

 - it artificially reduces the size of writes to a low value, which I
   suspect is going to break real-life applications.

So I really think we need to fix this properly.  And if that means that
you can't make use of the iomap batching for gfs2 in journaled mode,
that is still a better option.  But I really think you need to look
into the scope of your flush_log and figure out a good way to reduce
that and solve the root cause.
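
For reference, the structural difference between the two approaches is
just where the throttling call sits in the buffered write loop.  A
minimal sketch, not the actual gfs2/iomap code -- write_one_page_chunk()
is a hypothetical stand-in for the usual write_begin/copy/write_end
sequence:

#include <linux/fs.h>		/* struct address_space */
#include <linux/uio.h>		/* struct iov_iter, iov_iter_count() */
#include <linux/writeback.h>	/* balance_dirty_pages_ratelimited() */

/* Hypothetical helper: dirty one page worth of data from the iterator. */
static ssize_t write_one_page_chunk(struct address_space *mapping,
				    struct iov_iter *i);

/* What everyone does today: throttle after every dirtied page. */
static ssize_t write_loop_per_page(struct address_space *mapping,
				   struct iov_iter *i)
{
	ssize_t written = 0;

	while (iov_iter_count(i)) {
		ssize_t n = write_one_page_chunk(mapping, i);

		if (n <= 0)
			break;
		written += n;
		/* The writer is throttled as soon as each page is dirtied. */
		balance_dirty_pages_ratelimited(mapping);
	}
	return written;
}

/* What the patch effectively does: throttle once per write operation. */
static ssize_t write_loop_per_op(struct address_space *mapping,
				 struct iov_iter *i)
{
	ssize_t written = 0;

	while (iov_iter_count(i)) {
		ssize_t n = write_one_page_chunk(mapping, i);

		if (n <= 0)
			break;
		written += n;
	}
	/* Dirty pages can overshoot the limit before we throttle here. */
	balance_dirty_pages_ratelimited(mapping);
	return written;
}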
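
And for context, my reading of the proposed IOMAP_F_UNBALANCED flag is
roughly the shape below.  This is an illustration only: the flag's bit
value and the helper are my assumptions, not what the actual patch
does.  The idea is that the filesystem sets the flag in ->iomap_begin
while it holds locks (e.g. an open transaction) under which throttling
is unsafe, the generic write loop then skips the per-page call, and the
filesystem balances later itself once it is safe to do so:

#include <linux/iomap.h>	/* struct iomap */
#include <linux/writeback.h>	/* balance_dirty_pages_ratelimited() */

/* Assumed bit value, purely for illustration. */
#define IOMAP_F_UNBALANCED	0x40

/* Hypothetical helper called from the generic buffered write loop. */
static void maybe_balance(struct address_space *mapping,
			  const struct iomap *iomap)
{
	if (!(iomap->flags & IOMAP_F_UNBALANCED))
		balance_dirty_pages_ratelimited(mapping);
	/* else: the filesystem promises to balance once it is safe */
}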