From: Andreas Gruenbacher
Date: Fri, 22 Mar 2019 00:01:27 +0100
Subject: Re: gfs2 iomap deadlock, IOMAP_F_UNBALANCED
To: Dave Chinner
Cc: Christoph Hellwig, cluster-devel, Ross Lagerwall, Mark Syms,
    Edwin Török, linux-fsdevel
In-Reply-To: <20190321214345.GE26298@dastard>
References: <20190321131304.21618-1-agruenba@redhat.com>
    <20190321214345.GE26298@dastard>

On Thu, 21 Mar 2019 at 22:43, Dave Chinner wrote:
> The problem is calling balance_dirty_pages() inside the
> ->iomap_begin/->iomap_end calls and not that it is called by the
> iomap infrastructure itself, right?
>
> If so, I'd prefer to see this in iomap_apply() after the call to
> ops->iomap_end because iomap_file_buffered_write() can iterate and
> call iomap_apply() multiple times. This would keep the balancing to
> a per-iomap granularity, rather than a per-syscall granularity.
>
> i.e. if we do write(2GB), we want more than one balancing call
> during that syscall, so it would be up to the filesystem to a) limit
> the size of write mappings to something smaller (e.g. 1024 pages)
> so that there are still frequent balancing calls for large writes.

Hmm.
The looping across multiple mappings isn't done in iomap_apply() but in
iomap_file_buffered_write(), so the balancing call could go into either
iomap_apply() or iomap_file_buffered_write(), but it can't move further up
the stack. Given that, iomap_file_buffered_write() seems the better place
of the two (rough sketch below), but this is still quite horrible.

Thanks,
Andreas
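For illustration, a rough sketch of that placement (not actual kernel code,
and it leaves aside how the per-page balance_dirty_pages_ratelimited() call
in iomap_write_actor(), or a flag like IOMAP_F_UNBALANCED, would tie in): a
simplified iomap_file_buffered_write() that balances once per mapping, after
iomap_apply() has returned and ->iomap_end has dropped any filesystem locks:

ssize_t
iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *iter,
		const struct iomap_ops *ops)
{
	struct inode *inode = iocb->ki_filp->f_mapping->host;
	loff_t pos = iocb->ki_pos, ret = 0, written = 0;

	while (iov_iter_count(iter)) {
		/* One mapping per iteration: ->iomap_begin, copy data, ->iomap_end. */
		ret = iomap_apply(inode, pos, iov_iter_count(iter),
				IOMAP_WRITE, ops, iter, iomap_write_actor);
		if (ret <= 0)
			break;
		pos += ret;
		written += ret;

		/*
		 * Balance once per mapping, outside any filesystem locks.
		 * A large write still gets throttled regularly as long as
		 * the filesystem keeps its write mappings reasonably small.
		 */
		balance_dirty_pages_ratelimited(iocb->ki_filp->f_mapping);
	}

	return written ? written : ret;
}

That would keep the granularity per-iomap rather than per-syscall, as you
suggest, provided the filesystem limits the size of each write mapping.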