From: Dave Chinner <david@fromorbit.com>
To: Vivek Goyal
Cc: Linus Torvalds, Miklos Szeredi, Qian Cai, Hugh Dickins,
	Matthew Wilcox, Kirill A. Shutemov, Linux-MM, Andrew Morton,
	linux-fsdevel, Amir Goldstein
Subject: Re: Possible deadlock in fuse write path (Was: Re: [PATCH 0/4] Some more lock_page work..)
Date: Sat, 17 Oct 2020 10:03:29 +1100
Message-ID: <20201016230329.GC7322@dread.disaster.area>
References: <4794a3fa3742a5e84fb0f934944204b55730829b.camel@lca.pw>
	<20201015151606.GA226448@redhat.com>
	<20201015195526.GC226448@redhat.com>
	<20201016181908.GA282856@redhat.com>
In-Reply-To: <20201016181908.GA282856@redhat.com>

On Fri, Oct 16, 2020 at 02:19:08PM -0400, Vivek Goyal wrote:
> On Thu, Oct 15, 2020 at 02:21:58PM -0700, Linus Torvalds wrote:
> > [..]
> >
> > I don't know why fuse does multiple pages to begin with. Why can't it
> > do whatever it does just one page at a time?
>
> Sending multiple pages in a single WRITE command does seem to help a lot
> with performance. I modified the code to write only one page at a time
> and ran a fio job with sequential writes (and random writes), block size
> 64K, and compared the performance on virtiofs.
>
> NAME              WORKLOAD          Bandwidth   IOPS
> one-page-write    seqwrite-psync     58.3mb      933
> multi-page-write  seqwrite-psync    265.7mb     4251
>
> one-page-write    randwrite-psync    53.5mb      856
> multi-page-write  randwrite-psync   315.5mb     5047
>
> So with multi-page writes, performance seems much better for this
> particular workload.

Huh. This is essentially the problem the iomap buffered write path was
designed to solve. Filesystems like gfs2 saw similar major improvements
in large buffered write throughput when they switched to iomap for
buffered IO....

Essentially, it works by having iomap_apply() first ask the filesystem
to map the IO range, then iterate the page cache across that range
performing the desired operation (iomap_write_actor() in the case of a
buffered write), and finally tell the filesystem how much of the
original range was copied into the page cache.

Hence the filesystem only does one mapping/completion operation per
contiguous IO range instead of one per dirtied page, and the inner loop
just locks one page at a time as it works over the range. Pages are
marked uptodate+dirty as the user data is copied into them, not only
once the entire IO has been written.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
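
[A minimal userspace sketch of the calling pattern Dave describes above.
The names here (apply_range, fs_map_range, fs_end_range, write_actor) are
simplified stand-ins, not the real struct iomap / iomap_apply() /
iomap_write_actor() interface in fs/iomap/; it only models the shape of
the loop: one map call and one completion call per contiguous range, with
a per-page actor inside.]

/*
 * Userspace model of the pattern described above: the "filesystem" maps
 * a contiguous extent once, a per-page actor is then run across that
 * extent, and finally the filesystem is told how much of the range was
 * actually consumed.  All names and types are simplified stand-ins, not
 * the real <linux/iomap.h> definitions.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL

struct extent {
	unsigned long start;	/* offset of the mapped extent */
	unsigned long len;	/* length of the mapped extent */
};

/* "Filesystem" mapping callback: map as much of [pos, pos + len) as it can. */
static void fs_map_range(unsigned long pos, unsigned long len,
			 struct extent *ext)
{
	/*
	 * Pretend the whole request maps in one go; a real filesystem may
	 * return a shorter extent, in which case the caller loops.
	 */
	ext->start = pos;
	ext->len = len;
	printf("map:      pos=%lu len=%lu (one call per contiguous range)\n",
	       pos, len);
}

/* "Filesystem" completion callback: how much of the mapping was used. */
static void fs_end_range(unsigned long pos, unsigned long copied)
{
	printf("complete: pos=%lu copied=%lu\n", pos, copied);
}

/* Per-page actor: lock one page, copy user data in, mark it uptodate+dirty. */
static unsigned long write_actor(unsigned long pos, unsigned long len)
{
	unsigned long chunk = len < PAGE_SIZE ? len : PAGE_SIZE;

	printf("  actor:  page at %lu, copy %lu bytes, mark uptodate+dirty\n",
	       pos, chunk);
	return chunk;		/* bytes copied into this page */
}

/* One map + one completion per extent; the actor runs once per page. */
static unsigned long apply_range(unsigned long pos, unsigned long len,
				 unsigned long (*actor)(unsigned long,
							unsigned long))
{
	struct extent ext;
	unsigned long done = 0;

	fs_map_range(pos, len, &ext);
	while (done < ext.len) {
		unsigned long ret = actor(ext.start + done, ext.len - done);

		if (!ret)
			break;
		done += ret;
	}
	fs_end_range(ext.start, done);
	return done;
}

int main(void)
{
	/* A 64k "buffered write": one mapping call, sixteen per-page copies. */
	apply_range(0, 16 * PAGE_SIZE, write_actor);
	return 0;
}

Built with any C compiler, this prints one mapping line, sixteen per-page
actor lines and one completion line for the 64k write, which is the shape
of the saving described above: one map/complete round trip per contiguous
range rather than one per dirtied page.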