Subject: Re: [RFC] performance regression with "ext4: Allow parallel DIO reads"
From: Joseph Qi <joseph.qi@linux.alibaba.com>
To: Dave Chinner
Cc: "Theodore Y. Ts'o", Jan Kara, Joseph Qi, Andreas Dilger,
    Ext4 Developers List, Xiaoguang Wang, Liu Bo
Date: Fri, 23 Aug 2019 15:57:02 +0800
In-Reply-To: <20190822054001.GT7777@dread.disaster.area>

Hi Dave,

On 19/8/22 13:40, Dave Chinner wrote:
> On Wed, Aug 21, 2019 at 09:04:57AM +0800, Joseph Qi wrote:
>> Hi Ted,
>>
>> On 19/8/21 00:08, Theodore Y. Ts'o wrote:
>>> On Tue, Aug 20, 2019 at 11:00:39AM +0800, Joseph Qi wrote:
>>>>
>>>> I've tested parallel dio reads with dioread_nolock; it doesn't show a
>>>> significant performance improvement and is still poor compared with
>>>> reverting parallel dio reads. IMO, this is because with parallel dio
>>>> reads, it takes the inode shared lock at the very beginning in
>>>> ext4_direct_IO_read().
>>>
>>> Why is that a problem?  It's a shared lock, so parallel threads should
>>> be able to issue reads without getting serialized?
>>>
>> The above just reports the result: even when mounting with
>> dioread_nolock, parallel dio reads still performs worse than before
>> (w/o parallel dio reads).
>>
>>> Are you using sufficiently fast storage devices that you're worried
>>> about cache line bouncing of the shared lock?  Or do you have some
>>> other concern, such as some other thread taking an exclusive lock?
>>>
>> The test case is the random read/write workload described in my first
>> mail. And
>
> Regardless of dioread_nolock, ext4_direct_IO_read() is taking
> inode_lock_shared() across the direct IO call.  And writes in ext4
> _always_ take the inode_lock() in ext4_file_write_iter(), even
> though it gets dropped quite early when overwrite && dioread_nolock
> is set.  But just taking the lock exclusively in the write path for a
> short while is enough to kill all shared locking concurrency...
>
>> from my preliminary investigation, the shared lock accounts for more
>> of the overhead in this scenario.
>
> If the write lock is also shared, then there should not be a
> scalability issue. The shared dio locking is only half-done in ext4,
> so perhaps comparing your workload against XFS would be an
> informative exercise...
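For context, here is a minimal sketch of the locking pattern Dave
describes. This is not verbatim kernel source: ext4_do_dio() is a
hypothetical stand-in for the real submission path
(__blockdev_direct_IO() and friends), and error handling is omitted.

    #include <linux/fs.h>
    #include <linux/uio.h>

    /* Hypothetical stand-in for the actual direct IO submission path. */
    static ssize_t ext4_do_dio(struct kiocb *iocb, struct iov_iter *iter);

    /* Direct IO reads hold the shared inode lock across the whole IO. */
    static ssize_t dio_read_sketch(struct kiocb *iocb, struct iov_iter *iter)
    {
            struct inode *inode = file_inode(iocb->ki_filp);
            ssize_t ret;

            inode_lock_shared(inode);
            ret = ext4_do_dio(iocb, iter);
            inode_unlock_shared(inode);
            return ret;
    }

    /*
     * Writes always start with the exclusive inode lock. With
     * overwrite && dioread_nolock it is dropped soon after, but even
     * a brief exclusive hold forces every concurrent shared (read)
     * holder on the same inode to drain and queue up behind it.
     */
    static ssize_t write_iter_sketch(struct kiocb *iocb, struct iov_iter *from)
    {
            struct inode *inode = file_inode(iocb->ki_filp);
            ssize_t ret;

            inode_lock(inode);
            ret = ext4_do_dio(iocb, from);
            inode_unlock(inode);
            return ret;
    }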
I've run the same test workload on xfs; it behaves the same as ext4
after reverting parallel dio reads and mounting with dioread_lock.
Here is the test result:

psync, randrw, direct=1, numjobs=8

4k:
-----------------------------------------
ext4 | READ 123450KB/s | WRITE 123368KB/s
-----------------------------------------
xfs  | READ 123848KB/s | WRITE 123761KB/s
-----------------------------------------

16k:
-----------------------------------------
ext4 | READ 222477KB/s | WRITE 222322KB/s
-----------------------------------------
xfs  | READ 223261KB/s | WRITE 223106KB/s
-----------------------------------------

64k:
-----------------------------------------
ext4 | READ 427406KB/s | WRITE 426197KB/s
-----------------------------------------
xfs  | READ 403697KB/s | WRITE 402555KB/s
-----------------------------------------

512k:
-----------------------------------------
ext4 | READ 618752KB/s | WRITE 619054KB/s
-----------------------------------------
xfs  | READ 614954KB/s | WRITE 615254KB/s
-----------------------------------------

1M:
-----------------------------------------
ext4 | READ 615011KB/s | WRITE 612255KB/s
-----------------------------------------
xfs  | READ 624087KB/s | WRITE 621290KB/s
-----------------------------------------
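FWIW, the parameters above look like an fio workload; assuming so, it
should be reproducible with an invocation along these lines (the
filename, size, and runtime here are illustrative guesses, not taken
from the original report):

    fio --name=randrw --filename=/mnt/test/file --rw=randrw \
        --ioengine=psync --direct=1 --numjobs=8 --bs=4k \
        --size=1G --runtime=60 --time_based --group_reporting

with --bs stepped through 4k/16k/64k/512k/1M to match the table above.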