From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 23 Sep 2009 11:25:08 +0800
From: Wu Fengguang
To: "Li, Shaohua"
Cc: Andrew Morton, Chris Mason, Peter Zijlstra,
	"linux-kernel@vger.kernel.org", "richard@rsk.demon.co.uk",
	"jens.axboe@oracle.com"
Subject: Re: regression in page writeback
Message-ID: <20090923032508.GA28860@localhost>
References: <20090922175452.d66400dd.akpm@linux-foundation.org>
	<20090923011758.GC6382@localhost>
	<20090922182832.28e7f73a.akpm@linux-foundation.org>
	<20090923014500.GA11076@localhost>
	<20090922185941.1118e011.akpm@linux-foundation.org>
	<20090923022622.GB11918@localhost>
	<20090922193622.42c00012.akpm@linux-foundation.org>
	<20090923024958.GA13205@localhost>
	<20090923031012.GA7358@sli10-desk.sh.intel.com>
	<20090923031450.GC26530@localhost>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
In-Reply-To: <20090923031450.GC26530@localhost>
User-Agent: Mutt/1.5.18 (2008-05-17)
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Sep 23, 2009 at 11:14:50AM +0800, Wu Fengguang wrote:
> On Wed, Sep 23, 2009 at 11:10:12AM +0800, Li, Shaohua wrote:
> > On Wed, Sep 23, 2009 at 10:49:58AM +0800, Wu, Fengguang wrote:
> > > On Wed, Sep 23, 2009 at 10:36:22AM +0800, Andrew Morton wrote:
> > > > On Wed, 23 Sep 2009 10:26:22 +0800 Wu Fengguang wrote:
> > > > >
> > > > > On Wed, Sep 23, 2009 at 09:59:41AM +0800, Andrew Morton wrote:
> > > > > > On Wed, 23 Sep 2009 09:45:00 +0800 Wu Fengguang wrote:
> > > > > >
> > > > > > > On Wed, Sep 23, 2009 at 09:28:32AM +0800, Andrew Morton wrote:
> > > > > > > > On Wed, 23 Sep 2009 09:17:58 +0800 Wu Fengguang wrote:
> > > > > > > >
> > > > > > > > > On Wed, Sep 23, 2009 at 08:54:52AM +0800, Andrew Morton wrote:
> > > > > > > > > > On Wed, 23 Sep 2009 08:22:20 +0800 Wu Fengguang wrote:
> > > > > > > > > >
> > > > > > > > > > > Jens' per-bdi writeback has another improvement. In 2.6.31, when
> > > > > > > > > > > superblocks A and B both have 100000 dirty pages, it will first
> > > > > > > > > > > exhaust A's 100000 dirty pages before going on to sync B's.
> > > > > > > > > >
> > > > > > > > > > That would only be true if someone broke 2.6.31. Did they?
> > > > > > > > > >
> > > > > > > > > > SYSCALL_DEFINE0(sync)
> > > > > > > > > > {
> > > > > > > > > > 	wakeup_pdflush(0);
> > > > > > > > > > 	sync_filesystems(0);
> > > > > > > > > > 	sync_filesystems(1);
> > > > > > > > > > 	if (unlikely(laptop_mode))
> > > > > > > > > > 		laptop_sync_completion();
> > > > > > > > > > 	return 0;
> > > > > > > > > > }
> > > > > > > > > >
> > > > > > > > > > the sync_filesystems(0) is supposed to non-blockingly start IO
> > > > > > > > > > against all devices. It used to do that correctly. But people
> > > > > > > > > > mucked with it so perhaps it no longer does.
> > > > > > > > >
> > > > > > > > > I'm referring to writeback_inodes(). Each invocation of it (to
> > > > > > > > > sync 4MB) will do the same iteration over superblocks
> > > > > > > > > A => B => C ... So if A has dirty pages, it will always be
> > > > > > > > > served first.
> > > > > > > > >
> > > > > > > > > So if wbc->bdi == NULL (which is true for kupdate/background
> > > > > > > > > sync), it will have to first exhaust A before going on to B
> > > > > > > > > and C.
> > > > > > > >
> > > > > > > > But that works OK.
> > > > > > > > We fill the first device's queue, then it gets congested and
> > > > > > > > sync_sb_inodes() does nothing and we advance to the next queue.
> > > > > > >
> > > > > > > So in common cases "exhaust" is a bit exaggerated, but A does
> > > > > > > receive much more opportunity than B. Computation resources for IO
> > > > > > > submission are unbalanced toward A, and there are pointless
> > > > > > > overheads in rechecking A.
> > > > > >
> > > > > > That's unquantified handwaving. One CPU can do a *lot* of IO.
> > > > >
> > > > > Yes.. I had the impression that the writeback submission can be
> > > > > pretty slow. It should be because of the congestion_wait. Now that it
> > > > > is removed, things are going faster when the queue is not full.
> > > >
> > > > What? The wait is short. The design intent there is that we repoll
> > > > all previously-congested queues well before they start to run empty.
> > >
> > > When the queue is not congested (in which case congestion_wait is not
> > > necessary), congestion_wait() degrades IO submission speed to near
> > > IO completion speed.
> > >
> > > > > > > > If a device has more than a queue's worth of dirty data then
> > > > > > > > we'll probably leave some of that dirty memory un-queued, so
> > > > > > > > there's some lack of concurrency in that situation.
> > > > > > >
> > > > > > > Good insight.
> > > > > >
> > > > > > It was wrong. See the other email.
> > > > >
> > > > > No, your first insight is correct. The (unnecessary) teeny sleeps
> > > > > are independent of the A => B => C traversal order; only queue
> > > > > congestion could help skip A.
> > > >
> > > > The sleeps are completely necessary! Otherwise we end up busywaiting.
> > > >
> > > > After the sleep we repoll all queues.
> > >
> > > I mean, they are not always necessary. Only when _all_ superblocks
> > > cannot write back their inodes (e.g. all in congestion) should we wait.
> > >
> > > Just before Jens' work, I had a patch to convert
> > >
> > > -	if (wbc.encountered_congestion || wbc.more_io)
> > > -		congestion_wait(WRITE, HZ/10);
> > > -	else
> > > -		break;
> > >
> > > to
> > >
> > > +	if (wbc->encountered_congestion && wbc->nr_to_write == MAX_WRITEBACK_PAGES)
> > > +		congestion_wait(WRITE, HZ/10);
> > >
> > > Note that wbc->encountered_congestion only means "at least one bdi
> > > encountered congestion". We may still make progress in other bdis,
> > > hence should not sleep.
> >
> > Hi,
> > encountered_congestion is only checked when nr_to_write > 0; if some
> > superblocks aren't congested, nr_to_write should be 0, right?
>
> Yeah, good spot! So the change only helps some corner cases.

Then it remains a problem why the IO submission is slow when !congested.

For example, this trace shows that the IO submission speed is about
4MB / 0.01s = 400MB/s. The workload is a plain copy, on a 2GHz Intel
Core 2 CPU.

[   71.487121] mm/page-writeback.c 761 background_writeout: comm=pdflush pid=343 n=-4096
[   71.489635] global dirty=65513 writeback=5925 nfs=1 flags=_M towrite=0 skipped=0
[   71.496019] redirty_tail +442: inode 79232
[   71.497432] mm/page-writeback.c 761 background_writeout: comm=pdflush pid=343 n=-5120
[   71.498890] global dirty=64490 writeback=6700 nfs=1 flags=_M towrite=0 skipped=0
[   71.506355] redirty_tail +442: inode 79232
[   71.508473] mm/page-writeback.c 761 background_writeout: comm=pdflush pid=343 n=-6144
[   71.510538] global dirty=62475 writeback=7599 nfs=1 flags=_M towrite=0 skipped=0
[   71.511910] redirty_tail +502: inode 3438
[   71.512846] redirty_tail +502: inode 1920

Thanks,
Fengguang