From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Date: Tue, 3 Nov 2015 17:21:19 +0100
From: Jan Kara 
Subject: Re: [PATCH v3 02/15] dax: increase granularity of dax_clear_blocks() operations
Message-ID: <20151103162119.GE4063@quack.suse.cz>
References: <20151102042941.6610.27784.stgit@dwillia2-desk3.amr.corp.intel.com>
 <20151102042952.6610.7185.stgit@dwillia2-desk3.amr.corp.intel.com>
 <20151103005113.GN10656@dastard>
 <20151103044802.GP10656@dastard>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: 
Sender: linux-kernel-owner@vger.kernel.org
To: Dan Williams 
Cc: Dave Chinner , Jens Axboe , Jan Kara ,
 "linux-nvdimm@lists.01.org" , "linux-kernel@vger.kernel.org" ,
 Jeff Moyer , Jan Kara , Ross Zwisler , Christoph Hellwig 
List-ID: 

On Mon 02-11-15 21:31:11, Dan Williams wrote:
> On Mon, Nov 2, 2015 at 8:48 PM, Dave Chinner wrote:
> > On Mon, Nov 02, 2015 at 07:27:26PM -0800, Dan Williams wrote:
> >> On Mon, Nov 2, 2015 at 4:51 PM, Dave Chinner wrote:
> >> > On Sun, Nov 01, 2015 at 11:29:53PM -0500, Dan Williams wrote:
> >> > The zeroing (and the data, for that matter) doesn't need to be
> >> > committed to persistent store until the allocation is written and
> >> > committed to the journal - that will happen with a REQ_FLUSH|REQ_FUA
> >> > write, so it makes sense to deploy the big hammer and delay the
> >> > blocking CPU cache flushes until the last possible moment in cases
> >> > like this.
> >>
> >> In pmem terms that would be a non-temporal memset plus a delayed
> >> wmb_pmem at REQ_FLUSH time. Better to write around the cache than
> >> loop over the dirty-data issuing flushes after the fact. We'll bump
> >> the priority of the non-temporal memset implementation.
> >
> > Why is it better to do two synchronous physical writes to memory
> > within a couple of microseconds of CPU time rather than writing them
> > through the cache and, in most cases, only doing one physical write
> > to memory in a separate context that expects to wait for a flush
> > to complete?
>
> With a switch to non-temporal writes they wouldn't be synchronous,
> although it's doubtful that the subsequent writes after zeroing would
> also hit the store buffer.
>
> If we had a method to flush by physical-cache-way rather than a
> virtual address then it would indeed be better to save up for one
> final flush, but when we need to resort to looping through all the
> virtual addresses that might have been touched it gets expensive.

Similarly to Dave, I'm somewhat confused by your use of "virtual
addresses" and I wasn't able to figure out what exactly you are speaking
about. In Ross' patches, fsync will iterate over all 4 KB ranges (they
would be pages if we had page cache) of the file that got dirtied and
call wb_cache_pmem() for each corresponding "physical block" - where
"physical block" actually ends up being a physical address in pmem. Is
this iteration what you find too costly?
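
For concreteness, the loop shape I have in mind is roughly the following
(a simplified sketch only, not the code from Ross' patches; the helper
signatures and the range tracking are assumptions for illustration):

/*
 * Sketch of the fsync-side writeback being discussed.  Assumes
 * wb_cache_pmem(vaddr, size) writes back the CPU cache for the given
 * pmem range and that dirty file ranges are tracked at 4 KB granularity.
 */
static void dax_fsync_writeback_sketch(void __pmem *pmem_base,
				       sector_t first_sector,
				       sector_t last_sector)
{
	sector_t s;

	/* One wb_cache_pmem() call per dirtied 4 KB range of the file. */
	for (s = first_sector; s <= last_sector; s += PAGE_SIZE >> 9) {
		/* "physical block" -> physical address in pmem */
		void __pmem *vaddr = pmem_base + ((u64)s << 9);

		wb_cache_pmem(vaddr, PAGE_SIZE);
	}

	/* Single ordering point once all dirty ranges are written back. */
	wmb_pmem();
}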

								Honza
--
Jan Kara
SUSE Labs, CR