From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 26 Jun 2020 12:20:16 +0200
From: Michal Suchánek
To: Mikulas Patocka
Cc: Jan Kara, linux-nvdimm, "Aneesh Kumar K.V", Jeff Moyer, alistair@popple.id.au,
	Dan Williams, linuxppc-dev
Subject: Re: [PATCH v2 3/5] libnvdimm/nvdimm/flush: Allow architecture to
 override the flush barrier
Message-ID: <20200626102016.GP21462@kitsune.suse.cz>
References: <87d070f2vs.fsf@linux.ibm.com>
 <20200522093127.GY25173@kitsune.suse.cz>
 <23e57565-be2a-a45c-f4d4-d8eca7262dea@linux.ibm.com>
In-Reply-To: 
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
User-Agent: Mutt/1.10.1 (2018-07-13)
List-Id: Linux on PowerPC Developers Mail List

On Fri, May 22, 2020 at 09:01:17AM -0400, Mikulas Patocka wrote:
> 
> 
> On Fri, 22 May 2020, Aneesh Kumar K.V wrote:
> 
> > On 5/22/20 3:01 PM, Michal Suchánek wrote:
> > > On Thu, May 21, 2020 at 02:52:30PM -0400, Mikulas Patocka wrote:
> > > > 
> > > > 
> > > > On Thu, 21 May 2020, Dan Williams wrote:
> > > > 
> > > > > On Thu, May 21, 2020 at 10:03 AM Aneesh Kumar K.V
> > > > > wrote:
> > > > > > 
> > > > > > > Moving on to the patch itself--Aneesh, have you audited other
> > > > > > > persistent memory users in the kernel?
> > > > > > > For example, drivers/md/dm-writecache.c does this:
> > > > > > > 
> > > > > > > static void writecache_commit_flushed(struct dm_writecache *wc, bool wait_for_ios)
> > > > > > > {
> > > > > > >         if (WC_MODE_PMEM(wc))
> > > > > > >                 wmb();  <==========
> > > > > > >         else
> > > > > > >                 ssd_commit_flushed(wc, wait_for_ios);
> > > > > > > }
> > > > > > > 
> > > > > > > I believe you'll need to make modifications there.
> > > > > > 
> > > > > > Correct. Thanks for catching that.
> > > > > > 
> > > > > > I don't understand dm much, wondering how this will work with a
> > > > > > non-synchronous DAX device?
> > > > > 
> > > > > That's a good point. DM-writecache needs to be cognizant of things
> > > > > like virtio-pmem that violate the rule that persistent memory writes
> > > > > can be flushed by CPU functions rather than calling back into the
> > > > > driver. It seems we need to always make the flush case a dax_operation
> > > > > callback to account for this.
> > > > 
> > > > dm-writecache is normally sitting on the top of dm-linear, so it would
> > > > need to pass the wmb() call through the dm core and dm-linear target ...
> > > > that would slow it down ... I remember that you already did it this way
> > > > some time ago and then removed it.
> > > > 
> > > > What's the exact problem with POWER? Could the POWER system have two types
> > > > of persistent memory that need two different ways of flushing?
> > > 
> > > As far as I understand the discussion so far
> > > 
> > > - on POWER $oldhardware uses $oldinstruction to ensure pmem consistency
> > > - on POWER $newhardware uses $newinstruction to ensure pmem consistency
> > >   (compatible with $oldinstruction on $oldhardware)
> > 
> > Correct.
> > 
> > > - on some platforms instead of a barrier instruction a callback into the
> > >   driver is issued to ensure consistency
> > 
> > This is virtio-pmem only at this point IIUC.
> > 
> > -aneesh
> 
> And does the virtio-pmem driver track which pages are dirty? Or does it
> need to specify the range of pages to flush in the flush function?
> 
> > > None of this is reflected by the dm driver.
> 
> We could make a new dax method:
> void *(dax_get_flush_function)(void);
> 
> This would return a pointer to "wmb()" on x86 and something else on Power.
> 
> The method "dax_get_flush_function" would be called only once when
> initializing the writecache driver (because the call would be slow because
> it would have to go through the DM stack) and then, the returned function
> would be called each time we need write ordering. The returned function
> would do just "sfence; ret".

Hello,

as far as I understand the code, virtio_pmem has a flush function defined
which indeed can make use of the region properties, such as the memory
range. If such a function exists you need an equivalent of sync() - a call
into the device in question. If it does not, calling
arch_pmem_flush_barrier() instead of wmb() should suffice.

I am not aware of an interface to determine whether the flush function
exists for a particular region.

Thanks

Michal
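
To make the dax_get_flush_function idea quoted above concrete, here is a
minimal sketch. Everything in it is hypothetical - pmem_flush_fn,
generic_pmem_flush and this dax_get_flush_function signature do not exist
in the kernel; the point is only that the flush primitive would be resolved
once through the DM stack and the returned pointer then called directly on
every commit:

#include <linux/dax.h>
#include <asm/barrier.h>

typedef void (*pmem_flush_fn)(void);

/*
 * Hypothetical generic implementation: on x86 wmb() is an sfence, which is
 * enough to order pmem stores; POWER would return a different,
 * architecture-specific routine here instead.
 */
static void generic_pmem_flush(void)
{
	wmb();
}

/*
 * Hypothetical dax operation: called once while the writecache target is
 * being constructed (the slow walk through the DM stack happens only here);
 * the returned pointer is cached in the target context and called in place
 * of the current wmb() on every commit.
 */
static pmem_flush_fn dax_get_flush_function(struct dax_device *dax_dev)
{
	return generic_pmem_flush;
}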
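
And a corresponding sketch of the closing suggestion, applied to
writecache_commit_flushed() in the context of drivers/md/dm-writecache.c.
arch_pmem_flush_barrier() is the helper from the patch series under
discussion; the pmem_has_flush field and dax_region_flush() call are
invented purely to stand in for "the region has a driver flush callback
(e.g. virtio-pmem), so call into the device" - the interface to detect that
is exactly what is noted as missing above:

static void writecache_commit_flushed(struct dm_writecache *wc, bool wait_for_ios)
{
	if (WC_MODE_PMEM(wc)) {
		if (wc->pmem_has_flush)
			/* virtio-pmem style region: must call back into the driver */
			dax_region_flush(wc->ssd_dev->dax_dev);
		else
			/* ordinary pmem: an architecture barrier is enough */
			arch_pmem_flush_barrier();
	} else {
		ssd_commit_flushed(wc, wait_for_ios);
	}
}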