From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 26 Mar 2015 15:35:20 +0100
From: Christoph Hellwig
To: Dan Williams
Cc: Christoph Hellwig, linux-nvdimm, linux-fsdevel,
	"linux-kernel@vger.kernel.org", X86 ML, Jens Axboe
Subject: Re: [Linux-nvdimm] [PATCH 1/3] pmem: Initial version of persistent memory driver
Message-ID: <20150326143520.GB1557@lst.de>
References: <1427358764-6126-1-git-send-email-hch@lst.de>
 <1427358764-6126-2-git-send-email-hch@lst.de>
In-Reply-To:
User-Agent: Mutt/1.5.17 (2007-11-01)

On Thu, Mar 26, 2015 at 07:12:23AM -0700, Dan Williams wrote:
> > +	struct resource *res_mem;
> > +	int err;
> > +
> > +	res_mem = request_mem_region_exclusive(pmem->phys_addr, pmem->size,
> > +					       "pmem");
>
> Isn't request_mem_region() enough?  i.e. it seems
> request_mem_region_exclusive() assumes no DAX, at least in theory?

This is 1:1 from the patch Ross sent, but I've been wondering why
request_mem_region_exclusive is used here.  All it does is set the
IORESOURCE_EXCLUSIVE flag, which prevents /dev/mem and sysfs from
accessing the memory while the driver claims it.  Besides pmem, only a
watchdog driver and e1000 make use of this flag, and there are various
functions related to it that are entirely unused.  It's a weird beast.

> This is fine for now, but I think we're going to end up with a
> continuum of solutions to this problem based on the platform and the
> device.  Some ADR platforms have firmware that takes actions like
> flushing caches on a "power going away" signal.  Other platforms have
> cache management instructions that we can use on either a per-i/o or
> per REQ_FUA/FLUSH request.  Hmm, with this being in the memory map by
> default I think this poses a challenge for VIVT caches and aliased
> accesses?  We can revisit this when arm support shows up.

Yes, I expect us to pass flags related to this through the
platform_data eventually, but I think that starting with the simplest
and safest version is probably the best idea.
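
Just to make the first point concrete, dropping the _exclusive variant
would look something like the sketch below (pmem_device fields as in
the quoted patch, error handling only illustrative):

#include <linux/ioport.h>

static int pmem_claim_region(struct pmem_device *pmem)
{
	struct resource *res;

	/*
	 * Plain request_mem_region() still reserves the range for the
	 * driver, but it does not set IORESOURCE_EXCLUSIVE, so /dev/mem
	 * and sysfs access to the range are not blocked while we hold it.
	 */
	res = request_mem_region(pmem->phys_addr, pmem->size, "pmem");
	if (!res)
		return -EBUSY;

	return 0;
}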
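
And for the platform_data flags I'd imagine something along these lines
eventually; the struct and flag names here are made up, just to sketch
the direction:

#include <linux/types.h>

/* hypothetical platform_data describing the platform's flush behaviour */
#define PMEM_PDATA_FW_ADR_FLUSH		(1UL << 0)	/* firmware flushes caches on power loss */
#define PMEM_PDATA_NEED_WB_FLUSH	(1UL << 1)	/* driver must write back caches on REQ_FLUSH/REQ_FUA */

struct pmem_platform_data {
	phys_addr_t	phys_addr;
	size_t		size;
	unsigned long	flags;		/* PMEM_PDATA_* */
};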