Date: Thu, 1 Mar 2018 16:25:42 -0500
From: Jerome Glisse
To: Logan Gunthorpe
Cc: Benjamin Herrenschmidt, linux-kernel@vger.kernel.org,
	linux-pci@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-rdma@vger.kernel.org, linux-nvdimm@lists.01.org,
	linux-block@vger.kernel.org, Stephen Bates, Christoph Hellwig,
	Jens Axboe, Keith Busch, Sagi Grimberg, Bjorn Helgaas,
	Jason Gunthorpe, Max Gurtovoy, Dan Williams, Alex Williamson,
	Oliver OHalloran
Subject: Re: [PATCH v2 00/10] Copy Offload in NVMe Fabrics with P2P PCI Memory
Message-ID: <20180301212541.GD6742@redhat.com>
In-Reply-To: <8ed955f8-55c9-a2bd-1d58-90bf1dcfa055@deltatee.com>
References: <20180228234006.21093-1-logang@deltatee.com>
 <1519876489.4592.3.camel@kernel.crashing.org>
 <1519876569.4592.4.camel@au1.ibm.com>
 <8e808448-fc01-5da0-51e7-1a6657d5a23a@deltatee.com>
 <1519936195.4592.18.camel@au1.ibm.com>
 <20180301205548.GA6742@redhat.com>
 <20180301211036.GB6742@redhat.com>
 <8ed955f8-55c9-a2bd-1d58-90bf1dcfa055@deltatee.com>

On Thu, Mar 01, 2018 at 02:15:01PM -0700, Logan Gunthorpe wrote:
>
> On 01/03/18 02:10 PM, Jerome Glisse wrote:
> > It seems people misunderstand HMM :( you do not have to use all of
> > its features. If all you care about is having struct page, then in
> > your case just use these three functions:
> >
> > hmm_devmem_add() or hmm_devmem_add_resource(), and hmm_devmem_remove()
> > for cleanup.
>
> To what benefit over just using devm_memremap_pages()? If I'm using the
> hmm interface and disabling all the features, I don't see the point.
> We've also cleaned up the devm_memremap_pages() interface to be more
> usefully generic, in such a way that I'd hope HMM starts using it too
> and gets rid of the code duplication.

The first HMM variant finds a hole itself and does not require a resource
as an input parameter. Besides that, devm_memremap_pages() does not do the
right thing internally for PCIe device memory, last time I checked: it
always creates a linear mapping of the range, i.e. HMM calls add_pages()
while devm_memremap_pages() calls arch_add_memory().

When I upstreamed HMM, Dan didn't want me to touch devm_memremap_pages()
to match my needs. I am more than happy to modify devm_memremap_pages()
to also handle HMM's needs.

Note that the intention of HMM is to be a middle layer between the
low-level infrastructure and device drivers. The idea is that such an
impedance layer should make it easier down the road to change how things
are handled down below without having to touch many device drivers.

Cheers,
Jérôme
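
P.S. For anyone following along, here is a rough sketch of the
struct-page-only usage mentioned above, written against my memory of the
~4.15 hmm.h interface. The hmm_devmem_ops callback signatures and every
my_* name are illustrative assumptions rather than code copied from a real
driver, so check them against the tree you are actually working on.

/*
 * Untested sketch: register device memory purely to get struct pages,
 * using only the three entry points named above.  Callback signatures
 * are from memory and may not match your kernel exactly.
 */
#include <linux/err.h>
#include <linux/mm.h>
#include <linux/hmm.h>

static void my_devmem_free(struct hmm_devmem *devmem, struct page *page)
{
	/* Hand the page back to the driver's own device-memory allocator. */
}

static int my_devmem_fault(struct hmm_devmem *devmem,
			   struct vm_area_struct *vma,
			   unsigned long addr,
			   const struct page *page,
			   unsigned int flags,
			   pmd_t *pmdp)
{
	/*
	 * Only reached if CPU access faults on device memory; a driver
	 * that never migrates pages there can simply fail the access.
	 */
	return VM_FAULT_SIGBUS;
}

static const struct hmm_devmem_ops my_devmem_ops = {
	.free  = my_devmem_free,
	.fault = my_devmem_fault,
};

static struct hmm_devmem *my_devmem;

static int my_driver_add_memory(struct device *dev, unsigned long size)
{
	/*
	 * hmm_devmem_add() finds a hole in the physical address space by
	 * itself, so no struct resource is passed in here; when the BAR
	 * resource is already known, hmm_devmem_add_resource() takes it
	 * explicitly instead.
	 */
	my_devmem = hmm_devmem_add(&my_devmem_ops, dev, size);
	if (IS_ERR(my_devmem))
		return PTR_ERR(my_devmem);
	return 0;
}

static void my_driver_remove_memory(void)
{
	/* Tear down the struct pages when the driver goes away. */
	hmm_devmem_remove(my_devmem);
}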