From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 1 Mar 2018 16:10:36 -0500
From: Jerome Glisse <jglisse@redhat.com>
To: Logan Gunthorpe <logang@deltatee.com>
Cc: Jens Axboe, Keith Busch, Oliver OHalloran, Benjamin Herrenschmidt, linux-nvdimm@lists.01.org, linux-rdma@vger.kernel.org, linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, Alex Williamson, Jason Gunthorpe, Bjorn Helgaas, Max Gurtovoy, Christoph Hellwig
Subject: Re: [PATCH v2 00/10] Copy Offload in NVMe Fabrics with P2P PCI Memory
Message-ID: <20180301211036.GB6742@redhat.com>
References: <20180228234006.21093-1-logang@deltatee.com> <1519876489.4592.3.camel@kernel.crashing.org> <1519876569.4592.4.camel@au1.ibm.com> <8e808448-fc01-5da0-51e7-1a6657d5a23a@deltatee.com> <1519936195.4592.18.camel@au1.ibm.com> <20180301205548.GA6742@redhat.com>

On Thu, Mar 01, 2018 at 02:03:26PM -0700, Logan Gunthorpe wrote:
>
>
> On 01/03/18 01:55 PM, Jerome Glisse wrote:
> > Well this is again a new user of struct page for device memory, just for
> > one use case. I wanted HMM to be more versatile so that it could be used
> > for this kind of thing too. I guess the message didn't go through. I
> > will take some cycles tomorrow to look into this patchset to ascertain
> > how struct page is used in this context.
>
> We looked at it but didn't see how any of it was applicable to our needs.
>

It seems people misunderstand HMM :( You do not have to use all of its
features. If all you care about is having struct page, then just use that
part. In your case you would only use three functions: hmm_devmem_add() or
hmm_devmem_add_resource(), and hmm_devmem_remove() for cleanup. You can set
the fault callback to an empty stub that always returns VM_FAULT_SIGBUS, or
send a patch to allow a NULL callback inside HMM. You don't have to use the
free callback if you don't care, and if there is something that doesn't
quite match what you want, HMM can always be adjusted to address it. The
intention of HMM is to be useful for all device memory that wishes to have
struct page for various reasons.
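For illustration, here is a minimal sketch of the stub approach described
above, written against the hmm_devmem API as it looked around the 4.15/4.16
kernels (the era of this thread); the p2p_devmem_* names, the device pointer,
and the BAR resource are hypothetical placeholders, not anything from Logan's
patchset:

    #include <linux/hmm.h>
    #include <linux/mm.h>

    /* Stub fault handler: this device memory is never migrated on a CPU
     * fault, so any fault that reaches here is a hard error. */
    static int p2p_devmem_fault(struct hmm_devmem *devmem,
                                struct vm_area_struct *vma,
                                unsigned long addr,
                                const struct page *page,
                                unsigned int flags,
                                pmd_t *pmdp)
    {
            return VM_FAULT_SIGBUS;
    }

    /* Empty free callback: nothing to do when the driver keeps no
     * per-page state. */
    static void p2p_devmem_free(struct hmm_devmem *devmem, struct page *page)
    {
    }

    static const struct hmm_devmem_ops p2p_devmem_ops = {
            .free  = p2p_devmem_free,
            .fault = p2p_devmem_fault,
    };

    /* Hypothetical driver hook: carve struct pages out of a PCI BAR
     * resource. hmm_devmem_add_resource() hotplugs the resource as
     * device memory and allocates struct page entries for it. */
    static struct hmm_devmem *p2p_devmem_setup(struct device *dev,
                                               struct resource *bar_res)
    {
            return hmm_devmem_add_resource(&p2p_devmem_ops, dev, bar_res);
    }

    static void p2p_devmem_teardown(struct hmm_devmem *devmem)
    {
            hmm_devmem_remove(devmem);
    }

That is the entire surface a driver would need to touch to get struct page
backing for its memory; none of the migration or mirroring machinery is
involved.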
Cheers,
Jérôme