From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from ale.deltatee.com (ale.deltatee.com [207.54.116.67]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ml01.01.org (Postfix) with ESMTPS id 4759F22546BA6 for ; Thu, 1 Mar 2018 13:51:18 -0800 (PST)
From: Logan Gunthorpe
References: <20180228234006.21093-1-logang@deltatee.com> <1519876489.4592.3.camel@kernel.crashing.org> <1519876569.4592.4.camel@au1.ibm.com> <1519936477.4592.23.camel@au1.ibm.com> <2079ba48-5ae5-5b44-cce1-8175712dd395@deltatee.com>
Message-ID: <43ba615f-a6e1-9444-65e1-494169cb415d@deltatee.com>
Date: Thu, 1 Mar 2018 14:57:06 -0700
MIME-Version: 1.0
In-Reply-To: <2079ba48-5ae5-5b44-cce1-8175712dd395@deltatee.com>
Content-Language: en-US
Subject: Re: [PATCH v2 00/10] Copy Offload in NVMe Fabrics with P2P PCI Memory
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Errors-To: linux-nvdimm-bounces@lists.01.org
Sender: "Linux-nvdimm"
To: Dan Williams , benh@au1.ibm.com
Cc: Jens Axboe , linux-block@vger.kernel.org, Oliver OHalloran , linux-nvdimm , linux-rdma , linux-pci@vger.kernel.org, Linux Kernel Mailing List , linux-nvme@lists.infradead.org, Keith Busch , Alex Williamson , Jason Gunthorpe , =?UTF-8?B?SsOpcsO0bWUgR2xpc3Nl?= , Bjorn Helgaas , Max Gurtovoy , Christoph Hellwig
List-ID:

On 01/03/18 02:45 PM, Logan Gunthorpe wrote:
> It handles it fine for many situations. But when you try to map
> something that is at the end of the physical address space then the
> sparse vmemmap needs virtual address space that's the size of the
> physical address space divided by PAGE_SIZE which may be a little bit
> too large...

Though, considering this more, maybe this shouldn't be a problem... Let's say you have 56 bits of address space. That's 64PB.
If you need a sparse vmemmap for the entire space it will take 16TB, which leaves you with 63.98PB of address space. (Similar calculations apply for other numbers of address bits.)

So I'm not sure what the problem with this is. We still have to ensure all the arches map the memory with the right cache bits, but that should be relatively easy to solve.

Logan
_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm