From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <1519946767.4592.49.camel@kernel.crashing.org>
Subject: Re: [PATCH v2 00/10] Copy Offload in NVMe Fabrics with P2P PCI Memory
From: Benjamin Herrenschmidt
To: Logan Gunthorpe, Dan Williams
Cc: Jens Axboe, Keith Busch, Oliver OHalloran, Alex Williamson,
    linux-nvdimm, linux-rdma, linux-pci@vger.kernel.org,
    Linux Kernel Mailing List, linux-nvme@lists.infradead.org,
    linux-block@vger.kernel.org, Jérôme Glisse, Jason Gunthorpe,
    Bjorn Helgaas, Max Gurtovoy, Christoph Hellwig
Date: Fri, 02 Mar 2018 10:26:07 +1100
In-Reply-To: <595acefb-18fc-e650-e172-bae271263c4c@deltatee.com>
References: <20180228234006.21093-1-logang@deltatee.com>
 <1519876489.4592.3.camel@kernel.crashing.org>
 <1519876569.4592.4.camel@au1.ibm.com>
 <1519936477.4592.23.camel@au1.ibm.com>
 <2079ba48-5ae5-5b44-cce1-8175712dd395@deltatee.com>
 <43ba615f-a6e1-9444-65e1-494169cb415d@deltatee.com>
 <1519945204.4592.45.camel@au1.ibm.com>
 <595acefb-18fc-e650-e172-bae271263c4c@deltatee.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, 2018-03-01 at 16:19 -0700, Logan Gunthorpe wrote:

(Switching back to my non-IBM address ...)

> On 01/03/18 04:00 PM, Benjamin Herrenschmidt wrote:
> > We use only 52 in practice but yes.
> >
> > > That's 64PB. If you need a sparse vmemmap for the entire space, it
> > > will take 16TB, which leaves you with 63.98PB of address space left.
> > > (Similar calculations apply for other numbers of address bits.)
> >
> > We only have 52 bits of virtual space for the kernel with the radix
> > MMU.
>
> Ok, assuming you only have 52 bits of physical address space: the sparse
> vmemmap takes 1TB and you're left with 3.9PB of address space for other
> things. So, again, why doesn't that work? Is my math wrong?

The big problem is not the vmemmap, it's the linear mapping.

Cheers,
Ben.
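For reference, the sizing argument in the quoted text can be sketched numerically. This is only an illustrative back-of-envelope calculation, assuming 4 KiB pages and a 64-byte `struct page`; the figures quoted in the thread evidently assume different parameters, and real values depend on the architecture and kernel config:

```python
# Back-of-envelope vmemmap sizing: a fully populated vmemmap needs one
# struct page entry per physical page frame.
PAGE_SIZE = 4096          # bytes per page (assumed, not from the thread)
STRUCT_PAGE_SIZE = 64     # bytes per struct page (assumed, not from the thread)

def vmemmap_bytes(phys_addr_bits: int) -> int:
    """Size of a fully populated vmemmap covering 2**phys_addr_bits bytes."""
    nr_pages = (1 << phys_addr_bits) // PAGE_SIZE
    return nr_pages * STRUCT_PAGE_SIZE

TIB = 1 << 40
# With these assumptions, covering 52 bits of physical address space
# costs a 64 TiB vmemmap -- 1/64 (~1.6%) of the space it describes.
print(vmemmap_bytes(52) // TIB)   # -> 64
```

Whatever the exact constants, the vmemmap is a small fixed fraction of the physical space it describes; the linear mapping, by contrast, needs virtual space equal to the full physical space it covers, which is why Ben points to it as the real constraint.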