From mboxrd@z Thu Jan 1 00:00:00 1970
From: Arnd Bergmann
To: linux-arm-kernel@lists.infradead.org
Cc: Christoph Hellwig, Nikita Yushchenko, Keith Busch, Sagi Grimberg,
 Jens Axboe, Catalin Marinas, Will Deacon, linux-kernel@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-renesas-soc@vger.kernel.org,
 Simon Horman, linux-pci@vger.kernel.org, Bjorn Helgaas,
 artemi.ivanov@cogentembedded.com
Subject: Re: NVMe vs DMA addressing limitations
Date: Tue, 10 Jan 2017 16:02:28 +0100
Message-ID: <9020459.Ga31IGQ4TP@wuerfel>
In-Reply-To: <20170110144839.GB27156@lst.de>
References: <1483044304-2085-1-git-send-email-nikita.yoush@cogentembedded.com>
 <4137257.d2v87kqLLv@wuerfel> <20170110144839.GB27156@lst.de>

On Tuesday, January 10, 2017 3:48:39 PM CET Christoph Hellwig wrote:
> On Tue, Jan 10, 2017 at 12:01:05PM +0100, Arnd Bergmann wrote:
> > Another workaround we might need is to limit the amount of concurrent
> > DMA in the NVMe driver based on some platform quirk. The way that NVMe
> > works, it can have very large amounts of data concurrently mapped into
> > the device.
>
> That's not really just NVMe - other storage and network controllers can
> also DMA map giant amounts of memory. There are a couple of aspects to it:
>
> - dma coherent memory - right now NVMe doesn't use too much of it,
>   but upcoming low-end NVMe controllers will soon start to require
>   fairly large amounts of it for the host memory buffer feature that
>   allows for DRAM-less controller designs. As an interesting quirk,
>   that is memory only used by the PCIe devices, and never accessed
>   by the Linux host at all.

Right, that is going to become interesting, as some platforms are very
limited with their coherent allocations.
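
To make that concrete, the sketch below shows how a driver could try to
carve a host-memory-buffer-style region out of dma coherent memory and
back off when the platform's coherent pool is too small. This is not the
actual NVMe HMB code: struct hmb_buf, hmb_alloc()/hmb_free() and the
halving fallback are invented for the example; only dma_alloc_coherent()
and dma_free_coherent() are the real DMA API.

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/gfp.h>

struct hmb_buf {
	void		*cpu_addr;
	dma_addr_t	dma_addr;
	size_t		size;
};

/*
 * Ask for "preferred" bytes of coherent memory for the device, halving
 * the request until the platform's coherent pool can satisfy it, and
 * giving up below "minimum".
 */
static int hmb_alloc(struct device *dev, struct hmb_buf *buf,
		     size_t preferred, size_t minimum)
{
	size_t size = preferred;

	while (size >= minimum) {
		buf->cpu_addr = dma_alloc_coherent(dev, size,
						   &buf->dma_addr, GFP_KERNEL);
		if (buf->cpu_addr) {
			buf->size = size;
			return 0;
		}
		/* Coherent pool too small on this platform: try less. */
		size /= 2;
	}
	return -ENOMEM;
}

static void hmb_free(struct device *dev, struct hmb_buf *buf)
{
	dma_free_coherent(dev, buf->size, buf->cpu_addr, buf->dma_addr);
}

Only buf->dma_addr would ever be handed to the controller; as noted above,
the host never reads or writes this memory, it just donates it.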

> - size vs number of the dynamic mappings. We probably want the dma_ops
>   to specify a maximum mapping size for a given device. As long as we
>   can make progress with a few mappings, swiotlb / the IOMMU can just
>   fail the mapping and the driver will propagate that to the block
>   layer, which throttles I/O.

Good idea.
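
Something like the sketch below is roughly what I would expect for the
fail-and-throttle half of that. It is not the real NVMe queue_rq path,
it uses today's blk-mq API (blk_status_t / BLK_STS_RESOURCE) rather than
the one from this thread's timeframe, and struct my_queue / struct my_iod
are invented for the example; the point is only the error path after
dma_map_sg() fails.

#include <linux/blk-mq.h>
#include <linux/blkdev.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

struct my_queue {
	struct device		*dev;
	struct request_queue	*q;
};

struct my_iod {
	struct scatterlist	*sg;	/* allocated at init time */
	int			nents;
};

static blk_status_t my_queue_rq(struct blk_mq_hw_ctx *hctx,
				const struct blk_mq_queue_data *bd)
{
	struct my_queue *mq = hctx->driver_data;
	struct request *req = bd->rq;
	struct my_iod *iod = blk_mq_rq_to_pdu(req);
	int nents;

	blk_mq_start_request(req);

	sg_init_table(iod->sg, blk_rq_nr_phys_segments(req));
	nents = blk_rq_map_sg(mq->q, req, iod->sg);

	iod->nents = dma_map_sg(mq->dev, iod->sg, nents, rq_dma_dir(req));
	if (!iod->nents) {
		/*
		 * swiotlb / the IOMMU could not map this request right now,
		 * e.g. because bounce buffers or IOVA space are exhausted.
		 * Returning BLK_STS_RESOURCE makes the block layer requeue
		 * the request and back off instead of failing the I/O.
		 */
		return BLK_STS_RESOURCE;
	}

	/* ... build and submit the hardware command here ... */
	return BLK_STS_OK;
}

As long as a few requests can still be mapped the device keeps making
forward progress, and the block layer naturally limits how much memory
is mapped concurrently.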

	Arnd