From: Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH v2 0/7] Stop losing firmware-set DMA masks
Date: Mon, 6 Aug 2018 14:13:34 +0200
Message-ID: <20180806121334.GA5340@lst.de>
References: <1ccccc4b-7d4c-a3ee-23a2-f108916705e9@arm.com>
To: Arnd Bergmann
Cc: Frank Rowand, gregkh, the arch/x86 maintainers,
 ACPI Devel Mailing List, "open list:IOMMU DRIVERS", Rob Herring,
 Sudeep Holla, Robin Murphy, Christoph Hellwig, Linux ARM
List-Id: linux-acpi@vger.kernel.org

On Mon, Aug 06, 2018 at 12:01:34PM +0200, Arnd Bergmann wrote:
> There are a few subtle corner cases here, in particular in which cases
> the new dma_set_mask() behavior on arm64 reports success or
> failure when truncating the mask to the bus_dma_mask.

Going forward, my plan was to make dma_set_mask() never fail.  The idea
is that it sets the mask the device is capable of, and the core dma
code is responsible for also looking at bus_dma_mask and otherwise
making things just work.

Robin brought up the case where a platform can never handle a given
limitation, e.g. a PCI(e) device with a 24-bit dma mask on a platform
whose dma offset means we'll never have any physical memory reachable
in that range.  So we'll either still need to allow dma_set_mask to
fail for such corner cases, or delay such an error until later, e.g.
when dma_alloc_* (or in the corner case of the corner case dma_map_*)
is called.

I'm still undecided which way to go, but not allowing error returns
from dma_set_mask and its variants sounds very tempting.
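
To make the intended semantics concrete, here is a minimal sketch of
the model I have in mind.  Illustration only, not the actual
implementation, and dma_effective_mask() is a made-up helper name:

	#include <linux/device.h>
	#include <linux/dma-mapping.h>

	/*
	 * Never fails: just records what the device itself can
	 * address.  Real code would also check that dev->dma_mask
	 * is non-NULL before dereferencing it.
	 */
	int dma_set_mask(struct device *dev, u64 mask)
	{
		*dev->dma_mask = mask;
		return 0;
	}

	/* made-up helper: the mask the core would actually honor */
	static u64 dma_effective_mask(struct device *dev)
	{
		u64 mask = *dev->dma_mask;

		/* a bus_dma_mask of 0 means no firmware/bus limit */
		if (dev->bus_dma_mask)
			mask &= dev->bus_dma_mask;
		return mask;
	}

In Robin's example the effective mask would cover no physical memory
at all, which is exactly where the question of failing dma_set_mask
up front vs. erroring out of dma_alloc_*/dma_map_* later comes from.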