From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greentime Hu
Subject: Re: [PATCH v6 16/36] nds32: DMA mapping API
Date: Thu, 25 Jan 2018 11:45:18 +0800
Message-ID:
References:
Mime-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Return-path:
In-Reply-To:
Sender: linux-arch-owner@vger.kernel.org
To: Arnd Bergmann
Cc: Greentime, Linux Kernel Mailing List, linux-arch, Thomas Gleixner, Jason Cooper, Marc Zyngier, Rob Herring, Networking, Vincent Chen, DTML, Al Viro, David Howells, Will Deacon, Daniel Lezcano, linux-serial@vger.kernel.org, Geert Uytterhoeven, Linus Walleij, Mark Rutland, Greg KH, Guo Ren
List-Id: devicetree@vger.kernel.org

Hi, Arnd:

2018-01-24 19:36 GMT+08:00 Arnd Bergmann :
> On Tue, Jan 23, 2018 at 12:52 PM, Greentime Hu wrote:
>> Hi, Arnd:
>>
>> 2018-01-23 16:23 GMT+08:00 Greentime Hu :
>>> Hi, Arnd:
>>>
>>> 2018-01-18 18:26 GMT+08:00 Arnd Bergmann :
>>>> On Mon, Jan 15, 2018 at 6:53 AM, Greentime Hu wrote:
>>>>> From: Greentime Hu
>>>>>
>>>>> This patch adds support for the DMA mapping API. It uses dma_map_ops for
>>>>> flexibility.
>>>>>
>>>>> Signed-off-by: Vincent Chen
>>>>> Signed-off-by: Greentime Hu
>>>>
>>>> I'm still unhappy about the way the cache flushes are done here as discussed
>>>> before. It's not a show-stopper, but no Ack from me.
>>>
>>> How about this implementation?
>
>> I am not sure if I understand it correctly.
>> I list all the combinations.
>>
>> RAM to DEVICE
>>   before DMA => writeback cache
>>   after DMA  => nop
>>
>> DEVICE to RAM
>>   before DMA => nop
>>   after DMA  => invalidate cache
>>
>> static void consistent_sync(void *vaddr, size_t size, int direction, int master)
>> {
>>         unsigned long start = (unsigned long)vaddr;
>>         unsigned long end = start + size;
>>
>>         if (master == FOR_CPU) {
>>                 switch (direction) {
>>                 case DMA_TO_DEVICE:
>>                         break;
>>                 case DMA_FROM_DEVICE:
>>                 case DMA_BIDIRECTIONAL:
>>                         cpu_dma_inval_range(start, end);
>>                         break;
>>                 default:
>>                         BUG();
>>                 }
>>         } else {
>>                 /* FOR_DEVICE */
>>                 switch (direction) {
>>                 case DMA_FROM_DEVICE:
>>                         break;
>>                 case DMA_TO_DEVICE:
>>                 case DMA_BIDIRECTIONAL:
>>                         cpu_dma_wb_range(start, end);
>>                         break;
>>                 default:
>>                         BUG();
>>                 }
>>         }
>> }
>
> That looks reasonable enough, but it does depend on a number of factors,
> and the dma-mapping.h implementation is not just about cache flushes.
>
> As I don't know the microarchitecture, can you answer these questions:
>
> - are caches always write-back, or could they be write-through?

Yes, the caches can be configured as either write-back or write-through.

> - can the cache be shared with another CPU or a device?

No, we don't support that.

> - if the cache is shared, is it always coherent, never coherent, or
>   either of them?

We don't support SMP, and devices access memory through the bus, so I
think the cache is not shared.

> - could the same memory be visible at different physical addresses
>   and have conflicting caches?

We currently don't have that kind of SoC memory map.

> - is the CPU physical address always the same as the address visible to the
>   device?

Yes, it is always the same unless the CPU uses local memory. The physical
address of local memory will overlap the original bus address. I think the
local memory case can be ignored because we don't use it for now.

> - are there devices that can only see a subset of the physical memory?

All devices can see the whole physical memory in our current SoC, but I
think other SoCs may support that kind of hardware behavior.

> - can there be an IOMMU?

No.
> - are there write-buffers in the CPU that might need to get flushed before
>   flushing the cache?

Yes, there are write buffers in front of the CPU caches, but they should be
transparent to software; we don't need to flush them explicitly.

> - could cache lines be loaded speculatively or with read-ahead while
>   a buffer is owned by a device?

No.
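
For reference, below is a rough, untested sketch of how consistent_sync()
above could be wired into the arch's dma_map_ops callbacks. The nds32_*
function names and the dma_to_virt() helper are illustrative only, not the
exact code in the patch:

static void nds32_dma_sync_single_for_cpu(struct device *dev,
                                          dma_addr_t handle, size_t size,
                                          enum dma_data_direction dir)
{
        /* CPU is about to touch the buffer: invalidate stale lines for
         * DMA_FROM_DEVICE / DMA_BIDIRECTIONAL, nop for DMA_TO_DEVICE. */
        consistent_sync(dma_to_virt(dev, handle), size, dir, FOR_CPU);
}

static void nds32_dma_sync_single_for_device(struct device *dev,
                                             dma_addr_t handle, size_t size,
                                             enum dma_data_direction dir)
{
        /* Device is about to touch the buffer: write back dirty lines for
         * DMA_TO_DEVICE / DMA_BIDIRECTIONAL, nop for DMA_FROM_DEVICE. */
        consistent_sync(dma_to_virt(dev, handle), size, dir, FOR_DEVICE);
}

static dma_addr_t nds32_dma_map_page(struct device *dev, struct page *page,
                                     unsigned long offset, size_t size,
                                     enum dma_data_direction dir,
                                     unsigned long attrs)
{
        void *vaddr = page_address(page) + offset;

        if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
                consistent_sync(vaddr, size, dir, FOR_DEVICE);

        /* CPU physical address == bus address on this SoC (see above). */
        return page_to_phys(page) + offset;
}

struct dma_map_ops nds32_dma_ops = {
        .map_page               = nds32_dma_map_page,
        .sync_single_for_cpu    = nds32_dma_sync_single_for_cpu,
        .sync_single_for_device = nds32_dma_sync_single_for_device,
        /* alloc/free, map_sg, unmap_page, etc. omitted */
};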