From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greentime Hu <green.hu@gmail.com>
Date: Tue, 23 Jan 2018 19:52:55 +0800
Subject: Re: [PATCH v6 16/36] nds32: DMA mapping API
To: Arnd Bergmann
Cc: Greentime, Linux Kernel Mailing List, linux-arch, Thomas Gleixner,
	Jason Cooper, Marc Zyngier, Rob Herring, Networking, Vincent Chen,
	DTML, Al Viro, David Howells, Will Deacon, Daniel Lezcano,
	linux-serial@vger.kernel.org, Geert Uytterhoeven, Linus Walleij,
	Mark Rutland, Greg KH, Guo Ren, Randy Dunlap, David Miller,
	Jonas Bonn, Stefan Kristiansson, Stafford Horne, Vincent Chen
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

Hi, Arnd:

2018-01-23 16:23 GMT+08:00 Greentime Hu:
> Hi, Arnd:
>
> 2018-01-18 18:26 GMT+08:00 Arnd Bergmann:
>> On Mon, Jan 15, 2018 at 6:53 AM, Greentime Hu wrote:
>>> From: Greentime Hu
>>>
>>> This patch adds support for the DMA mapping API. It uses dma_map_ops for
>>> flexibility.
>>>
>>> Signed-off-by: Vincent Chen
>>> Signed-off-by: Greentime Hu
>>
>> I'm still unhappy about the way the cache flushes are done here as discussed
>> before. It's not a show-stopper, but no Ack from me.
>
> How about this implementation?
>
> static void
> nds32_dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle,
>                               size_t size, enum dma_data_direction dir)
> {
>         unsigned long start = (unsigned long)phys_to_virt(handle);
>         unsigned long end = start + size;
>
>         switch (dir) {
>         case DMA_TO_DEVICE:             /* writeback only */
>                 break;
>         case DMA_FROM_DEVICE:           /* invalidate only */
>         case DMA_BIDIRECTIONAL:         /* writeback and invalidate */
>                 cpu_dma_inval_range(start, end);
>                 break;
>         default:
>                 BUG();
>         }
> }
>
> static void
> nds32_dma_sync_single_for_device(struct device *dev, dma_addr_t handle,
>                                  size_t size, enum dma_data_direction dir)
> {
>         unsigned long start = (unsigned long)phys_to_virt(handle);
>         unsigned long end = start + size;
>
>         switch (dir) {
>         case DMA_FROM_DEVICE:           /* invalidate only */
>                 break;
>         case DMA_TO_DEVICE:             /* writeback only */
>         case DMA_BIDIRECTIONAL:         /* writeback and invalidate */
>                 cpu_dma_wb_range(start, end);
>                 break;
>         default:
>                 BUG();
>         }
> }

I am not sure if I understand it correctly, so I list all the combinations.
RAM to DEVICE:
	before DMA => writeback cache
	after DMA  => nop

DEVICE to RAM:
	before DMA => nop
	after DMA  => invalidate cache

static void consistent_sync(void *vaddr, size_t size, int direction, int master)
{
	unsigned long start = (unsigned long)vaddr;
	unsigned long end = start + size;

	if (master == FOR_CPU) {
		switch (direction) {
		case DMA_TO_DEVICE:
			break;
		case DMA_FROM_DEVICE:
		case DMA_BIDIRECTIONAL:
			cpu_dma_inval_range(start, end);
			break;
		default:
			BUG();
		}
	} else {
		/* FOR_DEVICE */
		switch (direction) {
		case DMA_FROM_DEVICE:
			break;
		case DMA_TO_DEVICE:
		case DMA_BIDIRECTIONAL:
			cpu_dma_wb_range(start, end);
			break;
		default:
			BUG();
		}
	}
}

static void
nds32_dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle,
			      size_t size, enum dma_data_direction dir)
{
	consistent_sync((void *)phys_to_virt(handle), size, dir, FOR_CPU);
}

static void
nds32_dma_sync_single_for_device(struct device *dev, dma_addr_t handle,
				 size_t size, enum dma_data_direction dir)
{
	consistent_sync((void *)phys_to_virt(handle), size, dir, FOR_DEVICE);
}

static dma_addr_t nds32_dma_map_page(struct device *dev, struct page *page,
				     unsigned long offset, size_t size,
				     enum dma_data_direction dir,
				     unsigned long attrs)
{
	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
		consistent_sync((void *)(page_address(page) + offset),
				size, dir, FOR_DEVICE);
	return page_to_phys(page) + offset;
}

static void nds32_dma_unmap_page(struct device *dev, dma_addr_t handle,
				 size_t size, enum dma_data_direction dir,
				 unsigned long attrs)
{
	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
		consistent_sync(phys_to_virt(handle), size, dir, FOR_CPU);
}