From mboxrd@z Thu Jan  1 00:00:00 1970
From: Arnd Bergmann
To: linux-kernel@vger.kernel.org
Cc: Arnd Bergmann, Vineet Gupta, Russell King, Neil Armstrong,
	Linus Walleij, Catalin Marinas, Will Deacon, Guo Ren, Brian Cain,
	Geert Uytterhoeven, Michal Simek, Thomas Bogendoerfer, Dinh Nguyen,
	Stafford Horne, Helge Deller, Michael Ellerman, Christophe Leroy,
	Paul Walmsley, Palmer Dabbelt, Rich Felker,
	John Paul Adrian Glaubitz, "David S. Miller",
	Max Filippov, Christoph Hellwig, Robin Murphy, Lad Prabhakar,
	Conor Dooley, linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-oxnas@groups.io,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-xtensa@linux-xtensa.org
Subject: [PATCH 05/21] powerpc: dma-mapping: split out cache operation logic
Date: Mon, 27 Mar 2023 14:13:01 +0200
Message-Id: <20230327121317.4081816-6-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230327121317.4081816-1-arnd@kernel.org>
References: <20230327121317.4081816-1-arnd@kernel.org>
MIME-Version: 1.0

From: Arnd Bergmann

The powerpc arch_sync_dma_for_device()/arch_sync_dma_for_cpu() functions
behave differently from all other architectures, at least for some of
the operations. As a preparation for making the behavior more
consistent, reorder the logic by which they decide whether to flush,
invalidate or clean the caches. No change in behavior is intended.

Signed-off-by: Arnd Bergmann
---
 arch/powerpc/mm/dma-noncoherent.c | 91 +++++++++++++++++++++----------
 1 file changed, 63 insertions(+), 28 deletions(-)

diff --git a/arch/powerpc/mm/dma-noncoherent.c b/arch/powerpc/mm/dma-noncoherent.c
index 30260b5d146d..f10869d27de5 100644
--- a/arch/powerpc/mm/dma-noncoherent.c
+++ b/arch/powerpc/mm/dma-noncoherent.c
@@ -16,31 +16,28 @@
 #include <asm/tlbflush.h>
 #include <asm/dma.h>
 
+enum dma_cache_op {
+	DMA_CACHE_CLEAN,
+	DMA_CACHE_INVAL,
+	DMA_CACHE_FLUSH,
+};
+
 /*
  * make an area consistent.
  */
-static void __dma_sync(void *vaddr, size_t size, int direction)
+static void __dma_op(void *vaddr, size_t size, enum dma_cache_op op)
 {
 	unsigned long start = (unsigned long)vaddr;
 	unsigned long end = start + size;
 
-	switch (direction) {
-	case DMA_NONE:
-		BUG();
-	case DMA_FROM_DEVICE:
-		/*
-		 * invalidate only when cache-line aligned otherwise there is
-		 * the potential for discarding uncommitted data from the cache
-		 */
-		if ((start | end) & (L1_CACHE_BYTES - 1))
-			flush_dcache_range(start, end);
-		else
-			invalidate_dcache_range(start, end);
-		break;
-	case DMA_TO_DEVICE:	/* writeback only */
+	switch (op) {
+	case DMA_CACHE_CLEAN:
 		clean_dcache_range(start, end);
 		break;
-	case DMA_BIDIRECTIONAL:	/* writeback and invalidate */
+	case DMA_CACHE_INVAL:
+		invalidate_dcache_range(start, end);
+		break;
+	case DMA_CACHE_FLUSH:
 		flush_dcache_range(start, end);
 		break;
 	}
@@ -48,16 +45,16 @@ static void __dma_sync(void *vaddr, size_t size, int direction)
 
 #ifdef CONFIG_HIGHMEM
 /*
- * __dma_sync_page() implementation for systems using highmem.
+ * __dma_highmem_op() implementation for systems using highmem.
  * In this case, each page of a buffer must be kmapped/kunmapped
- * in order to have a virtual address for __dma_sync(). This must
+ * in order to have a virtual address for __dma_op(). This must
  * not sleep so kmap_atomic()/kunmap_atomic() are used.
  *
  * Note: yes, it is possible and correct to have a buffer extend
  * beyond the first page.
  */
-static inline void __dma_sync_page_highmem(struct page *page,
-		unsigned long offset, size_t size, int direction)
+static inline void __dma_highmem_op(struct page *page,
+		unsigned long offset, size_t size, enum dma_cache_op op)
 {
 	size_t seg_size = min((size_t)(PAGE_SIZE - offset), size);
 	size_t cur_size = seg_size;
@@ -71,7 +68,7 @@ static inline void __dma_sync_page_highmem(struct page *page,
 		start = (unsigned long)kmap_atomic(page + seg_nr) + seg_offset;
 
 		/* Sync this buffer segment */
-		__dma_sync((void *)start, seg_size, direction);
+		__dma_op((void *)start, seg_size, op);
 		kunmap_atomic((void *)start);
 
 		seg_nr++;
@@ -88,32 +85,70 @@ static inline void __dma_sync_page_highmem(struct page *page,
 #endif /* CONFIG_HIGHMEM */
 
 /*
- * __dma_sync_page makes memory consistent. identical to __dma_sync, but
- * takes a struct page instead of a virtual address
+ * __dma_phys_op makes memory consistent. identical to __dma_op, but
+ * takes a phys_addr_t instead of a virtual address
  */
-static void __dma_sync_page(phys_addr_t paddr, size_t size, int dir)
+static void __dma_phys_op(phys_addr_t paddr, size_t size, enum dma_cache_op op)
 {
 	struct page *page = pfn_to_page(paddr >> PAGE_SHIFT);
 	unsigned offset = paddr & ~PAGE_MASK;
 
 #ifdef CONFIG_HIGHMEM
-	__dma_sync_page_highmem(page, offset, size, dir);
+	__dma_highmem_op(page, offset, size, op);
 #else
 	unsigned long start = (unsigned long)page_address(page) + offset;
-	__dma_sync((void *)start, size, dir);
+	__dma_op((void *)start, size, op);
 #endif
 }
 
 void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
 		enum dma_data_direction dir)
 {
-	__dma_sync_page(paddr, size, dir);
+	switch (dir) {
+	case DMA_NONE:
+		BUG();
+	case DMA_FROM_DEVICE:
+		/*
+		 * invalidate only when cache-line aligned otherwise there is
+		 * the potential for discarding uncommitted data from the cache
+		 */
+		if ((paddr | size) & (L1_CACHE_BYTES - 1))
+			__dma_phys_op(paddr, size, DMA_CACHE_FLUSH);
+		else
+			__dma_phys_op(paddr, size, DMA_CACHE_INVAL);
+		break;
+	case DMA_TO_DEVICE:	/* writeback only */
+		__dma_phys_op(paddr, size, DMA_CACHE_CLEAN);
+		break;
+	case DMA_BIDIRECTIONAL:	/* writeback and invalidate */
+		__dma_phys_op(paddr, size, DMA_CACHE_FLUSH);
+		break;
+	}
 }
 
 void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
 		enum dma_data_direction dir)
 {
-	__dma_sync_page(paddr, size, dir);
+	switch (dir) {
+	case DMA_NONE:
+		BUG();
+	case DMA_FROM_DEVICE:
+		/*
+		 * invalidate only when cache-line aligned otherwise there is
+		 * the potential for discarding uncommitted data from the cache
+		 */
+		if ((paddr | size) & (L1_CACHE_BYTES - 1))
+			__dma_phys_op(paddr, size, DMA_CACHE_FLUSH);
+		else
+			__dma_phys_op(paddr, size, DMA_CACHE_INVAL);
+		break;
+	case DMA_TO_DEVICE:	/* writeback only */
+		__dma_phys_op(paddr, size, DMA_CACHE_CLEAN);
+		break;
+	case DMA_BIDIRECTIONAL:	/* writeback and invalidate */
+		__dma_phys_op(paddr, size, DMA_CACHE_FLUSH);
+		break;
+	}
 }
 
 void arch_dma_prep_coherent(struct page *page, size_t size)
-- 
2.39.2
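
As an aside for readers following the series: the direction-to-operation
mapping that both sync functions now share can be exercised as
stand-alone user-space C. Everything below is an illustrative sketch,
not part of the patch; pick_op(), the fixed L1_CACHE_BYTES value and
the asserts are stand-ins rather than kernel API, but the switch
mirrors the logic introduced above.

/*
 * Sketch of the cache maintenance decision table that
 * arch_sync_dma_for_device()/arch_sync_dma_for_cpu() implement after
 * this change. Compiles as plain C99; names mimic the kernel code.
 */
#include <assert.h>
#include <stddef.h>

#define L1_CACHE_BYTES 32	/* example line size, varies by CPU */

enum dma_data_direction {
	DMA_BIDIRECTIONAL, DMA_TO_DEVICE, DMA_FROM_DEVICE, DMA_NONE
};
enum dma_cache_op { DMA_CACHE_CLEAN, DMA_CACHE_INVAL, DMA_CACHE_FLUSH };

/* pick the cache operation for a transfer of @size bytes at @paddr */
static enum dma_cache_op pick_op(unsigned long paddr, size_t size,
				 enum dma_data_direction dir)
{
	switch (dir) {
	case DMA_FROM_DEVICE:
		/*
		 * Invalidating a partially covered cache line would
		 * discard unrelated dirty data sharing that line, so
		 * fall back to flush (writeback + invalidate) unless
		 * the range is cache-line aligned.
		 */
		if ((paddr | size) & (L1_CACHE_BYTES - 1))
			return DMA_CACHE_FLUSH;
		return DMA_CACHE_INVAL;
	case DMA_TO_DEVICE:		/* writeback only */
		return DMA_CACHE_CLEAN;
	case DMA_BIDIRECTIONAL:		/* writeback and invalidate */
		return DMA_CACHE_FLUSH;
	default:			/* DMA_NONE is invalid here */
		assert(0);
		return DMA_CACHE_FLUSH;	/* unreachable */
	}
}

int main(void)
{
	/* aligned buffer coming from the device: plain invalidate */
	assert(pick_op(0x1000, 256, DMA_FROM_DEVICE) == DMA_CACHE_INVAL);
	/* misaligned buffer: flush to avoid losing adjacent dirty data */
	assert(pick_op(0x1001, 256, DMA_FROM_DEVICE) == DMA_CACHE_FLUSH);
	assert(pick_op(0x1000, 256, DMA_TO_DEVICE) == DMA_CACHE_CLEAN);
	assert(pick_op(0x1000, 256, DMA_BIDIRECTIONAL) == DMA_CACHE_FLUSH);
	return 0;
}

Keeping the direction checks in the two entry points and routing
everything through enum dma_cache_op means later patches can change
what one entry point does for a given direction without touching the
low-level dcache range helpers.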