From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755396AbbLaK0s (ORCPT ); Thu, 31 Dec 2015 05:26:48 -0500
Received: from lxorguk.ukuu.org.uk ([81.2.110.251]:35315 "EHLO
	lxorguk.ukuu.org.uk" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1755171AbbLaK0r (ORCPT );
	Thu, 31 Dec 2015 05:26:47 -0500
Date: Thu, 31 Dec 2015 10:25:48 +0000
From: One Thousand Gnomes
To: Masahiro Yamada
Cc: Linux Kernel Mailing List, dmaengine@vger.kernel.org, Dan Williams,
	"James E.J. Bottomley", Sumit Semwal, Vinod Koul, Christoph Hellwig,
	Lars-Peter Clausen, linux-arm-kernel, Nicolas Ferre
Subject: Re: [Question about DMA] Consistent memory?
Message-ID: <20151231102548.3ed389fb@lxorguk.ukuu.org.uk>
In-Reply-To:
References:
Organization: Intel Corporation
X-Mailer: Claws Mail 3.12.0 (GTK+ 2.24.29; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, 31 Dec 2015 16:50:54 +0900
Masahiro Yamada wrote:

> But, I think such a system is rare.

Actually it's quite normal for some vendors' processors but not others.

> At least on my SoC (ARM SoC), DMA controllers
> for NAND, MMC, etc. are directly connected to the DRAM
> like Fig.2.
>
> So, cache operations must be explicitly done
> by software before/after DMAs are kicked.
> (I think this is very normal.)

For ARM certainly.

>
> Fig.2
>
> |------|  |------|        |-----|
> | CPU0 |  | CPU1 |        | DMA |
> |------|  |------|        |-----|
>    |         |               |
>    |         |               |
> |------|  |------|            |
> | L1-C |  | L1-C |            |
> |------|  |------|            |
>    |         |                |
> |------------------|          |
> |Snoop Control Unit|          |
> |------------------|          |
>          |                    |
> |------------------|          |
> |     L2-cache     |          |
> |------------------|          |
>          |                    |
> |------------------------------|
> |             DRAM             |
> |------------------------------|
>
> In a system like Fig.2, is the memory non-consistent?

dma_alloc_coherent() will always provide you with coherent memory. On a
machine with good cache interfaces it will provide you with normal
memory. On some systems it may be memory from a special window; in other
cases it will fall back to providing uncached memory. If the platform
genuinely cannot support this (even by marking those areas uncacheable)
then it will fail the allocation.

What it does mean is that you need to use non-coherent mappings when
accessing a lot of data, because on hardware without proper cache
coherency it can be quite expensive to access coherent memory.

Alan
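
[Editor's sketch, not part of the original mail: a minimal example of the
coherent-memory path Alan describes. dma_alloc_coherent() returns memory
the CPU and device see consistently, so no explicit cache maintenance is
needed. The device pointer and descriptor layout (my_desc, my_alloc_ring)
are hypothetical.]

/* Hypothetical descriptor ring kept in coherent memory: both the CPU
 * and the DMA engine observe each other's writes without explicit
 * cache maintenance by the driver. */
#include <linux/dma-mapping.h>
#include <linux/device.h>

struct my_desc {			/* made-up descriptor layout */
	__le32 addr;
	__le32 len;
};

static int my_alloc_ring(struct device *dev, struct my_desc **ring,
			 dma_addr_t *ring_dma, int nr_desc)
{
	/* On fully coherent hardware this may be ordinary cached memory;
	 * otherwise the kernel hands back an uncached mapping, and if
	 * neither is possible the allocation fails. */
	*ring = dma_alloc_coherent(dev, nr_desc * sizeof(**ring),
				   ring_dma, GFP_KERNEL);
	return *ring ? 0 : -ENOMEM;
}

static void my_free_ring(struct device *dev, struct my_desc *ring,
			 dma_addr_t ring_dma, int nr_desc)
{
	dma_free_coherent(dev, nr_desc * sizeof(*ring), ring, ring_dma);
}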
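
[Editor's sketch, not part of the original mail: the streaming-mapping
path for bulk data, where the DMA API performs the "cache operations
before/after the DMA is kicked" that the quoted mail mentions. The
device pointer and the my_start_rx/my_finish_rx helpers are
hypothetical.]

/* Hypothetical bulk receive buffer using a streaming (non-coherent)
 * mapping: cache maintenance happens around each transfer, which is
 * the cheap path for large amounts of data. */
#include <linux/dma-mapping.h>

static int my_start_rx(struct device *dev, void *buf, size_t len,
		       dma_addr_t *dma)
{
	/* Performs whatever cache maintenance the platform needs before
	 * the device writes into buf (an invalidate on a non-coherent
	 * ARM system). */
	*dma = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, *dma))
		return -ENOMEM;

	/* ...program the DMA controller with *dma and start it... */
	return 0;
}

static void my_finish_rx(struct device *dev, dma_addr_t dma, size_t len)
{
	/* Hands the buffer back to the CPU; only after this may the CPU
	 * safely read what the device wrote. */
	dma_unmap_single(dev, dma, len, DMA_FROM_DEVICE);
}

If the same buffer is reused for many transfers, dma_sync_single_for_cpu()
and dma_sync_single_for_device() do the same maintenance without tearing
the mapping down each time.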