Message-ID: <4FB74AB0.7090608@redhat.com>
Date: Sat, 19 May 2012 09:24:32 +0200
From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH 13/13] iommu: Add a memory barrier to DMA RW function
In-Reply-To: <1337379992.2513.17.camel@pasglop>
To: Benjamin Herrenschmidt
Cc: "Michael S. Tsirkin", qemu-devel@nongnu.org, Anthony Liguori, David Gibson

On 19/05/2012 00:26, Benjamin Herrenschmidt wrote:
>> In theory you would need a memory barrier before the first ld/st and
>> one after the last... considering virtio uses map/unmap, what about
>> leaving map/unmap and ld*_phys/st*_phys as the high-performance
>> unsafe API?
>> Then you can add barriers around ld*_pci_dma/st*_pci_dma.
>
> So no, my idea is to make anybody using ld_* and st_* (non _dma)
> responsible for their own barriers. The _dma variants are implemented
> in terms of cpu_physical_memory_rw, so they should inherit the
> barriers.

Yeah, after these patches they do.

> As for map/unmap, there's an inconsistency since when it falls back to
> bounce buffering, it will get implicit barriers. My idea was to always
> put a barrier before, see below.

The bounce buffering case is never hit in practice. Your reasoning about
always adding a barrier before makes sense, but it's probably better to
add (a) a variant of map with no barrier; (b) a variant that takes an
sglist and would add only one barrier. I agree that a barrier in unmap
is not needed.

>>> The full sync should provide all the synchronization we need
>>
>> You mean "sync; ld; sync" for load and "sync; st" for store? That
>> would do, yes.
>
> No, just sync; ld.
>
> I.e. if I put a barrier "before" in cpu_physical_memory_rw, I ensure
> ordering vs. all previous accesses.

Ok. I guess the C11/C++ guys required an isync barrier after loads or
stores because they need to order the load/store vs. code accessing
other memory. This is not needed in QEMU because all guest accesses go
through cpu_physical_memory_rw (or have their own barriers).

Paolo