From: "Michael S. Tsirkin"
Subject: Re: [PATCH 4/7] ide: use the PCI memory access interface
Date: Thu, 2 Sep 2010 12:58:13 +0300
Message-ID: <20100902095813.GB9085@redhat.com>
In-Reply-To: <20100902091200.GB7319@localhost>
References: <1283007298-10942-1-git-send-email-eduard.munteanu@linux360.ro>
 <1283007298-10942-5-git-send-email-eduard.munteanu@linux360.ro>
 <20100902051911.GA4273@redhat.com>
 <20100902091200.GB7319@localhost>
To: Eduard - Gabriel Munteanu
Cc: joro@8bytes.org, blauwirbel@gmail.com, paul@codesourcery.com,
 avi@redhat.com, anthony@codemonkey.ws, av1474@comtv.ru,
 yamahata@valinux.co.jp, kvm@vger.kernel.org, qemu-devel@nongnu.org

On Thu, Sep 02, 2010 at 12:12:00PM +0300, Eduard - Gabriel Munteanu wrote:
> On Thu, Sep 02, 2010 at 08:19:11AM +0300, Michael S. Tsirkin wrote:
> > On Sat, Aug 28, 2010 at 05:54:55PM +0300, Eduard - Gabriel Munteanu wrote:
> > > Emulated PCI IDE controllers now use the memory access interface. This
> > > also allows an emulated IOMMU to translate and check accesses.
> > >
> > > Map invalidation results in cancelling DMA transfers. Since the guest OS
> > > can't properly recover the DMA results in case the mapping is changed,
> > > this is a fairly good approximation.
> > >
> > > Signed-off-by: Eduard - Gabriel Munteanu
> > > ---
> >
> > [snip]
> >
> > > +static inline void bmdma_memory_read(BMDMAState *bm,
> > > +                                     target_phys_addr_t addr,
> > > +                                     uint8_t *buf,
> > > +                                     target_phys_addr_t len)
> > > +{
> > > +    bm->rw(bm->opaque, addr, buf, len, 0);
> > > +}
> > > +
> > > +static inline void bmdma_memory_write(BMDMAState *bm,
> > > +                                      target_phys_addr_t addr,
> > > +                                      uint8_t *buf,
> > > +                                      target_phys_addr_t len)
> > > +{
> > > +    bm->rw(bm->opaque, addr, buf, len, 1);
> > > +}
> > > +
> >
> > Here again, I am concerned about indirection and pointer chasing on
> > the data path. Can we have an iommu pointer in the device, and do a
> > fast path in case there is no iommu?
> >
>
> See my other reply.

I don't insist on this solution, but what other way do you propose to
avoid the overhead for everyone not using an iommu? I'm all for a
solution that would help the iommu case as well, but one hasn't been
proposed yet.
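To make the fast path concrete, something along these lines (a rough
sketch only; IOMMUState and iommu_rw are made-up names, not existing
code, and error handling is ignored):

/* Sketch: assumes BMDMAState grows an iommu field, NULL when absent. */
static inline void bmdma_memory_rw(BMDMAState *bm,
                                   target_phys_addr_t addr,
                                   uint8_t *buf,
                                   target_phys_addr_t len,
                                   int is_write)
{
    if (likely(!bm->iommu)) {
        /* Fast path: no IOMMU configured, so access guest memory
         * directly, with no indirect call on the data path. */
        cpu_physical_memory_rw(addr, buf, len, is_write);
        return;
    }
    /* Slow path: let the (hypothetical) IOMMU translate and check. */
    iommu_rw(bm->iommu, addr, buf, len, is_write);
}

The cost of the indirection is then only paid when an IOMMU is
actually present.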
> > >  static inline IDEState *idebus_active_if(IDEBus *bus)
> > >  {
> > >      return bus->ifs + bus->unit;
> > > diff --git a/hw/ide/macio.c b/hw/ide/macio.c
> > > index bd1c73e..962ae13 100644
> > > --- a/hw/ide/macio.c
> > > +++ b/hw/ide/macio.c
> > > @@ -79,7 +79,7 @@ static void pmac_ide_atapi_transfer_cb(void *opaque, int ret)
> > >
> > >      s->io_buffer_size = io->len;
> > >
> > > -    qemu_sglist_init(&s->sg, io->len / MACIO_PAGE_SIZE + 1);
> > > +    qemu_sglist_init(&s->sg, io->len / MACIO_PAGE_SIZE + 1, NULL, NULL, NULL);
> > >      qemu_sglist_add(&s->sg, io->addr, io->len);
> > >      io->addr += io->len;
> > >      io->len = 0;
> > > @@ -141,7 +141,7 @@ static void pmac_ide_transfer_cb(void *opaque, int ret)
> > >      s->io_buffer_index = 0;
> > >      s->io_buffer_size = io->len;
> > >
> > > -    qemu_sglist_init(&s->sg, io->len / MACIO_PAGE_SIZE + 1);
> > > +    qemu_sglist_init(&s->sg, io->len / MACIO_PAGE_SIZE + 1, NULL, NULL, NULL);
> > >      qemu_sglist_add(&s->sg, io->addr, io->len);
> > >      io->addr += io->len;
> > >      io->len = 0;
> > > diff --git a/hw/ide/pci.c b/hw/ide/pci.c
> > > index 4d95cc5..5879044 100644
> > > --- a/hw/ide/pci.c
> > > +++ b/hw/ide/pci.c
> > > @@ -183,4 +183,11 @@ void pci_ide_create_devs(PCIDevice *dev, DriveInfo **hd_table)
> > >              continue;
> > >          ide_create_drive(d->bus+bus[i], unit[i], hd_table[i]);
> > >      }
> > > +
> > > +    for (i = 0; i < 2; i++) {
> > > +        d->bmdma[i].rw = (void *) pci_memory_rw;
> > > +        d->bmdma[i].map = (void *) pci_memory_map;
> > > +        d->bmdma[i].unmap = (void *) pci_memory_unmap;
> > > +        d->bmdma[i].opaque = dev;
> > > +    }
> > >  }
> >
> > These casts show something is wrong with the API, IMO.
> >
>
> Hm, here's an oversight on my part: I think I should provide explicit
> bmdma hooks, since pcibus_t is a uint64_t and target_phys_addr_t is a
> uint{32,64}_t depending on the guest machine, so it might be buggy on
> 32-bit wrt calling conventions. But that introduces yet another
> non-inlined function call :-(. That would drop the (void *) cast,
> though.
>
>
> 	Eduard

So we get away without the casts, but only because the C compiler will
silently convert the types, possibly discarding data in the process.
Or we'll add a check that tries to detect this, but there's no good
way to report a DMA error to the user.

IOW, if our code only works because target_phys_addr_t fits in
pcibus_t, what good are the abstraction and the distinct types?

This is why I think we need a generic DMA API using dma addresses.
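Roughly what I have in mind (illustrative only; dma_addr_t and
DMADevice are names invented for this sketch, not existing code):

#include <stdint.h>

/* One DMA address type, wide enough for any bus, used consistently
 * by device code, so nothing is silently truncated. */
typedef uint64_t dma_addr_t;
typedef struct DMADevice DMADevice;

/* A single generic entry point typed in dma_addr_t.  Devices would
 * call this instead of casting pci_memory_rw (which takes pcibus_t)
 * to a function pointer taking target_phys_addr_t, a cast that only
 * works while the two types happen to have the same width. */
void dma_memory_rw(DMADevice *dev, dma_addr_t addr,
                   void *buf, dma_addr_t len, int is_write);

An IOMMU then becomes an implementation detail behind dma_memory_rw
rather than something each device wires up by hand.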
--
MST