From: Kit Chow <kchow@gigaio.com>
To: Logan Gunthorpe <logang@deltatee.com>,
"Jiang, Dave" <dave.jiang@intel.com>,
Eric Pilmore <epilmore@gigaio.com>,
Bjorn Helgaas <helgaas@kernel.org>
Cc: "linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>,
David Woodhouse <dwmw2@infradead.org>,
Alex Williamson <alex.williamson@redhat.com>,
"iommu@lists.linux-foundation.org"
<iommu@lists.linux-foundation.org>
Subject: Re: IOAT DMA w/IOMMU
Date: Fri, 10 Aug 2018 09:23:38 -0700
Message-ID: <c8dd4740-f01a-372f-2990-2e1163f94042@gigaio.com>
In-Reply-To: <1170c096-a749-6ee3-32d4-ebba1d12adff@gigaio.com>

There is an internal routine (__intel_map_single) inside the Intel IOMMU
code that does the actual mapping from a phys_addr_t. I think I'll try to
implement an intel_map_resource routine that calls it directly, skipping
the conversions that dma_map_{single,page} go through (pci bar
addr -> page -> phys_addr)...
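
Something along these lines, perhaps - just a rough sketch, assuming
__intel_map_single keeps its current signature (dev, phys_addr_t, size,
direction, dma mask) and gets hooked up as .map_resource in intel_dma_ops;
completely untested:

static dma_addr_t intel_map_resource(struct device *dev, phys_addr_t phys_addr,
                                     size_t size, enum dma_data_direction dir,
                                     unsigned long attrs)
{
        /*
         * No struct page involved - hand the bar address straight to
         * the iommu mapping helper.
         */
        return __intel_map_single(dev, phys_addr, size, dir, *dev->dma_mask);
}
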
On 08/10/2018 09:02 AM, Kit Chow wrote:
> Turns out there is no dma_map_resource support on x86. get_dma_ops
> returns intel_dma_ops, which has map_resource pointing to NULL.
>
> (gdb) p intel_dma_ops
> $7 = {alloc = 0xffffffff8150f310 <intel_alloc_coherent>,
> free = 0xffffffff8150ec20 <intel_free_coherent>,
> mmap = 0x0 <irq_stack_union>, get_sgtable = 0x0 <irq_stack_union>,
> map_page = 0xffffffff8150f2d0 <intel_map_page>,
> unmap_page = 0xffffffff8150ec10 <intel_unmap_page>,
> map_sg = 0xffffffff8150ef40 <intel_map_sg>,
> unmap_sg = 0xffffffff8150eb80 <intel_unmap_sg>,
> map_resource = 0x0 <irq_stack_union>,
> unmap_resource = 0x0 <irq_stack_union>,
> sync_single_for_cpu = 0x0 <irq_stack_union>,
> sync_single_for_device = 0x0 <irq_stack_union>,
> sync_sg_for_cpu = 0x0 <irq_stack_union>,
> sync_sg_for_device = 0x0 <irq_stack_union>,
> cache_sync = 0x0 <irq_stack_union>,
> mapping_error = 0xffffffff815095f0 <intel_mapping_error>,
> dma_supported = 0xffffffff81033830 <x86_dma_supported>, is_phys = 0}
>
> Will poke around some more in the intel_map_page code, but can you
> actually get a valid struct page for a pci bar address (dma_map_single
> calls virt_to_page)? If not, does a map_resource routine that can
> properly map a pci bar address need to be implemented?
>
> Kit
>
> ---
>
>
> static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
>                                               size_t size,
>                                               enum dma_data_direction dir,
>                                               unsigned long attrs)
> {
>         const struct dma_map_ops *ops = get_dma_ops(dev);
>         dma_addr_t addr;
>
>         BUG_ON(!valid_dma_direction(dir));
>         addr = ops->map_page(dev, virt_to_page(ptr),
>                              offset_in_page(ptr), size,
>                              dir, attrs);
>         debug_dma_map_page(dev, virt_to_page(ptr),
>                            offset_in_page(ptr), size,
>                            dir, addr, true);
>         return addr;
> }
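
For comparison, the generic dma_map_resource() wrapper in dma-mapping.h
never touches struct page at all. Roughly - quoting from memory, so the
details may differ between kernel versions:

static inline dma_addr_t dma_map_resource(struct device *dev,
                                          phys_addr_t phys_addr,
                                          size_t size,
                                          enum dma_data_direction dir,
                                          unsigned long attrs)
{
        const struct dma_map_ops *ops = get_dma_ops(dev);
        dma_addr_t addr;

        BUG_ON(!valid_dma_direction(dir));

        /* Don't allow RAM to be mapped */
        BUG_ON(pfn_valid(PHYS_PFN(phys_addr)));

        addr = phys_addr;
        if (ops->map_resource)
                addr = ops->map_resource(dev, phys_addr, size, dir, attrs);

        debug_dma_map_resource(dev, phys_addr, size, dir, addr);

        return addr;
}

So with map_resource == NULL it falls back to the untranslated bar address
instead of returning an error.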
>
> On 08/09/2018 04:00 PM, Kit Chow wrote:
>>
>>
>> On 08/09/2018 03:50 PM, Logan Gunthorpe wrote:
>>>
>>> On 09/08/18 04:48 PM, Kit Chow wrote:
>>>> Based on Logan's comments, I am very hopeful that dma_map_resource
>>>> will make things work on the older platforms...
>>> Well, I *think* dma_map_single() would still work. So I'm not that
>>> confident that's the root of your problem. I'd still like to see the
>>> actual code snippet you are using.
>>>
>>> Logan
>> Here's the code snippet - the (ntbdebug & 0x4) path does dma_map_resource
>> of the pci bar address.
>>
>> It was:
>>         unmap->addr[1] = dma_map_single(device->dev, (void *)dest, len,
>>                                         DMA_TO_DEVICE);
>>
>> Kit
>> ---
>>
>>
>> static int ntb_async_tx_submit(struct ntb_transport_qp *qp,
>>                                struct ntb_queue_entry *entry)
>> {
>>         struct dma_async_tx_descriptor *txd;
>>         struct dma_chan *chan = qp->tx_dma_chan;
>>         struct dma_device *device;
>>         size_t len = entry->len;
>>         void *buf = entry->buf;
>>         size_t dest_off, buff_off;
>>         struct dmaengine_unmap_data *unmap;
>>         dma_addr_t dest;
>>         dma_cookie_t cookie;
>>         int unmapcnt;
>>
>>         device = chan->device;
>>
>>         dest = qp->tx_mw_phys + qp->tx_max_frame * entry->tx_index;
>>
>>         buff_off = (size_t)buf & ~PAGE_MASK;
>>         dest_off = (size_t)dest & ~PAGE_MASK;
>>
>>         if (!is_dma_copy_aligned(device, buff_off, dest_off, len))
>>                 goto err;
>>
>>         if (ntbdebug & 0x4) {
>>                 unmapcnt = 2;
>>         } else {
>>                 unmapcnt = 1;
>>         }
>>
>>         unmap = dmaengine_get_unmap_data(device->dev, unmapcnt, GFP_NOWAIT);
>>         if (!unmap)
>>                 goto err;
>>
>>         unmap->len = len;
>>         unmap->addr[0] = dma_map_page(device->dev, virt_to_page(buf),
>>                                       buff_off, len, DMA_TO_DEVICE);
>>         if (dma_mapping_error(device->dev, unmap->addr[0]))
>>                 goto err_get_unmap;
>>
>>         if (ntbdebug & 0x4) {
>>                 /* Map the pci bar address through the IOMMU. */
>>                 unmap->addr[1] = dma_map_resource(device->dev,
>>                                 (phys_addr_t)dest, len, DMA_TO_DEVICE, 0);
>>                 if (dma_mapping_error(device->dev, unmap->addr[1]))
>>                         goto err_get_unmap;
>>                 unmap->to_cnt = 2;
>>         } else {
>>                 /* Use the pci bar address directly, no IOMMU mapping. */
>>                 unmap->addr[1] = dest;
>>                 unmap->to_cnt = 1;
>>         }
>>
>>         txd = device->device_prep_dma_memcpy(chan, unmap->addr[1],
>>                                              unmap->addr[0], len,
>>                                              DMA_PREP_INTERRUPT);
>>         if (!txd)
>>                 goto err_get_unmap;
>>
>>         txd->callback_result = ntb_tx_copy_callback;
>>         txd->callback_param = entry;
>>         dma_set_unmap(txd, unmap);
>>
>>         cookie = dmaengine_submit(txd);
>>         if (dma_submit_error(cookie))
>>                 goto err_set_unmap;
>>
>>         dmaengine_unmap_put(unmap);
>>
>>         dma_async_issue_pending(chan);
>>
>>         return 0;
>>
>> err_set_unmap:
>>         dma_descriptor_unmap(txd);
>>         txd->desc_free(txd);
>> err_get_unmap:
>>         dmaengine_unmap_put(unmap);
>> err:
>>         return -ENXIO;
>> }
>>
>