From: Eric Pilmore <epilmore@gigaio.com>
To: Logan Gunthorpe <logang@deltatee.com>
Cc: Kit Chow <kchow@gigaio.com>, Bjorn Helgaas <helgaas@kernel.org>,
linux-pci@vger.kernel.org, David Woodhouse <dwmw2@infradead.org>,
Alex Williamson <alex.williamson@redhat.com>,
iommu@lists.linux-foundation.org
Subject: Re: IOAT DMA w/IOMMU
Date: Tue, 21 Aug 2018 16:45:20 -0700
Message-ID: <CAOQPn8vO7wP2XGwyJALxkyeBFG6fkTAFgF8QcGiiJF0fOMBa4Q@mail.gmail.com>
In-Reply-To: <dd918b52-7ddc-662c-5980-edf3e96ebca2@deltatee.com>

On Tue, Aug 21, 2018 at 4:35 PM, Logan Gunthorpe <logang@deltatee.com> wrote:
>
>
> On 21/08/18 05:28 PM, Eric Pilmore wrote:
>>
>>
>> On Tue, Aug 21, 2018 at 4:20 PM, Logan Gunthorpe <logang@deltatee.com
>> <mailto:logang@deltatee.com>> wrote:
>>
>>
>>
>> On 21/08/18 05:18 PM, Eric Pilmore wrote:
>> > We have been running locally with Kit's change for dma_map_resource and its
>> > incorporation in ntb_async_tx_submit for the destination address. It runs
>> > fine under "load" (iperf) on a Xeon-based system (Xeon(R) CPU E5-2680 v4 @
>> > 2.40GHz), regardless of whether the DMA engine being used is IOAT or a PLX
>> > device sitting in the PCIe tree. However, when we go back to an i7-based
>> > system (i7-7700K CPU @ 4.20GHz), it runs into issues, specifically when put
>> > under load. In that case, a load from just a single ping command with
>> > interval=0, i.e. no delay between ping packets, hangs the system after a
>> > few thousand packets. No panic or watchdogs. Note that in this scenario I
>> > can only use a PLX DMA engine.
>>
>> This is just my best guess: but it sounds to me like a bug in the PLX
>> DMA driver or hardware.
>>
>>
>> The PLX DMA driver? But the PLX driver isn't really even involved in the
>> mapping stage. Are you thinking of the stage at which the DMA descriptor
>> is freed and the PLX DMA driver calls dma_descriptor_unmap?
>
> Hmm, well what would make you think the hang is during
> mapping/unmapping?
Well, the only difference between success and failure is running with the
call to dma_map_resource for the destination address, which is a PCI BAR
address. Before Kit introduced this call, we never created a mapping for the
destination PCI BAR address, and it worked fine on all systems when using
PLX DMA. It was only when we moved to a Xeon system and attempted to use
IOAT DMA that we found we needed a mapping for that destination PCI BAR
address.

The only thing the PLX driver does related to "mappings" is a call to
dma_descriptor_unmap when the descriptor is freed, but that is more of an
administrative step that cleans up the unmap data structure set up when
the mapping was originally established.
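For context, the change in question amounts to roughly the following (a
sketch only, not the actual patch; the variable names and error handling
here are illustrative, and 'dest' stands for the physical address of the
peer's PCI BAR window):

```c
/* Map the destination PCI BAR address through the DMA device's IOMMU
 * before submitting the memcpy descriptor. Without this, the raw BAR
 * physical address is handed to the DMA engine, which faults behind
 * an IOMMU (as seen with IOAT on the Xeon system).
 */
dma_addr_t dma_dest;

dma_dest = dma_map_resource(chan->device->dev, dest, len,
			    DMA_FROM_DEVICE, 0);
if (dma_mapping_error(chan->device->dev, dma_dest))
	return -EIO;

/* ... use dma_dest as the destination in device_prep_dma_memcpy() ... */

/* and on teardown, release the mapping: */
dma_unmap_resource(chan->device->dev, dma_dest, len, DMA_FROM_DEVICE, 0);
```

The dma_descriptor_unmap call in the PLX driver only drops the refcount on
the descriptor's unmap data; it is not what establishes or tears down the
IOVA mapping above.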
> I would expect a hang to be in handling completions
> from the DMA engine or something like that.
>
>> Again, PLX did not exhibit any issues on the Xeon system.
>
> Oh, I missed that. That puts a crinkle in my theory but, as you say, it
> could be a timing issue.
>
> Also, it's VERY strange that it would hang the entire system. That makes
> things very hard to debug...
Tell me about it! ;-)
Eric
Thread overview: 48+ messages
2018-08-09 18:14 IOAT DMA w/IOMMU Eric Pilmore
2018-08-09 18:43 ` Bjorn Helgaas
2018-08-09 18:51 ` Eric Pilmore
2018-08-09 19:35 ` Logan Gunthorpe
2018-08-09 19:47 ` Kit Chow
2018-08-09 20:11 ` Logan Gunthorpe
2018-08-09 20:57 ` Kit Chow
2018-08-09 21:11 ` Logan Gunthorpe
2018-08-09 21:47 ` Kit Chow
2018-08-09 22:40 ` Jiang, Dave
2018-08-09 22:48 ` Kit Chow
2018-08-09 22:50 ` Logan Gunthorpe
2018-08-09 23:00 ` Kit Chow
2018-08-10 16:02 ` Kit Chow
2018-08-10 16:23 ` Kit Chow
2018-08-10 16:24 ` Logan Gunthorpe
2018-08-10 16:24 ` Logan Gunthorpe
2018-08-10 16:31 ` Dave Jiang
2018-08-10 16:33 ` Logan Gunthorpe
2018-08-10 17:01 ` Dave Jiang
2018-08-10 17:15 ` Logan Gunthorpe
2018-08-10 17:46 ` Dave Jiang
2018-08-11 0:53 ` Kit Chow
2018-08-11 2:10 ` Logan Gunthorpe
2018-08-13 14:23 ` Kit Chow
2018-08-13 14:59 ` Robin Murphy
2018-08-13 15:21 ` Kit Chow
2018-08-13 23:30 ` Kit Chow
2018-08-13 23:39 ` Logan Gunthorpe
2018-08-13 23:48 ` Kit Chow
2018-08-13 23:50 ` Logan Gunthorpe
2018-08-14 13:47 ` Kit Chow
2018-08-14 14:03 ` Robin Murphy
2018-08-13 23:36 ` Kit Chow
2018-08-09 21:31 ` Eric Pilmore
2018-08-09 21:36 ` Logan Gunthorpe
2018-08-16 17:16 ` Kit Chow
2018-08-16 17:21 ` Logan Gunthorpe
2018-08-16 18:53 ` Kit Chow
2018-08-16 18:56 ` Logan Gunthorpe
2018-08-21 23:18 ` Eric Pilmore
2018-08-21 23:20 ` Logan Gunthorpe
2018-08-21 23:28 ` Eric Pilmore
2018-08-21 23:35 ` Logan Gunthorpe
2018-08-21 23:45 ` Eric Pilmore [this message]
2018-08-21 23:53 ` Logan Gunthorpe
2018-08-21 23:59 ` Eric Pilmore
2018-08-21 23:30 ` Eric Pilmore