From: Robin Murphy <robin.murphy@arm.com>
To: Ray Jui <ray.jui@broadcom.com>, Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>,
	Mark Rutland <mark.rutland@arm.com>,
	Joerg Roedel <joro@8bytes.org>,
	linux-arm-kernel@lists.infradead.org,
	iommu@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: Device address specific mapping of arm,mmu-500
Date: Tue, 6 Jun 2017 11:02:10 +0100
Message-ID: <bc636388-9388-3582-23b0-9f4271cae4b2@arm.com>
In-Reply-To: <498100e8-e94e-4a65-a9e1-ae59bd59fe2d@broadcom.com>

Hi Ray,

On 05/06/17 19:03, Ray Jui wrote:
> Hi Will/Robin,
> 
> Just want to check with you on this again. Do you have a very rough
> timeline for when the excessive locking in the IOMMU driver may be fixed
> (so we can restore the expected up-to-95% performance)?

I've currently got some experimental patches pushed out here:

    git://linux-arm.org/linux-rm  iommu/pgtable

So far, there's still one silly bug (which doesn't affect DMA ops usage)
and an awkward race for non-coherent table walks that need resolving
before I have anything to post properly; I hope that will be within the
next couple of weeks. In the meantime, though, it already seems to work
well enough in practice, so any feedback is welcome!
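
To give a rough idea of what "fixing the locking" means here (this is a
simplified, hypothetical sketch, not the actual patches), the essential
idea is to stop serialising every map/unmap on a domain-wide lock and
instead install leaf PTEs atomically, along these lines:

    /*
     * Illustrative only: claim an empty leaf PTE slot atomically instead
     * of taking a per-domain spinlock around the whole map operation.
     * The real io-pgtable changes also have to deal with table entries,
     * block splitting and TLB maintenance.
     */
    #include <linux/atomic.h>
    #include <linux/types.h>

    typedef u64 arm_lpae_iopte;

    static bool install_leaf_pte(arm_lpae_iopte *ptep, arm_lpae_iopte pte)
    {
        /* Succeeds only if the slot was still empty; the caller decides
         * how to handle a lost race (retry or report a conflict). */
        return cmpxchg64_relaxed(ptep, 0, pte) == 0;
    }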

Robin.

> 
> Thanks,
> 
> Ray
> 
> 
> On 5/31/17 10:32 AM, Ray Jui wrote:
>> Hi Will,
>>
>> On 5/31/17 5:44 AM, Will Deacon wrote:
>>> On Tue, May 30, 2017 at 11:13:36PM -0700, Ray Jui wrote:
>>>> I did a little more digging myself and I think I now understand what you
>>>> meant by identity mapping, i.e., configuring the MMU-500 with a 1:1 mapping
>>>> between the IOVA (DMA address) and the underlying physical address.
>>>>
>>>> I think that should work. In the end, due to this MSI write parsing issue
>>>> in our PCIe controller, the reason to use the IOMMU is to allow the cache
>>>> attributes (AxCACHE) of the MSI writes towards the GICv3 ITS to be rewritten
>>>> by the IOMMU to Device type, while leaving the rest of the inbound
>>>> reads/writes from/to DDR with more optimized cache attributes, so that I/O
>>>> coherency can remain enabled for the PCIe controller. In fact, the PCIe
>>>> controller itself is fully capable of DMA to/from the full address space of
>>>> our SoC, including both DDR and any device memory.
>>>>
>>>> The 1:1 mapping will still pose some translation overhead like you
>>>> suggested; however, the overhead of allocating page tables and locking will
>>>> be gone. This sounds like the best possible option I have currently.
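
(FWIW, purely for illustration, a rough and untested sketch of what
programming such a 1:1 map with mixed attributes through the generic IOMMU
API might look like. The addresses, sizes and function name below are
made-up placeholders, not anything from the actual SoC.)

    #include <linux/device.h>
    #include <linux/iommu.h>
    #include <linux/sizes.h>

    /* Hypothetical layout: DDR mapped 1:1 as cacheable, ITS doorbell as Device */
    #define DDR_BASE        0x80000000UL
    #define DDR_SIZE        SZ_2G
    #define ITS_DOORBELL    0x63c30000UL    /* placeholder address */

    static int build_identity_map(struct device *dev)
    {
        struct iommu_domain *dom = iommu_domain_alloc(dev->bus);
        int ret;

        if (!dom)
            return -ENOMEM;

        /* Bulk of DDR: IOVA == PA, Normal cacheable so I/O coherency still applies */
        ret = iommu_map(dom, DDR_BASE, DDR_BASE, DDR_SIZE,
                        IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE);
        if (ret)
            goto err_free;

        /* MSI doorbell window: IOVA == PA, but Device attributes for the ITS writes */
        ret = iommu_map(dom, ITS_DOORBELL, ITS_DOORBELL, SZ_64K,
                        IOMMU_WRITE | IOMMU_MMIO);
        if (ret)
            goto err_free;

        return iommu_attach_device(dom, dev);

    err_free:
        iommu_domain_free(dom);
        return ret;
    }
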
>>>
>>> It might end up being pretty invasive to work around a hardware bug, so
>>> we'll have to see what it looks like. Ideally, we could just use the SMMU
>>> for everything as-is and work on clawing back the lost performance (it
>>> should be possible to get ~95% of the perf if we sort out the locking, which
>>> we *are* working on).
>>>
>>
>> If 95% of performance can be achieved by fixing the locking in the
>> driver, then that's great news.
>>
>> If you have anything you want me to help test, feel free to send it out.
>> I will be more than happy to test it and let you know the performance
>> numbers :)
>>
>>>> May I ask how I should start trying to get this identity mapping to work
>>>> as an experiment and proof of concept? Any pointer or advice is highly
>>>> appreciated, as you can see I'm not very experienced with this. I found
>>>> that Will recently added IOMMU_DOMAIN_IDENTITY support to the arm-smmu
>>>> driver, but I suppose that bypasses the SMMU completely, instead of still
>>>> going through the MMU with a 1:1 translation. Is my understanding correct?
>>>
>>> Yes, I don't think IOMMU_DOMAIN_IDENTITY is what you need, because you
>>> actually need per-page control of memory attributes.
>>>
>>> Robin might have a better idea, but I think you'll have to hack dma-iommu.c
>>> so that you can have a version of the DMA ops that:
>>>
>>>   * Initialises the identity map (I guess as normal WB cacheable?)
>>>   * Reserves and maps the MSI region appropriately
>>>   * Just returns the physical address as the DMA address for map requests
>>>     (returning an error for the MSI region)
>>>   * Does nothing for unmap requests
>>>
>>> But my strong preference would be to fix the locking overhead from the
>>> SMMU so that the perf hit is acceptable.
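
(Again purely for illustration, a naive and untested sketch of the kind of
DMA ops hack outlined above, written against the ~4.12-era struct
dma_map_ops. The MSI window constants and names are hypothetical, and
map_sg/alloc/free and friends would need the same treatment.)

    #include <linux/dma-mapping.h>
    #include <linux/mm.h>
    #include <linux/sizes.h>

    /* Hypothetical MSI doorbell window reserved out of the identity map */
    #define MSI_IOVA_BASE   0x63c30000UL
    #define MSI_IOVA_SIZE   SZ_64K

    static dma_addr_t identity_map_page(struct device *dev, struct page *page,
                                        unsigned long offset, size_t size,
                                        enum dma_data_direction dir,
                                        unsigned long attrs)
    {
        phys_addr_t phys = ((phys_addr_t)page_to_pfn(page) << PAGE_SHIFT) + offset;

        /* IOVA == PA everywhere except the reserved MSI doorbell window */
        if (phys < MSI_IOVA_BASE + MSI_IOVA_SIZE && phys + size > MSI_IOVA_BASE)
            return DMA_ERROR_CODE;

        return (dma_addr_t)phys;
    }

    static void identity_unmap_page(struct device *dev, dma_addr_t handle,
                                    size_t size, enum dma_data_direction dir,
                                    unsigned long attrs)
    {
        /* Nothing to undo: the 1:1 mapping is static */
    }

    /* Would be wired up per-device, e.g. via set_dma_ops(dev, &identity_dma_ops) */
    static const struct dma_map_ops identity_dma_ops = {
        .map_page   = identity_map_page,
        .unmap_page = identity_unmap_page,
    };
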
>>
>> Yes, I agree, we want to be able to use the SMMU in the intended way. Do
>> you have a timeline for when the locking issue may be fixed (or
>> improved)? Depending on the timeline, on our side we may still need to
>> go with identity mapping as a temporary solution until the fix lands.
>>
>>>
>>> Will
>>>
>>
>> Thanks,
>>
>> Ray
>>
