From: Tomasz Figa <tfiga@chromium.org>
To: Robin Murphy <robin.murphy@arm.com>
Cc: Yong Zhi <yong.zhi@intel.com>,
	linux-media@vger.kernel.org,
	Sakari Ailus <sakari.ailus@linux.intel.com>,
	"Zheng, Jian Xu" <jian.xu.zheng@intel.com>,
	"Mani, Rajmohan" <rajmohan.mani@intel.com>,
	"Toivonen, Tuukka" <tuukka.toivonen@intel.com>,
	Joerg Roedel <joro@8bytes.org>,
	"open list:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>
Subject: Re: [PATCH 03/12] intel-ipu3: Add DMA API implementation
Date: Thu, 8 Jun 2017 23:35:51 +0900	[thread overview]
Message-ID: <CAAFQd5AWGN_qSXGG32D-eWKZRoRme+XCD9v1r8qJ5bthtS9z9w@mail.gmail.com> (raw)
In-Reply-To: <73992991-65a9-915b-a450-b23aeb3baaed@arm.com>

On Thu, Jun 8, 2017 at 10:22 PM, Robin Murphy <robin.murphy@arm.com> wrote:
> On 07/06/17 10:47, Tomasz Figa wrote:
>> Hi Yong,
>>
>> +Robin, Joerg, IOMMU ML
>>
>> Please see my comments inline.
>>
>> On Tue, Jun 6, 2017 at 5:39 AM, Yong Zhi <yong.zhi@intel.com> wrote:
[snip]
>>> +
>>> +/* End of things adapted from arch/arm/mm/dma-mapping.c */
>>> +static void ipu3_dmamap_sync_single_for_cpu(struct device *dev,
>>> +                                           dma_addr_t dma_handle, size_t size,
>>> +                                           enum dma_data_direction dir)
>>> +{
>>> +       struct ipu3_mmu *mmu = to_ipu3_mmu(dev);
>>> +       dma_addr_t daddr = iommu_iova_to_phys(mmu->domain, dma_handle);
>>> +
>>> +       clflush_cache_range(phys_to_virt(daddr), size);
>>
>> You might need to consider another IOMMU on the way here. Generally,
>> given that daddr is your MMU DMA address (not necessarily CPU physical
>> address), you should be able to call
>>
>> dma_sync_single_for_cpu(<your pci device>, daddr, size, dir)
>
> I'd hope that this IPU complex is some kind of embedded endpoint thing
> that bypasses the VT-d IOMMU or is always using its own local RAM,
> because it would be pretty much unworkable otherwise.

It uses system RAM and, as far as I understand, by default it operates
without the VT-d IOMMU, and that's how it's used right now. I'm
suggesting the VT-d IOMMU as a way to further strengthen security and
error resilience in the future (since the IPU complex is non-coherent
and also runs closed-source firmware).

> The whole
> infrastructure isn't really capable of dealing with nested IOMMUs, and
> nested DMA ops would be an equally horrible idea.

Could you elaborate a bit more on this? I think we should be able to
deal with it in the way I suggested before:

a) the PCI device would use the system DMA ops,
b) the PCI device would implement a secondary bus for which it would
provide its own DMA and IOMMU ops,
c) a secondary device would be registered on the secondary bus,
d) all memory for the IPU would be managed on behalf of the secondary device.

In fact, the driver is already designed so that all of the points
above hold. If I'm not missing something, the only significant missing
piece is calling into the system DMA ops from the IPU DMA ops.
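
To make this more concrete, here is a rough sketch of what I mean for
the sync_single_for_cpu callback. Please treat it purely as an
illustration, written under the assumption that struct ipu3_mmu keeps
a pointer back to the parent PCI device (mmu->pci_dev below), which I
made up for the example:

static void ipu3_dmamap_sync_single_for_cpu(struct device *dev,
					    dma_addr_t dma_handle, size_t size,
					    enum dma_data_direction dir)
{
	struct ipu3_mmu *mmu = to_ipu3_mmu(dev);
	/* Translate the IPU MMU IOVA into the address seen on the PCI side. */
	dma_addr_t daddr = iommu_iova_to_phys(mmu->domain, dma_handle);

	/*
	 * Forward the sync to the parent PCI device (mmu->pci_dev is an
	 * assumption here), so that the system DMA ops take care of CPU
	 * cache maintenance and of any further translation, e.g. VT-d.
	 */
	dma_sync_single_for_cpu(&mmu->pci_dev->dev, daddr, size, dir);
}

The other sync and map/unmap callbacks would forward to the system DMA
ops in the same way.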

Best regards,
Tomasz
