From: Joerg Roedel <joro@8bytes.org>
To: Robin Murphy <robin.murphy@arm.com>
Cc: iommu@lists.linux-foundation.org,
	Will Deacon <will.deacon@arm.com>,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH 2/3] iommu/arm-smmu: Add initial driver support for ARM SMMUv3 devices
Date: Fri, 29 May 2015 16:40:43 +0200	[thread overview]
Message-ID: <20150529144043.GA20384@8bytes.org> (raw)
In-Reply-To: <55684F1C.3050702@arm.com>

On Fri, May 29, 2015 at 12:35:56PM +0100, Robin Murphy wrote:
> The trouble with this is, what about the CPU page size? Say you have
> some multimedia subsystem with its own integrated SMMU and for that
> they've only implemented the 16K granule scheme because it works
> best for the video hardware (and the GPU driver is making direct
> IOMMU API calls to remap carved-out RAM rather than using
> DMA-mapping). Now, the SMMU on the compute side of the SoC serving
> the general peripherals will be rendered useless by bumping the
> system-wide minimum page size up to 16K, because it then can't map
> that scatterlist of discontiguous 4K pages that the USB controller
> needs...
> 
> I think this really represents another push to get away from (or at
> least around) the page-at-a-time paradigm - if the IOMMU API itself
> wasn't too fussed about page sizes and could let drivers handle the
> full map/unmap requests however they see fit, I think we could
> bypass a lot of these issues. We've already got the Intel IOMMU
> driver doing horrible hacks with the pgsize_bitmap to cheat the
> system, I'm sure we don't want to add any more of that. How about
> something like the below diff as a first step?
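[A minimal sketch of the direction Robin describes, in plain C with invented names (struct sg_entry, my_iommu_map_sg are stand-ins, not kernel structures): the core hands the driver the whole scatterlist and the driver carves it up by its own hardware granule, instead of the core splitting the request into pgsize_bitmap-sized pieces.]

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins, not the real kernel structures. */
struct sg_entry { unsigned long phys; size_t len; };

struct my_iommu {
	size_t granule;	/* the only page size this hardware supports */
	size_t mapped;	/* bytes successfully mapped, for illustration */
};

/*
 * "Full map" callback sketch: the driver receives the entire request
 * and decides how to chunk it, using whatever granule it implements.
 */
static int my_iommu_map_sg(struct my_iommu *smmu, unsigned long iova,
			   const struct sg_entry *sg, int nents)
{
	for (int i = 0; i < nents; i++) {
		/* Each entry must be aligned to the hardware granule. */
		if (sg[i].phys % smmu->granule || sg[i].len % smmu->granule)
			return -1;
		for (size_t off = 0; off < sg[i].len; off += smmu->granule) {
			/* ... write one PTE mapping iova to sg[i].phys + off ... */
			smmu->mapped += smmu->granule;
			iova += smmu->granule;
		}
	}
	return 0;
}
```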

Moving functionality out of the iommu core code into the drivers is the
wrong direction imo. It is better to solve it with something like

	struct iommu_domain *iommu_domain_alloc_for_group(struct iommu_group *group);

which gets us a domain that can only be assigned to that particular
group. Since there is a clear one-to-many relationship between a
hardware iommu and the groups of devices behind it, we could propagate
the pgsize_bitmap from the iommu to the group and then to the domain.

Domains allocated via iommu_domain_alloc() would get the merged
pgsize_bitmap like I described in my previous mail.
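[The merged bitmap would presumably be the intersection (bitwise AND) of every hardware instance's pgsize_bitmap, since only sizes every instance can map are safe system-wide; the exact merge rule from the earlier mail is not quoted here, so this is an assumption. The sketch also shows where Robin's 4K/16K example breaks down: the two granule schemes share no common page size, so the merged bitmap is empty.]

```c
#include <assert.h>

/* Bit n set means a 2^n-byte page size is supported; these are the
 * leaf/block sizes of the ARM 4K and 16K granule schemes (larger
 * block sizes omitted for brevity). */
#define SZ_4K	(1UL << 12)
#define SZ_2M	(1UL << 21)
#define SZ_16K	(1UL << 14)
#define SZ_32M	(1UL << 25)

/* Assumed merge rule: keep only sizes common to both instances. */
static unsigned long merged_pgsize_bitmap(unsigned long a, unsigned long b)
{
	return a & b;
}
```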

But to make this happen we need a representation of single hardware
iommu instances in the iommu core first.


	Joerg

Thread overview: 40+ messages in thread
2015-05-08 18:00 [PATCH 0/3] iommu/arm-smmu: Add driver for ARM SMMUv3 devices Will Deacon
2015-05-08 18:00 ` [PATCH 1/3] Documentation: dt-bindings: Add device-tree binding for ARM SMMUv3 IOMMU Will Deacon
2015-05-08 18:00 ` [PATCH 2/3] iommu/arm-smmu: Add initial driver support for ARM SMMUv3 devices Will Deacon
2015-05-12  7:40   ` leizhen
2015-05-12 16:55     ` Will Deacon
2015-05-13  8:33       ` leizhen
2015-05-21 11:25         ` Will Deacon
2015-05-25  2:07           ` leizhen
2015-05-26 16:12             ` Will Deacon
2015-05-27  9:12               ` leizhen
2015-05-19 15:24   ` Joerg Roedel
2015-05-20 17:09     ` Will Deacon
2015-05-29  6:43       ` Joerg Roedel
2015-05-29 11:35         ` Robin Murphy
2015-05-29 14:40           ` Joerg Roedel [this message]
2015-06-01  9:40             ` Will Deacon
2015-06-02  7:39               ` Joerg Roedel
2015-06-02  9:47                 ` Will Deacon
2015-06-02 18:43                   ` Joerg Roedel
2015-05-08 18:00 ` [PATCH 3/3] drivers/vfio: Allow type-1 IOMMU instantiation on top of an ARM SMMUv3 Will Deacon