From: John Garry <john.garry@huawei.com>
To: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: robh@kernel.org, Alex Williamson <alex.williamson@redhat.com>,
	Saravana Kannan <saravanak@google.com>,
	Will Deacon <will@kernel.org>, Marc Zyngier <maz@kernel.org>,
	Sudeep Holla <sudeep.holla@arm.com>,
	Linuxarm <linuxarm@huawei.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	iommu <iommu@lists.linux-foundation.org>,
	"Guohanjun \(Hanjun Guo\)" <guohanjun@huawei.com>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Robin Murphy <robin.murphy@arm.com>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: arm64 iommu groups issue
Date: Mon, 15 Jun 2020 08:35:45 +0100	[thread overview]
Message-ID: <20fe20d8-8c2e-642f-019c-3b92e7dbd31c@huawei.com> (raw)
In-Reply-To: <20200612143006.GA4905@red-moon.cambridge.arm.com>

On 12/06/2020 15:30, Lorenzo Pieralisi wrote:
> On Mon, Feb 17, 2020 at 12:08:48PM +0000, John Garry wrote:
>>>>
>>>> Right, and even worse is that it relies on the port driver even
>>>> existing at all.
>>>>
>>>> All this iommu group assignment should be taken outside device
>>>> driver probe paths.
>>>>
>>>> However we could still consider device links for sync'ing the SMMU
>>>> and each device probing.
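
For illustration, the sort of link that could enforce that ordering - a
rough sketch only, assuming the code already holds both struct device
pointers; this is not what any current code does:

#include <linux/device.h>

/*
 * Sketch: defer the endpoint's probe until its SMMU has bound, by
 * making the SMMU a supplier of the endpoint device.
 */
static int link_dev_to_smmu(struct device *dev, struct device *smmu_dev)
{
	struct device_link *link;

	link = device_link_add(dev, smmu_dev,
			       DL_FLAG_AUTOREMOVE_CONSUMER);
	if (!link)
		return -ENODEV;

	/*
	 * A managed link to a not-yet-bound supplier makes the driver
	 * core return -EPROBE_DEFER when dev is probed.
	 */
	return 0;
}

The awkward part for IORT would be finding somewhere to create such a
link early enough, before the endpoint probes.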
>>>
>>> Yes, we should get that for DT now thanks to the of_devlink stuff, but
>>> cooking up some equivalent for IORT might be worthwhile.
>>
>> It doesn't solve this problem, but at least we could remove the iommu_ops
>> check in iort_iommu_xlate().
>>
>> We would need to carve out a path from pci_device_add() or even device_add()
>> to solve all cases.
>>
>>>
>>>>> Another thought that crosses my mind is that when pci_device_group()
>>>>> walks up to the point of ACS isolation and doesn't find an existing
>>>>> group, it can still infer that everything it walked past *should* be put
>>>>> in the same group it's then eventually going to return. Unfortunately I
>>>>> can't see an obvious way for it to act on that knowledge, though, since
>>>>> recursive iommu_probe_device() is unlikely to end well.
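
For reference, a paraphrased sketch of that walk - not the exact
iommu.c code, and SKETCH_ACS_FLAGS here stands in for the internal
REQ_ACS_FLAGS definition:

#include <linux/iommu.h>
#include <linux/pci.h>

/* ACS capabilities required for isolation. */
#define SKETCH_ACS_FLAGS (PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CC | PCI_ACS_UF)

/*
 * Climb the PCI hierarchy until ACS isolates the path back down,
 * reusing any existing group found on the way up.  Everything walked
 * past before that point "should" share the group eventually returned.
 */
static struct iommu_group *acs_walk_sketch(struct pci_dev *pdev)
{
	struct pci_bus *bus;

	for (bus = pdev->bus; bus && !pci_is_root_bus(bus); bus = bus->parent) {
		struct iommu_group *group;

		if (!bus->self)
			continue;

		/* Stop once peer-to-peer DMA is blocked from here up. */
		if (pci_acs_path_enabled(bus->self, NULL, SKETCH_ACS_FLAGS))
			break;

		group = iommu_group_get(&bus->self->dev);
		if (group)
			return group;
	}

	return NULL;	/* the caller would then allocate a new group */
}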
>>>>
>>
>> [...]
>>
>>>> And this looks to be the reason why the current
>>>> iommu_bus_init()->bus_for_each_device(..., add_iommu_group) walk
>>>> also fails.
>>>
>>> Of course, just adding a 'correct' add_device replay without the
>>> of_xlate process doesn't help at all. No wonder this looked suspiciously
>>> simpler than where the first idea left off...
>>>
>>> (on reflection, the core of this idea seems to be recycling the existing
>>> iommu_bus_init walk rather than building up a separate "waiting list",
>>> while forgetting that that wasn't the difficult part of the original
>>> idea anyway)
>>
>> We could still use a bus walk to add the group per iommu, but we would need
>> an additional check to ensure the device is associated with the IOMMU.
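
Roughly something like this, say - sketch only, with invented names,
and assuming the fwspec's iommu_fwnode is enough to identify the
instance:

#include <linux/device.h>
#include <linux/fwnode.h>
#include <linux/iommu.h>
#include <linux/pci.h>
#include <linux/platform_device.h>

/*
 * Replay device/group setup only for devices mastered through the
 * SMMU instance which has just probed, identified by its fwnode.
 */
static int add_group_for_instance(struct device *dev, void *data)
{
	struct fwnode_handle *smmu_fwnode = data;
	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);

	/* Skip devices which are not behind this SMMU instance. */
	if (!fwspec || fwspec->iommu_fwnode != smmu_fwnode)
		return 0;

	return iommu_probe_device(dev);
}

/* Called once per SMMU instance, after that instance has probed. */
static void replay_for_smmu(struct fwnode_handle *smmu_fwnode)
{
	bus_for_each_device(&platform_bus_type, NULL, smmu_fwnode,
			    add_group_for_instance);
	bus_for_each_device(&pci_bus_type, NULL, smmu_fwnode,
			    add_group_for_instance);
}

It doesn't address the of_xlate ordering problem mentioned above, but
it would at least avoid touching devices whose SMMU has not probed yet.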
>>
>>>
>>>> On the current code mentioned, the principle seems wrong to me - we
>>>> call bus_for_each_device(..., add_iommu_group) for the first SMMU in
>>>> the system which probes, but we attempt add_iommu_group() for all
>>>> devices on the bus, even though the SMMU for a given device may not
>>>> yet have probed.
>>>
>>> Yes, iommu_bus_init() is one of the places still holding a
>>> deeply-ingrained assumption that the ops go live for all IOMMU instances
>>> at once, which is what warranted the further replay in
>>> of_iommu_configure() originally. Moving that out of
>>> of_platform_device_create() to support probe deferral is where the
>>> trouble really started.
>>
>> I'm not too familiar with the history here, but could this be reverted now
>> with the introduction of of_devlink stuff?
> 
> Hi John,

Hi Lorenzo,

> 
> have we managed to reach a consensus on this thread on how to solve
> the issue ? 

No, not really. Robin and I tried a couple of quick things previously,
but they did not come to much, as above.

> Asking because this thread seems stalled - I am keen on
> getting it fixed.

I haven't spent more time on this. From what I was hearing last time,
this issue had been ticketed internally at Arm, so I was waiting for
that work to be picked up before re-engaging.

Thanks,
John

Thread overview: 11+ messages
2019-09-19  8:43 arm64 iommu groups issue John Garry
2019-09-19 13:25 ` Robin Murphy
2019-09-19 14:35   ` John Garry
2019-11-04 12:18     ` John Garry
2020-02-13 15:49     ` John Garry
2020-02-13 19:40       ` Robin Murphy
2020-02-14 14:09         ` John Garry
2020-02-14 18:35           ` Robin Murphy
2020-02-17 12:08             ` John Garry
2020-06-12 14:30               ` Lorenzo Pieralisi
2020-06-15  7:35                 ` John Garry [this message]
