* Unshared IOMMU issues
@ 2017-02-15 15:52 Oleksandr Tyshchenko
  2017-02-15 16:22 ` Jan Beulich
  0 siblings, 1 reply; 17+ messages in thread
From: Oleksandr Tyshchenko @ 2017-02-15 15:52 UTC (permalink / raw)
  To: Xen Devel; +Cc: Julien Grall, Stefano Stabellini, Jan Beulich

Hi, all.

As suggested by Julien on IRC, I am opening this thread.

Currently, I am trying to add support for the IPMMU in Xen.
It is a VMSA-compatible IOMMU integrated in the newest Renesas SoCs (ARM).
This IPMMU can't share page tables with the CPU, since it uses a
stage-1 page table format, unlike the CPU, which uses stage-2.
So the IPMMU driver keeps its own page tables and maintains them
itself, like other "unshared IOMMU" drivers usually do.

To pass all mapping updates to the IOMMU, I slightly modified the P2M
code (p2m_set_entry) on ARM to call iommu_map_page/iommu_unmap_page
when (need_iommu(p2m->domain) && !iommu_use_hap_pt(p2m->domain)) is true.
I even optimized things a bit by adding an iommu_map_pages/iommu_unmap_pages
API and map_pages/unmap_pages platform ops for passing a whole memory
block (nr pages) to the IOMMU code at once. But that is out of scope for
this thread.
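Roughly, the hook looks like this (a simplified sketch of my change,
not the exact code: "valid", "gfn", "mfn" and "rc" stand for the values
already at hand in p2m_set_entry, and error handling and superpage
orders are omitted):

    /* Sketch: at the end of p2m_set_entry() on ARM, mirror the stage-2
     * update into the IOMMU page table when tables are not shared. */
    if ( need_iommu(p2m->domain) && !iommu_use_hap_pt(p2m->domain) )
    {
        if ( valid )    /* the new entry maps something */
            rc = iommu_map_page(p2m->domain, gfn, mfn,
                                IOMMUF_readable | IOMMUF_writable);
        else            /* the mapping is being removed */
            rc = iommu_unmap_page(p2m->domain, gfn);
    }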

I faced several generic problems that prevented me from making the
IPMMU driver (but it might be any other "unshared IOMMU" driver) happy
inside Xen on ARM.
Most of them I have already resolved somehow, just to see that things
work for me, but I still have doubts about how to do it the right way.

So, to allow the P2M core to update the IOMMU mapping from the first
"set_entry", and to have the "unshared IOMMU" driver ready to handle
IOMMU mapping updates, I do two things:
1. I always allocate the IOMMU page table in iommu_domain_init() for
every domain, even if the domain won't have any assigned devices in the
future.
The main reason I do so is to not miss any IOMMU mapping updates from
the P2M code (RAM, MMIOs, etc.). The IOMMU driver has to be ready to
process IOMMU mapping updates from the *very beginning*.
Of course, the IOMMU page table is completely deleted in iommu_teardown().
But allocating an IOMMU page table that won't really be used by the
domain doesn't look good.
There is an arch_iommu_populate_page_table() solution that could help
in such a situation, but it does not look suitable for ARM because we
have no way to translate an MFN to a GFN, as Julien pointed out to me
on IRC.

2. The other thing I do is explicitly set the need_iommu flag during
the arch_iommu_domain_init() call in the ARM code when
(iommu_enabled && !is_hardware_domain(d) && !iommu_use_hap_pt(d)) is true.
I do that since, in the domU case, the need_iommu flag is set during
device assignment, which is too late: there have already been many P2M
mapping updates by the time the first device is assigned. (For dom0 we
force the need_iommu flag anyway.)
I see a way this action could be dropped: for example, don't rely on
the need_iommu flag before updating the IOMMU mapping from the P2M, and
check the iommu_enabled flag instead.
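In code, the change is nothing more than something like this (a sketch,
assuming the current ARM arch_iommu_domain_init(), which today just
calls iommu_dt_domain_init()):

    /* Sketch: force need_iommu at domain init on ARM when the IOMMU
     * cannot share the P2M page table. */
    int arch_iommu_domain_init(struct domain *d)
    {
        if ( iommu_enabled && !is_hardware_domain(d) &&
             !iommu_use_hap_pt(d) )
            d->need_iommu = 1;

        return iommu_dt_domain_init(d);
    }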

I think, but I am not 100% sure, that we could avoid the actions
above if we had knowledge about device assignment for a particular
domain before making any updates to the P2M.

Could you please suggest the right way to resolve these problems?

-- 
Regards,

Oleksandr Tyshchenko


* Re: Unshared IOMMU issues
  2017-02-15 15:52 Unshared IOMMU issues Oleksandr Tyshchenko
@ 2017-02-15 16:22 ` Jan Beulich
  2017-02-15 17:43   ` Oleksandr Tyshchenko
  0 siblings, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2017-02-15 16:22 UTC (permalink / raw)
  To: Oleksandr Tyshchenko; +Cc: Julien Grall, Stefano Stabellini, Xen Devel

>>> On 15.02.17 at 16:52, <olekstysh@gmail.com> wrote:
> I think, but I am not 100% sure, that we could avoid the actions
> above if we had knowledge about device assignment for a particular
> domain before making any updates to the P2M.

Well, one could in theory make this work for boot time assigned
devices, but since this won't cover runtime assigned (hotplugged)
ones, I don't think this would gain you anything.

> Could you please suggest the right way to resolve these problems?

Well, you described what you do (with quite a bit of ARM terminology
I don't understand), but I'm not sure you made explicit what problem(s)
you need to solve. I'm sorry if it's just me not understanding what you
wrote.

Jan



* Re: Unshared IOMMU issues
  2017-02-15 16:22 ` Jan Beulich
@ 2017-02-15 17:43   ` Oleksandr Tyshchenko
  2017-02-16  9:36     ` Jan Beulich
  0 siblings, 1 reply; 17+ messages in thread
From: Oleksandr Tyshchenko @ 2017-02-15 17:43 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Julien Grall, Stefano Stabellini, Xen Devel

Hi, Jan.

On Wed, Feb 15, 2017 at 6:22 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 15.02.17 at 16:52, <olekstysh@gmail.com> wrote:
>> I think, but I am not 100% sure, that we could avoid the actions
>> above if we had knowledge about device assignment for a particular
>> domain before making any updates to the P2M.
>
> Well, one could in theory make this work for boot time assigned
> devices, but since this won't cover runtime assigned (hotplugged)
> ones, I don't think this would gain you anything.

Indeed, I didn't take into account hotplugged devices.

>
>> Could you please suggest the right way to resolve these problems?
>
> Well, you described what you do (with quite a bit of ARM terminology
> I don't understand), but I'm not sure you made explicit what problem(s)
> you need to solve. I'm sorry if it's just me not understanding what you
> wrote.
Ok.

Sorry if I was unclear. Let me rephrase a bit.

I described some generic problems I faced while playing with a new
IOMMU driver (one that doesn't share page tables with the CPU, unlike
the existing SMMU driver) in Xen on ARM.
I described how I had resolved them somehow, just to see things working.
Now I want to hear the community's opinion on whether these changes are
correct and might be acceptable in general, or whether they should be
done in another way.

1.
I need:
Allow the P2M core on ARM to update the IOMMU mapping from the first "p2m_set_entry".
I do:
I explicitly set the need_iommu flag for *every* guest domain during
iommu_domain_init() on ARM when the page table is not shared.
At that moment I have no knowledge of whether any device will be
assigned to this domain or not. I just want to receive all mapping
updates from the P2M code. The P2M will update the IOMMU mapping only
when need_iommu is set and the page table is not shared.
I have doubts:
Is it correct to just force the need_iommu flag?
Or maybe another flag should be introduced?
Or maybe we don't need to check the need_iommu flag before updating the
IOMMU mapping in P2M code, and iommu_enabled would be enough?

2.
I need:
Allow the IOMMU driver to be ready to handle IOMMU mapping updates from
the first "p2m_set_entry".
I do:
I always allocate the IOMMU page table during iommu_domain_init() for
every domain, even if the domain won't have any assigned devices in the
future. I don't wait for iommu_construct().
I have doubts:
Is it correct? It might just be wasting memory and CPU time if the
domain doesn't have any assigned devices in the future.

>
> Jan
>



-- 
Regards,

Oleksandr Tyshchenko


* Re: Unshared IOMMU issues
  2017-02-15 17:43   ` Oleksandr Tyshchenko
@ 2017-02-16  9:36     ` Jan Beulich
  2017-02-16 15:02       ` Oleksandr Tyshchenko
  0 siblings, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2017-02-16  9:36 UTC (permalink / raw)
  To: Oleksandr Tyshchenko; +Cc: Julien Grall, Stefano Stabellini, Xen Devel

>>> On 15.02.17 at 18:43, <olekstysh@gmail.com> wrote:
> 1.
> I need:
> Allow the P2M core on ARM to update the IOMMU mapping from the first "p2m_set_entry".
> I do:
> I explicitly set the need_iommu flag for *every* guest domain during
> iommu_domain_init() on ARM when the page table is not shared.
> At that moment I have no knowledge of whether any device will be
> assigned to this domain or not. I just want to receive all mapping
> updates from the P2M code. The P2M will update the IOMMU mapping only
> when need_iommu is set and the page table is not shared.
> I have doubts:
> Is it correct to just force the need_iommu flag?

No, I don't think so. This is a waste of a measurable amount of
resources when page tables aren't shared.

> Or maybe another flag should be introduced?

Not sure what you think of here. Where's the problem with building
IOMMU page tables at the time the first device gets assigned, just
like x86 does?

> Or maybe we don't need to check the need_iommu flag before updating the
> IOMMU mapping in P2M code, and iommu_enabled would be enough?

No, afaict that would again mean maintaining IOMMU page tables
regardless of whether they're needed.

> 2.
> I need:
> Allow the IOMMU driver to be ready to handle IOMMU mapping updates from
> the first "p2m_set_entry".

Why (see also the question above)?

> I do:
> I always allocate the IOMMU page table during iommu_domain_init() for
> every domain, even if the domain won't have any assigned devices in the
> future. I don't wait for iommu_construct().
> I have doubts:
> Is it correct? It might just be wasting memory and CPU time if the
> domain doesn't have any assigned devices in the future.

Indeed.

Jan



* Re: Unshared IOMMU issues
  2017-02-16  9:36     ` Jan Beulich
@ 2017-02-16 15:02       ` Oleksandr Tyshchenko
  2017-02-16 15:52         ` Jan Beulich
  0 siblings, 1 reply; 17+ messages in thread
From: Oleksandr Tyshchenko @ 2017-02-16 15:02 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Julien Grall, Stefano Stabellini, Xen Devel

Hi, Jan.

On Thu, Feb 16, 2017 at 11:36 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 15.02.17 at 18:43, <olekstysh@gmail.com> wrote:
>> 1.
>> I need:
>> Allow the P2M core on ARM to update the IOMMU mapping from the first "p2m_set_entry".
>> I do:
>> I explicitly set the need_iommu flag for *every* guest domain during
>> iommu_domain_init() on ARM when the page table is not shared.
>> At that moment I have no knowledge of whether any device will be
>> assigned to this domain or not. I just want to receive all mapping
>> updates from the P2M code. The P2M will update the IOMMU mapping only
>> when need_iommu is set and the page table is not shared.
>> I have doubts:
>> Is it correct to just force the need_iommu flag?
>
> No, I don't think so. This is a waste of a measurable amount of
> resources when page tables aren't shared.
>
>> Or maybe another flag should be introduced?
>
> Not sure what you think of here. Where's the problem with building
> IOMMU page tables at the time the first device gets assigned, just
> like x86 does?
OK, I have already had a look at arch_iommu_populate_page_table() for x86.
I don't see at the moment how this solution can help me.
There are at least two points that prevent me from doing a similar thing:
1. To create an IOMMU mapping I need both the mfn and the gfn (+ flags).
I am able to get the mfn only. How can I find the corresponding gfn?
2. d->page_list seems to contain only domain RAM (not 100% sure).
Where can I get the other regions (MMIOs, etc.)?
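For reference, as far as I understand it, the x86 variant boils down to
something like this (a heavily simplified sketch; the real code also
checks page types and handles errors and preemption, which I drop here):

    /* Sketch of the x86 arch_iommu_populate_page_table() idea: walk
     * the domain's page list and re-create all IOMMU mappings. */
    struct page_info *page;

    spin_lock(&d->page_alloc_lock);
    page_list_for_each ( page, &d->page_list )
    {
        unsigned long mfn = page_to_mfn(page);
        unsigned long gfn = mfn_to_gmfn(d, mfn); /* no ARM equivalent */

        if ( gfn != INVALID_MFN )
            iommu_map_page(d, gfn, mfn,
                           IOMMUF_readable | IOMMUF_writable);
    }
    spin_unlock(&d->page_alloc_lock);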

>
>> Or maybe we don't need to check the need_iommu flag before updating the
>> IOMMU mapping in P2M code, and iommu_enabled would be enough?
>
> No, afaict that would again mean maintaining IOMMU page tables
> regardless of whether they're needed.
>
>> 2.
>> I need:
>> Allow the IOMMU driver to be ready to handle IOMMU mapping updates from
>> the first "p2m_set_entry".
>
> Why (see also the question above)?
I explained above. Unfortunately, I don't see how I can get all the
iova-to-pa mappings that took place for this domain from the start on
ARM.

>
>> I do:
>> I always allocate the IOMMU page table during iommu_domain_init() for
>> every domain, even if the domain won't have any assigned devices in the
>> future. I don't wait for iommu_construct().
>> I have doubts:
>> Is it correct? It might just be wasting memory and CPU time if the
>> domain doesn't have any assigned devices in the future.
>
> Indeed.
>
> Jan
>



-- 
Regards,

Oleksandr Tyshchenko


* Re: Unshared IOMMU issues
  2017-02-16 15:02       ` Oleksandr Tyshchenko
@ 2017-02-16 15:52         ` Jan Beulich
  2017-02-16 16:11           ` Julien Grall
  0 siblings, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2017-02-16 15:52 UTC (permalink / raw)
  To: Oleksandr Tyshchenko; +Cc: Julien Grall, Stefano Stabellini, Xen Devel

>>> On 16.02.17 at 16:02, <olekstysh@gmail.com> wrote:
> On Thu, Feb 16, 2017 at 11:36 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 15.02.17 at 18:43, <olekstysh@gmail.com> wrote:
>>> 1.
>>> I need:
>>> Allow the P2M core on ARM to update the IOMMU mapping from the first "p2m_set_entry".
>>> I do:
>>> I explicitly set the need_iommu flag for *every* guest domain during
>>> iommu_domain_init() on ARM when the page table is not shared.
>>> At that moment I have no knowledge of whether any device will be
>>> assigned to this domain or not. I just want to receive all mapping
>>> updates from the P2M code. The P2M will update the IOMMU mapping only
>>> when need_iommu is set and the page table is not shared.
>>> I have doubts:
>>> Is it correct to just force the need_iommu flag?
>>
>> No, I don't think so. This is a waste of a measurable amount of
>> resources when page tables aren't shared.
>>
>>> Or maybe another flag should be introduced?
>>
>> Not sure what you think of here. Where's the problem with building
>> IOMMU page tables at the time the first device gets assigned, just
>> like x86 does?
> OK, I have already had a look at arch_iommu_populate_page_table() for x86.
> I don't see at the moment how this solution can help me.
> There are at least two points that prevent me from doing a similar thing:
> 1. To create an IOMMU mapping I need both the mfn and the gfn (+ flags).
> I am able to get the mfn only. How can I find the corresponding gfn?

As the x86 one shows, via mfn_to_gmfn(). If ARM doesn't have
this, perhaps it needs to gain it?

> 2. d->page_list seems to contain only domain RAM (not 100% sure).
> Where can I get the other regions (MMIOs, etc.)?

These necessarily are being tracked for the domain, so you need to
take them from wherever they're stored on ARM.

Jan



* Re: Unshared IOMMU issues
  2017-02-16 15:52         ` Jan Beulich
@ 2017-02-16 16:11           ` Julien Grall
  2017-02-16 16:34             ` Jan Beulich
  0 siblings, 1 reply; 17+ messages in thread
From: Julien Grall @ 2017-02-16 16:11 UTC (permalink / raw)
  To: Jan Beulich, Oleksandr Tyshchenko; +Cc: nd, Stefano Stabellini, Xen Devel

Hi Jan,

On 16/02/17 15:52, Jan Beulich wrote:
>>>> On 16.02.17 at 16:02, <olekstysh@gmail.com> wrote:
>> On Thu, Feb 16, 2017 at 11:36 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>> On 15.02.17 at 18:43, <olekstysh@gmail.com> wrote:
>>>> 1.
>>>> I need:
>>>> Allow the P2M core on ARM to update the IOMMU mapping from the first "p2m_set_entry".
>>>> I do:
>>>> I explicitly set the need_iommu flag for *every* guest domain during
>>>> iommu_domain_init() on ARM when the page table is not shared.
>>>> At that moment I have no knowledge of whether any device will be
>>>> assigned to this domain or not. I just want to receive all mapping
>>>> updates from the P2M code. The P2M will update the IOMMU mapping only
>>>> when need_iommu is set and the page table is not shared.
>>>> I have doubts:
>>>> Is it correct to just force the need_iommu flag?
>>>
>>> No, I don't think so. This is a waste of a measurable amount of
>>> resources when page tables aren't shared.
>>>
>>>> Or maybe another flag should be introduced?
>>>
>>> Not sure what you think of here. Where's the problem with building
>>> IOMMU page tables at the time the first device gets assigned, just
>>> like x86 does?
>> OK, I have already had a look at arch_iommu_populate_page_table() for x86.
>> I don't see at the moment how this solution can help me.
>> There are at least two points that prevent me from doing a similar thing:
>> 1. To create an IOMMU mapping I need both the mfn and the gfn (+ flags).
>> I am able to get the mfn only. How can I find the corresponding gfn?
>
> As the x86 one shows, via mfn_to_gmfn(). If ARM doesn't have
> this, perhaps it needs to gain it?

Looking at the x86 implementation, mfn_to_gmfn uses a table for that,
indexed by the MFN. This requires virtual address space, which is
already scarce on ARM32, and also uses physical memory.

I am not convinced this is the right thing to do on ARM, as the only
user so far would be the IOMMU code.

Another solution would be to go through the stage-2 page table and
replicate all the mappings.
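Something along these lines (pseudo-code sketch only: the
"for_each_p2m_mapping" iterator and "p2m_type_is_writable" helper are
made-up names here, and a real implementation would walk the stage-2
tables directly):

    /* Sketch: replicate every present stage-2 mapping into the IOMMU
     * page table, e.g. when the first device gets assigned. */
    for_each_p2m_mapping ( d, gfn, mfn, type )
    {
        unsigned int flags = p2m_type_is_writable(type)
                             ? IOMMUF_readable | IOMMUF_writable
                             : IOMMUF_readable;

        rc = iommu_map_page(d, gfn, mfn, flags);
        if ( rc )
            break;
    }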

>
>> 2. d->page_list seems to contain only domain RAM (not 100% sure).
>> Where can I get the other regions (MMIOs, etc.)?
>
> These necessarily are being tracked for the domain, so you need to
> take them from wherever they're stored on ARM.

Is there any reason why you don't seem to have such code on x86? AFAICT 
only RAM is currently mapped.

Regarding ARM, we know whether a domain is allowed to access a certain
range of MMIO but, similarly to the above, we don't have the MFN -> GFN
conversion for them. However, in this case we would not be able to use
an M2P, as the same MFN may be mapped in multiple domains.

Cheers,

-- 
Julien Grall


* Re: Unshared IOMMU issues
  2017-02-16 16:11           ` Julien Grall
@ 2017-02-16 16:34             ` Jan Beulich
  2017-02-16 18:09               ` Julien Grall
  0 siblings, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2017-02-16 16:34 UTC (permalink / raw)
  To: Julien Grall, Oleksandr Tyshchenko; +Cc: nd, Stefano Stabellini, Xen Devel

>>> On 16.02.17 at 17:11, <julien.grall@arm.com> wrote:
> On 16/02/17 15:52, Jan Beulich wrote:
>>>>> On 16.02.17 at 16:02, <olekstysh@gmail.com> wrote:
>>> On Thu, Feb 16, 2017 at 11:36 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>>> On 15.02.17 at 18:43, <olekstysh@gmail.com> wrote:
>>>>> 1.
>>>>> I need:
>>>>> Allow the P2M core on ARM to update the IOMMU mapping from the first "p2m_set_entry".
>>>>> I do:
>>>>> I explicitly set the need_iommu flag for *every* guest domain during
>>>>> iommu_domain_init() on ARM when the page table is not shared.
>>>>> At that moment I have no knowledge of whether any device will be
>>>>> assigned to this domain or not. I just want to receive all mapping
>>>>> updates from the P2M code. The P2M will update the IOMMU mapping only
>>>>> when need_iommu is set and the page table is not shared.
>>>>> I have doubts:
>>>>> Is it correct to just force the need_iommu flag?
>>>>
>>>> No, I don't think so. This is a waste of a measurable amount of
>>>> resources when page tables aren't shared.
>>>>
>>>>> Or maybe another flag should be introduced?
>>>>
>>>> Not sure what you think of here. Where's the problem with building
>>>> IOMMU page tables at the time the first device gets assigned, just
>>>> like x86 does?
>>> OK, I have already had a look at arch_iommu_populate_page_table() for x86.
>>> I don't see at the moment how this solution can help me.
>>> There are at least two points that prevent me from doing a similar thing:
>>> 1. To create an IOMMU mapping I need both the mfn and the gfn (+ flags).
>>> I am able to get the mfn only. How can I find the corresponding gfn?
>>
>> As the x86 one shows, via mfn_to_gmfn(). If ARM doesn't have
>> this, perhaps it needs to gain it?
> 
> Looking at the x86 implementation, mfn_to_gmfn uses a table for that,
> indexed by the MFN. This requires virtual address space, which is
> already scarce on ARM32, and also uses physical memory.
>
> I am not convinced this is the right thing to do on ARM, as the only
> user so far would be the IOMMU code.
>
> Another solution would be to go through the stage-2 page table and
> replicate all the mappings.

That's certainly an option, if you want to save the memory (and
VA space on ARM32). It only makes the x86 model of establishing
the mappings slightly more compute intensive.

>>> 2. d->page_list seems to contain only domain RAM (not 100% sure).
>>> Where can I get the other regions (MMIOs, etc.)?
>>
>> These necessarily are being tracked for the domain, so you need to
>> take them from wherever they're stored on ARM.
> 
> Is there any reason why you don't seem to have such code on x86? AFAICT 
> only RAM is currently mapped.

Well, no-one cared so far, I would guess. Even runtime mappings of
MMIO space were made to work properly only very recently (by Roger).

> Regarding ARM, we know whether a domain is allowed to access a certain
> range of MMIO but, similarly to the above, we don't have the MFN -> GFN
> conversion for them. However, in this case we would not be able to use
> an M2P, as the same MFN may be mapped in multiple domains.

Mapped by multiple domains? If one DomU and Dom0, I can see
this as possible, but not a requirement. If multiple DomU-s I have
to raise the question of security. In any event, your stage-2
table walking approach ought to cover this case, too.

Jan



* Re: Unshared IOMMU issues
  2017-02-16 16:34             ` Jan Beulich
@ 2017-02-16 18:09               ` Julien Grall
  2017-02-16 18:58                 ` Stefano Stabellini
  2017-02-17  7:43                 ` Jan Beulich
  0 siblings, 2 replies; 17+ messages in thread
From: Julien Grall @ 2017-02-16 18:09 UTC (permalink / raw)
  To: Jan Beulich, Oleksandr Tyshchenko; +Cc: nd, Stefano Stabellini, Xen Devel

Hi Jan,

On 16/02/17 16:34, Jan Beulich wrote:
>>>> On 16.02.17 at 17:11, <julien.grall@arm.com> wrote:
>> On 16/02/17 15:52, Jan Beulich wrote:
>>>>>> On 16.02.17 at 16:02, <olekstysh@gmail.com> wrote:
>>>> On Thu, Feb 16, 2017 at 11:36 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>>>> On 15.02.17 at 18:43, <olekstysh@gmail.com> wrote:
>>>>>> 1.
>>>>>> I need:
>>>>>> Allow the P2M core on ARM to update the IOMMU mapping from the first "p2m_set_entry".
>>>>>> I do:
>>>>>> I explicitly set the need_iommu flag for *every* guest domain during
>>>>>> iommu_domain_init() on ARM when the page table is not shared.
>>>>>> At that moment I have no knowledge of whether any device will be
>>>>>> assigned to this domain or not. I just want to receive all mapping
>>>>>> updates from the P2M code. The P2M will update the IOMMU mapping only
>>>>>> when need_iommu is set and the page table is not shared.
>>>>>> I have doubts:
>>>>>> Is it correct to just force the need_iommu flag?
>>>>>
>>>>> No, I don't think so. This is a waste of a measurable amount of
>>>>> resources when page tables aren't shared.
>>>>>
>>>>>> Or maybe another flag should be introduced?
>>>>>
>>>>> Not sure what you think of here. Where's the problem with building
>>>>> IOMMU page tables at the time the first device gets assigned, just
>>>>> like x86 does?
>>>> OK, I have already had a look at arch_iommu_populate_page_table() for x86.
>>>> I don't see at the moment how this solution can help me.
>>>> There are at least two points that prevent me from doing a similar thing:
>>>> 1. To create an IOMMU mapping I need both the mfn and the gfn (+ flags).
>>>> I am able to get the mfn only. How can I find the corresponding gfn?
>>>
>>> As the x86 one shows, via mfn_to_gmfn(). If ARM doesn't have
>>> this, perhaps it needs to gain it?
>>
>> Looking at the x86 implementation, mfn_to_gmfn uses a table for that,
>> indexed by the MFN. This requires virtual address space, which is
>> already scarce on ARM32, and also uses physical memory.
>>
>> I am not convinced this is the right thing to do on ARM, as the only
>> user so far would be the IOMMU code.
>>
>> Another solution would be to go through the stage-2 page table and
>> replicate all the mappings.
>
> That's certainly an option, if you want to save the memory (and
> VA space on ARM32). It only makes the x86 model of establishing
> the mappings slightly more compute intensive.

I made a quick calculation: ARM32 supports up to 40-bit PA and IPA (i.e.
guest address), which means 28 bits of MFN/GFN. The GFN would have to be
stored in 32 bits for alignment, so we would need 2^28 * 4 = 1GiB of
virtual address space and potentially physical memory.
We don't have 1GiB of VA space free on 32-bit right now.

ARM64 currently supports up to 48-bit PA and 48-bit IPA, which means
36 bits of MFN/GFN. The GFN would have to be stored in 64 bits for
alignment, so we would need 2^36 * 8 = 512GiB of virtual address space
and potentially physical memory. While virtual address space is not a
problem, the memory is a problem for embedded platforms. We want Xen to
be as lean as possible.
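For the record, the sizing above boils down to this trivial standalone
computation (4K pages assumed, i.e. 12 address bits dropped):

    #include <stdio.h>

    int main(void)
    {
        /* 40-bit PA, 4K pages -> 28-bit frame numbers, 4-byte entries */
        unsigned long long arm32_m2p = (1ULL << (40 - 12)) * 4;
        /* 48-bit PA, 4K pages -> 36-bit frame numbers, 8-byte entries */
        unsigned long long arm64_m2p = (1ULL << (48 - 12)) * 8;

        printf("ARM32 M2P: %llu GiB\n", arm32_m2p >> 30); /* 1 GiB   */
        printf("ARM64 M2P: %llu GiB\n", arm64_m2p >> 30); /* 512 GiB */
        return 0;
    }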

I thought a bit more about the advantage of creating the IOMMU page
tables later on.

For devices assigned at domain creation, we know that devices will be
assigned, so we could let Xen populate the IOMMU while allocating the
memory for the domain.

For hotplug devices, this would only happen for PCI, as integrated
devices cannot be hotplugged. As we move towards emulating a root
complex in Xen rather than the PV approach, you would need the root
complex to be instantiated when the domain is created (unless we want
to hotplug that too?). IMHO, if you assign a root complex it is likely
that you will want to assign a PCI device afterwards. So allocating
page tables at that time sounds sensible.

This would avoid walking the stage-2 page tables at runtime.

Any opinions?

>>>> 2. d->page_list seems to contain only domain RAM (not 100% sure).
>>>> Where can I get the other regions (MMIOs, etc.)?
>>>
>>> These necessarily are being tracked for the domain, so you need to
>>> take them from wherever they're stored on ARM.
>>
>> Is there any reason why you don't seem to have such code on x86? AFAICT
>> only RAM is currently mapped.
>
> Well, no-one cared so far, I would guess. Even runtime mappings of
> MMIO space were made to work properly only very recently (by Roger).
>
>> Regarding ARM, we know whether a domain is allowed to access a certain
>> range of MMIO but, similarly to the above, we don't have the MFN -> GFN
>> conversion for them. However, in this case we would not be able to use
>> an M2P, as the same MFN may be mapped in multiple domains.
>
> Mapped by multiple domains? If one DomU and Dom0, I can see
> this as possible, but not a requirement. If multiple DomU-s I have
> to raise the question of security.

The interrupt controller GICv2 supports virtualization and allows the
guest to manage interrupts as if it were running on bare metal. There
is a per-CPU interface that is mapped into every domain. Obviously, the
state is saved/restored during vCPU context switch.

Cheers,

-- 
Julien Grall


* Re: Unshared IOMMU issues
  2017-02-16 18:09               ` Julien Grall
@ 2017-02-16 18:58                 ` Stefano Stabellini
  2017-02-17  7:43                 ` Jan Beulich
  1 sibling, 0 replies; 17+ messages in thread
From: Stefano Stabellini @ 2017-02-16 18:58 UTC (permalink / raw)
  To: Julien Grall
  Cc: Oleksandr Tyshchenko, nd, Stefano Stabellini, Jan Beulich, Xen Devel

On Thu, 16 Feb 2017, Julien Grall wrote:
> Hi Jan,
> 
> On 16/02/17 16:34, Jan Beulich wrote:
> > > > > On 16.02.17 at 17:11, <julien.grall@arm.com> wrote:
> > > On 16/02/17 15:52, Jan Beulich wrote:
> > > > > > > On 16.02.17 at 16:02, <olekstysh@gmail.com> wrote:
> > > > > On Thu, Feb 16, 2017 at 11:36 AM, Jan Beulich <JBeulich@suse.com>
> > > > > wrote:
> > > > > > > > > On 15.02.17 at 18:43, <olekstysh@gmail.com> wrote:
> > > > > > > 1.
> > > > > > > I need:
> > > > > > > Allow the P2M core on ARM to update the IOMMU mapping from the
> > > > > > > first "p2m_set_entry".
> > > > > > > I do:
> > > > > > > I explicitly set the need_iommu flag for *every* guest domain
> > > > > > > during iommu_domain_init() on ARM when the page table is not
> > > > > > > shared.
> > > > > > > At that moment I have no knowledge of whether any device will be
> > > > > > > assigned to this domain or not. I just want to receive all
> > > > > > > mapping updates from the P2M code. The P2M will update the IOMMU
> > > > > > > mapping only when need_iommu is set and the page table is not
> > > > > > > shared.
> > > > > > > I have doubts:
> > > > > > > Is it correct to just force the need_iommu flag?
> > > > > > 
> > > > > > No, I don't think so. This is a waste of a measurable amount of
> > > > > > resources when page tables aren't shared.
> > > > > > 
> > > > > > > Or maybe another flag should be introduced?
> > > > > > 
> > > > > > Not sure what you think of here. Where's the problem with building
> > > > > > IOMMU page tables at the time the first device gets assigned, just
> > > > > > like x86 does?
> > > > > OK, I have already had a look at
> > > > > arch_iommu_populate_page_table() for x86.
> > > > > I don't see at the moment how this solution can help me.
> > > > > There are at least two points that prevent me from doing a similar
> > > > > thing:
> > > > > 1. To create an IOMMU mapping I need both the mfn and the gfn
> > > > > (+ flags).
> > > > > I am able to get the mfn only. How can I find the corresponding gfn?
> > > > 
> > > > As the x86 one shows, via mfn_to_gmfn(). If ARM doesn't have
> > > > this, perhaps it needs to gain it?
> > > 
> > > Looking at the x86 implementation, mfn_to_gmfn uses a table for that,
> > > indexed by the MFN. This requires virtual address space, which is
> > > already scarce on ARM32, and also uses physical memory.
> > > 
> > > I am not convinced this is the right thing to do on ARM, as the only
> > > user so far would be the IOMMU code.
> > > 
> > > Another solution would be to go through the stage-2 page table and
> > > replicate all the mappings.
> > 
> > That's certainly an option, if you want to save the memory (and
> > VA space on ARM32). It only makes the x86 model of establishing
> > the mappings slightly more compute intensive.
> 
> I made a quick calculation: ARM32 supports up to 40-bit PA and IPA (i.e.
> guest address), which means 28 bits of MFN/GFN. The GFN would have to be
> stored in 32 bits for alignment, so we would need 2^28 * 4 = 1GiB of
> virtual address space and potentially physical memory.
> We don't have 1GiB of VA space free on 32-bit right now.
> 
> ARM64 currently supports up to 48-bit PA and 48-bit IPA, which means
> 36 bits of MFN/GFN. The GFN would have to be stored in 64 bits for
> alignment, so we would need 2^36 * 8 = 512GiB of virtual address space
> and potentially physical memory. While virtual address space is not a
> problem, the memory is a problem for embedded platforms. We want Xen to
> be as lean as possible.

I think you are right that it's best not to introduce mfn-to-gfn
tracking on ARM.


> I thought a bit more about the advantage of creating the IOMMU page
> tables later on.
> 
> For devices assigned at domain creation, we know that devices will be
> assigned, so we could let Xen populate the IOMMU while allocating the
> memory for the domain.
> 
> For hotplug devices, this would only happen for PCI, as integrated
> devices cannot be hotplugged. As we move towards emulating a root
> complex in Xen rather than the PV approach, you would need the root
> complex to be instantiated when the domain is created (unless we want
> to hotplug that too?). IMHO, if you assign a root complex it is likely
> that you will want to assign a PCI device afterwards. So allocating
> page tables at that time sounds sensible.
> 
> This would avoid walking the stage-2 page tables at runtime.
> 
> Any opinions?

Obviously, static device assignment is not a problem. The issue is only
hotplug, which today we don't support.

Like you say, hotplug by definition requires a discoverable bus of some
sort, for example PCI. When we introduce it in guests, we'll also
introduce IOMMU pagetables. The only downside of this idea is that it
will require users to write something in the VM config file, for example
pci=[''], just to reserve the right to do PCI hotplug at some point in
the future. This is not the case today on x86. It's not great, but I
cannot see a way around it, given that we probably don't want to
introduce a root complex in all ARM guests by default anyway.
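I.e. something like this in the guest config file (just a sketch; the
exact syntax for an "empty" pci option is a toolstack detail to be
decided):

    name = "guest0"
    memory = 512
    vcpus = 1
    # No device assigned at boot, but reserve the right to hotplug PCI
    # devices later; this is what would trigger the creation of the
    # IOMMU page tables (and the emulated root complex) at domain build.
    pci = [ ]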


* Re: Unshared IOMMU issues
  2017-02-16 18:09               ` Julien Grall
  2017-02-16 18:58                 ` Stefano Stabellini
@ 2017-02-17  7:43                 ` Jan Beulich
  2017-02-17 15:25                   ` Julien Grall
  1 sibling, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2017-02-17  7:43 UTC (permalink / raw)
  To: Julien Grall; +Cc: Oleksandr Tyshchenko, nd, Stefano Stabellini, Xen Devel

>>> On 16.02.17 at 19:09, <julien.grall@arm.com> wrote:
> On 16/02/17 16:34, Jan Beulich wrote:
>>>>> On 16.02.17 at 17:11, <julien.grall@arm.com> wrote:
>>> On 16/02/17 15:52, Jan Beulich wrote:
>>>>>>> On 16.02.17 at 16:02, <olekstysh@gmail.com> wrote:
>>>>> On Thu, Feb 16, 2017 at 11:36 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>>>>> On 15.02.17 at 18:43, <olekstysh@gmail.com> wrote:
>>>>>>> 1.
>>>>>>> I need:
>>>>>>> Allow the P2M core on ARM to update the IOMMU mapping from the first "p2m_set_entry".
>>>>>>> I do:
>>>>>>> I explicitly set the need_iommu flag for *every* guest domain during
>>>>>>> iommu_domain_init() on ARM when the page table is not shared.
>>>>>>> At that moment I have no knowledge of whether any device will be
>>>>>>> assigned to this domain or not. I just want to receive all mapping
>>>>>>> updates from the P2M code. The P2M will update the IOMMU mapping only
>>>>>>> when need_iommu is set and the page table is not shared.
>>>>>>> I have doubts:
>>>>>>> Is it correct to just force the need_iommu flag?
>>>>>>
>>>>>> No, I don't think so. This is a waste of a measurable amount of
>>>>>> resources when page tables aren't shared.
>>>>>>
>>>>>>> Or maybe another flag should be introduced?
>>>>>>
>>>>>> Not sure what you think of here. Where's the problem with building
>>>>>> IOMMU page tables at the time the first device gets assigned, just
>>>>>> like x86 does?
>>>>> OK, I have already had a look at arch_iommu_populate_page_table() for x86.
>>>>> I don't see at the moment how this solution can help me.
>>>>> There are at least two points that prevent me from doing a similar thing:
>>>>> 1. To create an IOMMU mapping I need both the mfn and the gfn (+ flags).
>>>>> I am able to get the mfn only. How can I find the corresponding gfn?
>>>>
>>>> As the x86 one shows, via mfn_to_gmfn(). If ARM doesn't have
>>>> this, perhaps it needs to gain it?
>>>
>>> Looking at the x86 implementation, mfn_to_gmfn uses a table for that,
>>> indexed by the MFN. This requires virtual address space, which is
>>> already scarce on ARM32, and also uses physical memory.
>>>
>>> I am not convinced this is the right thing to do on ARM, as the only
>>> user so far would be the IOMMU code.
>>>
>>> Another solution would be to go through the stage-2 page table and
>>> replicate all the mappings.
>>
>> That's certainly an option, if you want to save the memory (and
>> VA space on ARM32). It only makes the x86 model of establishing
>> the mappings slightly more compute intensive.
> 
> I made a quick calculation: ARM32 supports up to 40-bit PA and IPA (i.e.
> guest address), which means 28 bits of MFN/GFN. The GFN would have to be
> stored in 32 bits for alignment, so we would need 2^28 * 4 = 1GiB of
> virtual address space and potentially physical memory.
> We don't have 1GiB of VA space free on 32-bit right now.

Right, you'd have to pay a performance price here. Either, as you
say, by looking the translations up from the stage-2 tables, or by
using some on demand mapping scheme for the table here.

> ARM64 currently supports up to 48-bit PA and 48-bit IPA, which means
> 36 bits of MFN/GFN. The GFN would have to be stored in 64 bits for
> alignment, so we would need 2^36 * 8 = 512GiB of virtual address space
> and potentially physical memory. While virtual address space is not a
> problem, the memory is a problem for embedded platforms. We want Xen to
> be as lean as possible.

Which then leaves the stage-2 table lookup as the only option. Of
course one might consider a hybrid model: memory-constrained systems
could go the stage-2 table lookup route, while on larger systems the
cheap direct table lookup could be used.

> I thought a bit more about the advantage of creating the IOMMU page
> tables later on.
> 
> For devices assigned at domain creation, we know that devices will be
> assigned, so we could let Xen populate the IOMMU while allocating the
> memory for the domain.
> 
> For hotplug devices, this would only happen for PCI, as integrated
> devices cannot be hotplugged. As we move towards emulating a root
> complex in Xen rather than the PV approach, you would need the root
> complex to be instantiated when the domain is created (unless we want
> to hotplug that too?). IMHO, if you assign a root complex it is likely
> that you will want to assign a PCI device afterwards. So allocating
> page tables at that time sounds sensible.
> 
> This would avoid walking the stage-2 page tables at runtime.

Well, in the end it's your call, but I don't think this is an acceptable
model in the general case. Quite often - see the Viridian support in
x86 Xen for a very good example - distros (XenServer in this case)
enable functionality even if a guest (Linux in the case here) would
never really want to make use of it. Also you need to keep in mind
that for an admin it is better to not have to take care of all
eventualities before first starting a (perhaps long running) guest.
Granted we have a number of other limitations of that same kind,
but if such can be avoided, I'd always prefer to do so.

>>>>> 2. d->page_list seems to contain only domain RAM (not 100% sure).
>>>>> Where can I get the other regions (MMIOs, etc.)?
>>>>
>>>> These necessarily are being tracked for the domain, so you need to
>>>> take them from wherever they're stored on ARM.
>>>
>>> Is there any reason why you don't seem to have such code on x86? AFAICT
>>> only RAM is currently mapped.
>>
>>> Well, no-one cared so far, I would guess. Even runtime mappings of
>>> MMIO space were made to work properly only very recently (by Roger).
>>
>>> Regarding ARM, we know whether a domain is allowed to access a certain
>>> range of MMIO but, similarly to the above, we don't have the MFN -> GFN
>>> conversion for them. However, in this case we would not be able to use
>>> an M2P, as the same MFN may be mapped in multiple domains.
>>
>> Mapped by multiple domains? If one DomU and Dom0, I can see
>> this as possible, but not a requirement. If multiple DomU-s I have
>> to raise the question of security.
> 
> The interrupt controller GICv2 supports virtualization and allows the
> guest to manage interrupts as if it were running on bare metal. There
> is a per-CPU interface that is mapped into every domain. Obviously, the
> state is saved/restored during vCPU context switch.

Now that looks like a very special case, which the code doing the
mapping could (and should) be aware of. Quite likely this area
even gets mapped at a predetermined GFN (range) for guests
(in which case no lookup is necessary at all)?

Jan


* Re: Unshared IOMMU issues
  2017-02-17  7:43                 ` Jan Beulich
@ 2017-02-17 15:25                   ` Julien Grall
  2017-02-17 20:20                     ` Oleksandr Tyshchenko
  0 siblings, 1 reply; 17+ messages in thread
From: Julien Grall @ 2017-02-17 15:25 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Oleksandr Tyshchenko, nd, Stefano Stabellini, Xen Devel

Hi Jan,

On 17/02/17 07:43, Jan Beulich wrote:
> Well, in the end it's your call, but I don't think this is an acceptable
> model in the general case. Quite often - see the Viridian support in
> x86 Xen for a very good example - distros (XenServer in this case)
> enable functionality even if a guest (Linux in the case here) would
> never really want to make use of it. Also you need to keep in mind
> that for an admin it is better to not have to take care of all
> eventualities before first starting a (perhaps long running) guest.
> Granted we have a number of other limitations of that same kind,
> but if such can be avoided, I'd always prefer to do so.

To be fair, on the server side the SBSA [1] mandates the IOMMU to be
compatible with the ARM SMMU spec. This allows us to share page tables
with the SMMU by default. Today the driver does not support unsharing,
and I don't yet know of any use case requiring to unshare them.

On the embedded side, I would be surprised if they used PCI hotplug. So
populating the IOMMU page table from domain creation is not a big
concern.

As this would be an interface between Xen and the toolstack, we could
revisit it later if we have a platform where page tables are not shared
and hotplug is being used.

>
>>>>>> 2. d->page_list seems to contain only domain RAM (not 100% sure).
>>>>>> Where can I get the other regions (MMIOs, etc.)?
>>>>>
>>>>> These necessarily are being tracked for the domain, so you need to
>>>>> take them from wherever they're stored on ARM.
>>>>
>>>> Is there any reason why you don't seem to have such code on x86? AFAICT
>>>> only RAM is currently mapped.
>>>
>>> Well, no-one cared so far, I would guess. Even runtime mappings of
>>> MMIO space were made to work properly only very recently (by Roger).
>>>
>>>> Regarding ARM, we know whether a domain is allowed to access a certain
>>>> range of MMIO but, similarly to the above, we don't have the MFN -> GFN
>>>> conversion for them. However, in this case we would not be able to use
>>>> an M2P, as the same MFN may be mapped in multiple domains.
>>>
>>> Mapped by multiple domains? If one DomU and Dom0, I can see
>>> this as possible, but not a requirement. If multiple DomU-s I have
>>> to raise the question of security.
>>
>> The interrupt controller GICv2 supports virtualization and allows the
>> guest to manage interrupts as if it were running on bare metal. There
>> is a per-CPU interface that is mapped into every domain. Obviously, the
>> state is saved/restored during vCPU context switch.
>
> Now that looks like a very special case, which the code doing the
> mapping could (and should) be aware of. Quite likely this area
> even gets mapped at a predetermined GFN (range) for guests
> (in which case no lookup is necessary at all)?

Yes, we can in this case.

Cheers,

[1] 
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.den0029/index.html

-- 
Julien Grall


* Re: Unshared IOMMU issues
  2017-02-17 15:25                   ` Julien Grall
@ 2017-02-17 20:20                     ` Oleksandr Tyshchenko
  2017-02-20  8:31                       ` Julien Grall
  0 siblings, 1 reply; 17+ messages in thread
From: Oleksandr Tyshchenko @ 2017-02-17 20:20 UTC (permalink / raw)
  To: Julien Grall; +Cc: nd, Stefano Stabellini, Jan Beulich, Xen Devel

Hi, all.

So, as I understand it, we have two possible options for when the IOMMU
page table can be populated:
1. When the first device is being assigned: retrieve all mappings
from the stage-2 table.
2. When the domain is being created.

I would prefer the second variant.

Retrieving all mappings from the P2M might take *some* time. This time
will depend on how many mappings the stage-2 table has and on how these
mappings have to be applied to the IOMMU table. Theoretically, the
"unshared IOMMU" might support 4K pages only and might require a cache
invalidation after installing each entry. For example, a guest with
2GiB of RAM would already mean 2^19 4K entries to install, each
potentially followed by a cache maintenance operation.

Thank you.

On Fri, Feb 17, 2017 at 5:25 PM, Julien Grall <julien.grall@arm.com> wrote:
> Hi Jan,
>
> On 17/02/17 07:43, Jan Beulich wrote:
>>
>> Well, in the end it's your call, but I don't think this is an acceptable
>> model in the general case. Quite often - see the Viridian support in
>> x86 Xen for a very good example - distros (XenServer in this case)
>> enable functionality even if a guest (Linux in the case here) would
>> never really want to make use of it. Also you need to keep in mind
>> that for an admin it is better to not have to take care of all
>> eventualities before first starting a (perhaps long running) guest.
>> Granted we have a number of other limitations of that same kind,
>> but if such can be avoided, I'd always prefer to do so.
>
>
> To be fair, on the server side the SBSA [1] mandates the IOMMU to be
> compatible with the ARM SMMU spec. This allows us to share page tables
> with the SMMU by default. Today the driver does not support unsharing,
> and I don't yet know of any use case requiring to unshare them.
>
> On the embedded side, I would be surprised if they used PCI hotplug. So
> populating the IOMMU page table from domain creation is not a big
> concern.
>
> As this would be an interface between Xen and the toolstack, we could
> revisit it later if we have a platform where page tables are not shared
> and hotplug is being used.
>
>>
>>>>>>> 2. d->page_list seems to contain only domain RAM (not 100% sure).
>>>>>>> Where can I get the other regions (MMIOs, etc.)?
>>>>>>
>>>>>>
>>>>>> These necessarily are being tracked for the domain, so you need to
>>>>>> take them from wherever they're stored on ARM.
>>>>>
>>>>>
>>>>> Is there any reason why you don't seem to have such code on x86? AFAICT
>>>>> only RAM is currently mapped.
>>>>
>>>>
>>>> Well, no-one cared so far, I would guess. Even runtime mappings of
>>>> MMIO space were made to work properly only very recently (by Roger).
>>>>
>>>>> Regarding ARM, we know whether a domain is allowed to access a certain
>>>>> range of MMIO but, similarly to the above, we don't have the MFN -> GFN
>>>>> conversion for them. However, in this case we would not be able to use
>>>>> an M2P, as the same MFN may be mapped in multiple domains.
>>>>
>>>>
>>>> Mapped by multiple domains? If one DomU and Dom0, I can see
>>>> this as possible, but not a requirement. If multiple DomU-s I have
>>>> to raise the question of security.
>>>
>>>
>>> The interrupt controller GICv2 supports virtualization and allows the
>>> guest to manage interrupts as if it were running on bare metal. There
>>> is a per-CPU interface that is mapped into every domain. Obviously, the
>>> state is saved/restored during vCPU context switch.
>>
>>
>> Now that looks like a very special case, which the code doing the
>> mapping could (and should) be aware of. Quite likely this area
>> even gets mapped at a predetermined GFN (range) for guests
>> (in which case no lookup is necessary at all)?
>
>
> Yes, we can in this case.
>
> Cheers,
>
> [1]
> http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.den0029/index.html
>
> --
> Julien Grall



-- 
Regards,

Oleksandr Tyshchenko


* Re: Unshared IOMMU issues
  2017-02-17 20:20                     ` Oleksandr Tyshchenko
@ 2017-02-20  8:31                       ` Julien Grall
  2017-02-21 10:39                         ` Oleksandr Tyshchenko
  0 siblings, 1 reply; 17+ messages in thread
From: Julien Grall @ 2017-02-20  8:31 UTC (permalink / raw)
  To: Oleksandr Tyshchenko; +Cc: nd, Stefano Stabellini, Jan Beulich, Xen Devel

Hello Oleksandr,

On 02/17/2017 08:20 PM, Oleksandr Tyshchenko wrote:
> Hi, all.
>
> So, as I understand it, we have two possible options for when the IOMMU
> page table can be populated:
> 1. When the first device is being assigned: retrieve all mappings
> from the stage-2 table.
> 2. When the domain is being created.
>
> I would prefer the second variant.

I am happy with the second variant, as long as the IOMMU is not enabled
by default when the guest has no device assigned.

Cheers,

-- 
Julien Grall


* Re: Unshared IOMMU issues
  2017-02-20  8:31                       ` Julien Grall
@ 2017-02-21 10:39                         ` Oleksandr Tyshchenko
  2017-02-22 11:39                           ` Julien Grall
  0 siblings, 1 reply; 17+ messages in thread
From: Oleksandr Tyshchenko @ 2017-02-21 10:39 UTC (permalink / raw)
  To: Julien Grall; +Cc: nd, Stefano Stabellini, Jan Beulich, Xen Devel

Hi, Julien.

On Mon, Feb 20, 2017 at 10:31 AM, Julien Grall <julien.grall@arm.com> wrote:
> Hello Oleksandr,
>
> On 02/17/2017 08:20 PM, Oleksandr Tyshchenko wrote:
>>
>> Hi, all.
>>
>> So, as I understand it, we have two possible options for when the IOMMU
>> page table can be populated:
>> 1. When the first device is being assigned: retrieve all mappings
>> from the stage-2 table.
>> 2. When the domain is being created.
>>
>> I would prefer the second variant.
>
>
> I am happy with the second variant, as long as the IOMMU is not enabled
> by default when the guest has no device assigned.
OK.

Just to clarify: we don't need to assign devices when creating the
domain (at iommu_domain_init() time).
We just need to have some knowledge about device assignment in general
(whether the guest will have assigned devices or not).
And only in the case when the guest is going to have assigned devices
will we populate the IOMMU page table (call iommu_construct()).
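In other words, something like this sketch (where "will_have_devices"
is a made-up name for whatever hint the toolstack ends up passing):

    /* Sketch: at domain creation, build and populate the IOMMU page
     * table only if devices will (or may) be assigned later. */
    if ( iommu_enabled && !iommu_use_hap_pt(d) && will_have_devices )
    {
        rc = iommu_construct(d);
        if ( rc )
            return rc;
    }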
Right?

>
> Cheers,
>
> --
> Julien Grall



-- 
Regards,

Oleksandr Tyshchenko


* Re: Unshared IOMMU issues
  2017-02-21 10:39                         ` Oleksandr Tyshchenko
@ 2017-02-22 11:39                           ` Julien Grall
  2017-02-22 11:59                             ` Oleksandr Tyshchenko
  0 siblings, 1 reply; 17+ messages in thread
From: Julien Grall @ 2017-02-22 11:39 UTC (permalink / raw)
  To: Oleksandr Tyshchenko; +Cc: nd, Stefano Stabellini, Jan Beulich, Xen Devel

On 21/02/17 10:39, Oleksandr Tyshchenko wrote:
> Hi, Julien.

Hi Oleksandr,

> On Mon, Feb 20, 2017 at 10:31 AM, Julien Grall <julien.grall@arm.com> wrote:
>> Hello Oleksandr,
>>
>> On 02/17/2017 08:20 PM, Oleksandr Tyshchenko wrote:
>>>
>>> Hi, all.
>>>
>>> So, as I understand it, we have two possible options for when the IOMMU
>>> page table can be populated:
>>> 1. When the first device is being assigned: retrieve all mappings
>>> from the stage-2 table.
>>> 2. When the domain is being created.
>>>
>>> I would prefer the second variant.
>>
>>
>> I am happy with the second variant, as long as the IOMMU is not enabled
>> by default when the guest has no device assigned.
> OK.
>
> Just to clarify: we don't need to assign devices when creating the
> domain (at iommu_domain_init() time).
> We just need to have some knowledge about device assignment in general
> (whether the guest will have assigned devices or not).
> And only in the case when the guest is going to have assigned devices
> will we populate the IOMMU page table (call iommu_construct()).
> Right?

That's correct.

Cheers,

-- 
Julien Grall


* Re: Unshared IOMMU issues
  2017-02-22 11:39                           ` Julien Grall
@ 2017-02-22 11:59                             ` Oleksandr Tyshchenko
  0 siblings, 0 replies; 17+ messages in thread
From: Oleksandr Tyshchenko @ 2017-02-22 11:59 UTC (permalink / raw)
  To: Julien Grall; +Cc: nd, Stefano Stabellini, Jan Beulich, Xen Devel

On Wed, Feb 22, 2017 at 1:39 PM, Julien Grall <julien.grall@arm.com> wrote:
> On 21/02/17 10:39, Oleksandr Tyshchenko wrote:
>>
>> Hi, Julien.

Hi, Julien, all.

>
>
> Hi Oleksandr,
>
>> On Mon, Feb 20, 2017 at 10:31 AM, Julien Grall <julien.grall@arm.com>
>> wrote:
>>>
>>> Hello Oleksandr,
>>>
>>> On 02/17/2017 08:20 PM, Oleksandr Tyshchenko wrote:
>>>>
>>>>
>>>> Hi, all.
>>>>
>>>> So, as I understand it, we have two possible options for when the IOMMU
>>>> page table can be populated:
>>>> 1. When the first device is being assigned: retrieve all mappings
>>>> from the stage-2 table.
>>>> 2. When the domain is being created.
>>>>
>>>> I would prefer the second variant.
>>>
>>>
>>>
>>> I am happy with the second variant, as long as the IOMMU is not enabled
>>> by default when the guest has no device assigned.
>>
>> OK.
>>
>> Just to clarify: we don't need to assign devices when creating the
>> domain (at iommu_domain_init() time).
>> We just need to have some knowledge about device assignment in general
>> (whether the guest will have assigned devices or not).
>> And only in the case when the guest is going to have assigned devices
>> will we populate the IOMMU page table (call iommu_construct()).
>> Right?
>
>
> That's correct.

Thank you.

>
> Cheers,
>
> --
> Julien Grall



-- 
Regards,

Oleksandr Tyshchenko

