* your change "iommu: make map and unmap take a page count, similar to flush"
@ 2021-07-21 15:58 Jan Beulich
From: Jan Beulich @ 2021-07-21 15:58 UTC (permalink / raw)
  To: Paul Durrant; +Cc: xen-devel

Paul,

the description of this says

"At the moment iommu_map() and iommu_unmap() take a page order rather than a
 count, whereas iommu_iotlb_flush() takes a page count rather than an order.
 This patch makes them consistent with each other, opting for a page count since
 CPU page orders are not necessarily the same as those of an IOMMU."

I don't understand the latter sentence at all, now that I read it again.
What may differ is the base page size, but that affects counts of pages
and page order all the same.
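
(Illustration, not Xen code: for a fixed base page size the two representations are interchangeable, since a count is simply 1 << order.)

```python
# Hypothetical helpers showing the order/count equivalence Jan describes:
# for any fixed base page size, converting between the two is lossless
# for power-of-two counts.
def order_to_count(order):
    return 1 << order          # order 0 -> 1 page, order 9 -> 512 pages

def count_to_order(count):
    # only power-of-two counts correspond to a single order
    assert count & (count - 1) == 0
    return count.bit_length() - 1
```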

I'm intending to make an attempt to cut through the page order (or
count) to the actual vendor functions, in order to then be able to
establish large page mappings where possible. In all cases (perhaps
most noticeable on Arm) handing them a page order would seem easier, so
I was considering having iommu_{,un}map() do that step of abstraction
(or transformation). But since you did explicitly convert from order to
count, I was wondering whether my following this plan would cause
problems with any of your further intentions back then.

If we really wanted to cater for the base page size varying between CPU
and IOMMU, besides the IOMMU vendor code needing to announce its value, I
guess we'd have to do quite a bit more abstracting work, as it would
matter to outer layers in particular if the IOMMU base page size was
larger than the CPU's. Supporting just smaller IOMMU base page sizes,
otoh, would seem entirely feasible to deal with inside the rework of
iommu_{,un}map() as mentioned above.
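
A minimal sketch (hypothetical code, not the actual Xen interface) of the abstraction step described above: iommu_{,un}map() would carve a run of pages into the largest naturally aligned power-of-two chunks, handing each chunk's order to the vendor function.

```python
def largest_order(dfn, nr):
    """Largest order such that the chunk starting at dfn is naturally
    aligned and does not exceed the nr pages remaining."""
    order = 0
    while (1 << (order + 1)) <= nr and dfn % (1 << (order + 1)) == 0:
        order += 1
    return order

def map_range(dfn, nr):
    """Split [dfn, dfn + nr) into (dfn, order) chunks suitable for a
    vendor map function that takes a page order."""
    chunks = []
    while nr:
        order = largest_order(dfn, nr)
        chunks.append((dfn, order))
        dfn += 1 << order
        nr -= 1 << order
    return chunks
```

E.g. a 515-page run starting at a suitably aligned address splits into one order-9 chunk, one order-1 chunk, and one order-0 chunk.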

Jan




* Re: your change "iommu: make map and unmap take a page count, similar to flush"
@ 2021-07-21 16:53 ` Paul Durrant
From: Paul Durrant @ 2021-07-21 16:53 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel

On 21/07/2021 16:58, Jan Beulich wrote:
> Paul,
> 
> the description of this says
> 
> "At the moment iommu_map() and iommu_unmap() take a page order rather than a
>   count, whereas iommu_iotlb_flush() takes a page count rather than an order.
>   This patch makes them consistent with each other, opting for a page count since
>   CPU page orders are not necessarily the same as those of an IOMMU."
> 
> I don't understand the latter sentence at all, now that I read it again.
> What may differ is the base page size, but that affects counts of pages
> and page order all the same.
> 

Hi Jan,

What it's supposed to mean is that a CPU may e.g. have page orders 0
(4k), 9 (2M), etc., but the IOMMU may not use the same orders. And by
'page count' it means a count of (CPU) order-0 pages (which I assume all
IOMMUs will support).

> I'm intending to make an attempt to cut through the page order (or
> count) to the actual vendor functions, in order to then be able to
> establish large page mappings where possible. In all cases (perhaps
> most noticeable on Arm) handing them a page order would seem easier, so
> I was considering having iommu_{,un}map() do that step of abstraction
> (or transformation). But since you did explicitly convert from order to
> count, I was wondering whether my following this plan would cause
> problems with any of your further intentions back then.
> 
> If we really wanted to cater for the base page size varying between CPU
> and IOMMU, besides the IOMMU vendor code needing to announce its value, I
> guess we'd have to do quite a bit more abstracting work, as it would
> matter to outer layers in particular if the IOMMU base page size was
> larger than the CPU's.

Yes, if the order 0 page size was different between IOMMU and MMU then 
that would clearly be more tricky to deal with.

> Supporting just smaller IOMMU base page sizes,
> otoh, would seem entirely feasible to deal with inside the rework of
> iommu_{,un}map() as mentioned above.
> 

Yes. The aim of the patch was really only to make the calls consistent. 
Changing all of them to use an order rather than a count would be ok too 
I think.

   Paul

> Jan
> 




* Re: your change "iommu: make map and unmap take a page count, similar to flush"
@ 2021-07-23  6:24   ` Jan Beulich
From: Jan Beulich @ 2021-07-23  6:24 UTC (permalink / raw)
  To: paul; +Cc: xen-devel

On 21.07.2021 18:53, Paul Durrant wrote:
> On 21/07/2021 16:58, Jan Beulich wrote:
>> Paul,
>>
>> the description of this says
>>
>> "At the moment iommu_map() and iommu_unmap() take a page order rather than a
>>   count, whereas iommu_iotlb_flush() takes a page count rather than an order.
>>   This patch makes them consistent with each other, opting for a page count since
>>   CPU page orders are not necessarily the same as those of an IOMMU."
>>
>> I don't understand the latter sentence at all, now that I read it again.
>> What may differ is the base page size, but that affects counts of pages
>> and page order all the same.
> 
> What it's supposed to mean is that a CPU may e.g. have page orders 0
> (4k), 9 (2M), etc., but the IOMMU may not use the same orders. And by
> 'page count' it means a count of (CPU) order-0 pages (which I assume all
> IOMMUs will support).

Oh, that's still somewhat odd. Which particular (higher) orders an IOMMU
variant supports should be of no interest to what the generic layer
passes down. For example, in reality the AMD IOMMUs support all page
orders, by allowing a "non-default page size" through a leaf-level setting
of 7. I'd therefore expect the generic layer to pass down arbitrary
order values, with the vendor code needing to split the request if
necessary.

The only thing that the generic layer needs to know is the base page
size, because it can't request mappings that are finer grained than
that. But perhaps for the immediate purpose we can continue to assume
a common base page size of 4k.
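
A sketch (hypothetical, not actual vendor code) of the splitting described above: an arbitrary order passed down from the generic layer gets broken into requests of the largest order the hardware actually supports.

```python
def split_request(order, supported=(0, 9, 18)):
    """Express one order-N request as equally sized requests of the
    largest supported order not exceeding N. The default set is the
    usual 4k/2M/1G orders; hardware supporting all orders (as the AMD
    IOMMUs do) would pass the request through unsplit."""
    best = max(o for o in supported if o <= order)
    return [best] * (1 << (order - best))
```

E.g. an order-10 request against 4k/2M/1G hardware becomes two order-9 requests.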

>> I'm intending to make an attempt to cut through the page order (or
>> count) to the actual vendor functions, in order to then be able to
>> establish large page mappings where possible. In all cases (perhaps
>> most noticeable on Arm) handing them a page order would seem easier, so
>> I was considering having iommu_{,un}map() do that step of abstraction
>> (or transformation). But since you did explicitly convert from order to
>> count, I was wondering whether my following this plan would cause
>> problems with any of your further intentions back then.
>>
>> If we really wanted to cater for the base page size varying between CPU
>> and IOMMU, besides the IOMMU vendor code needing to announce its value, I
>> guess we'd have to do quite a bit more abstracting work, as it would
>> matter to outer layers in particular if the IOMMU base page size was
>> larger than the CPU's.
> 
> Yes, if the order 0 page size was different between IOMMU and MMU then 
> that would clearly be more tricky to deal with.
> 
>> Supporting just smaller IOMMU base page sizes,
>> otoh, would seem entirely feasible to deal with inside the rework of
>> iommu_{,un}map() as mentioned above.
>>
> 
> Yes. The aim of the patch was really only to make the calls consistent. 
> Changing all of them to use an order rather than a count would be ok too 
> I think.

I'll go check whether all the flush operations would convert cleanly,
and if so I may indeed move things back to an order, and beyond.

Jan



