* [virtio-dev] [RFC PATCH] virtio-iommu: Add PAGE_SIZE_MASK property
@ 2020-03-23 13:38 Jean-Philippe Brucker
  2020-03-23 14:22 ` [virtio-dev] " Auger Eric
  0 siblings, 1 reply; 8+ messages in thread
From: Jean-Philippe Brucker @ 2020-03-23 13:38 UTC (permalink / raw)
  To: virtio-dev; +Cc: bbhushan2, eric.auger, Jean-Philippe Brucker

Add a PROBE property to declare the mapping granularity per endpoint.
The virtio-iommu device already declares a granule in its config space,
but when endpoints are behind different physical IOMMUs, they may have
different mapping granules. This new property allows the global
page_size_mask to be overridden for each endpoint.

In the future it may be useful to describe more than one page_size_mask
for each endpoint, and allow them to negotiate it during ATTACH. For
example two masks could allow the driver to choose between 4k and 64k
granule, along with their respective block mapping sizes. This could be
added by replacing \field{reserved} with an array length, for example.
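
For illustration only, a hypothetical multi-mask variant could look like the
sketch below; none of this is part of this patch and the field names are
made up:

  struct virtio_iommu_probe_page_size_masks {
    struct virtio_iommu_probe_property head;
    u8    nr_masks;         /* hypothetical: replaces part of \field{reserved} */
    u8    reserved[3];
    le64  page_size_mask[]; /* one entry per selectable granule */
  };

Each entry would carry one granule bit plus its block mapping sizes, and the
driver would pick one of them at ATTACH time.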

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
I had this change planned for a page-table sharing extension, but it
seems to be useful for the VFIO support in QEMU as well [1].

[1] https://lore.kernel.org/qemu-devel/3da60c1b-6897-7ab1-3a67-bec44fa00a54@redhat.com/
---
---
 virtio-iommu.tex | 33 ++++++++++++++++++++++++++++++++-
 1 file changed, 32 insertions(+), 1 deletion(-)

diff --git a/virtio-iommu.tex b/virtio-iommu.tex
index 08b358a..d7be13d 100644
--- a/virtio-iommu.tex
+++ b/virtio-iommu.tex
@@ -664,7 +664,8 @@ \subsubsection{PROBE request}\label{sec:Device Types / IOMMU Device / Device ope
 \subsubsection{PROBE properties}\label{sec:Device Types / IOMMU Device / Device operations / PROBE properties}
 
 \begin{lstlisting}
-#define VIRTIO_IOMMU_PROBE_T_RESV_MEM   1
+#define VIRTIO_IOMMU_PROBE_T_RESV_MEM         1
+#define VIRTIO_IOMMU_PROBE_T_PAGE_SIZE_MASK   2
 \end{lstlisting}
 
 \paragraph{Property RESV_MEM}\label{sec:Device Types / IOMMU Device / Device operations / PROBE properties / RESVMEM}
@@ -727,6 +728,36 @@ \subsubsection{PROBE properties}\label{sec:Device Types / IOMMU Device / Device
 The device SHOULD NOT allow accesses from the endpoint to RESV_MEM regions
 to affect any other component than the endpoint and the driver.
 
+\paragraph{Property PAGE_SIZE_MASK}\label{sec:Device Types / IOMMU Device / Device operations / PROBE properties / PAGESIZEMASK}
+
+The PAGE_SIZE_MASK property overrides the global \field{page_size_mask}
+configuration for an endpoint.
+
+\begin{lstlisting}
+struct virtio_iommu_probe_page_size_mask {
+  struct virtio_iommu_probe_property head;
+  u8    reserved[4];
+  le64  page_size_mask;
+};
+\end{lstlisting}
+
+The \field{page_size_mask} field behaves in the same way as the global
+\field{page_size_mask} field. The least significant bit describes the
+mapping granularity, and additional bits are hints (see \ref{sec:Device
+Types / IOMMU Device / Device operations})
+
+\drivernormative{\subparagraph}{Property PAGE_SIZE_MASK}{Device Types / IOMMU Device / Device operations / PROBE properties / PAGESIZEMASK}
+
+The driver MUST ignore \field{reserved}.
+
+\devicenormative{\subparagraph}{Property PAGE_SIZE_MASK}{Device Types / IOMMU Device / Device operations / PROBE properties / PAGESIZEMASK}
+
+The device SHOULD set \field{reserved} to zero.
+
+The device MUST set at least one bit in \field{page_size_mask}, describing
+the page granularity. The device MAY set more than one bit in
+\field{page_size_mask}.
+
 \subsubsection{Fault reporting}\label{sev:Device Types / IOMMU Device / Device operations / Fault reporting}
 
 The device can report translation faults and other significant
-- 
2.25.1



* [virtio-dev] Re: [RFC PATCH] virtio-iommu: Add PAGE_SIZE_MASK property
  2020-03-23 13:38 [virtio-dev] [RFC PATCH] virtio-iommu: Add PAGE_SIZE_MASK property Jean-Philippe Brucker
@ 2020-03-23 14:22 ` Auger Eric
  2020-03-26 10:49   ` Jean-Philippe Brucker
  0 siblings, 1 reply; 8+ messages in thread
From: Auger Eric @ 2020-03-23 14:22 UTC (permalink / raw)
  To: Jean-Philippe Brucker, virtio-dev; +Cc: bbhushan2

Hi Jean,

On 3/23/20 2:38 PM, Jean-Philippe Brucker wrote:
> Add a PROBE property to declare the mapping granularity per endpoint.
> The virtio-iommu device already declares a granule in its config space,
> but when endpoints are behind different physical IOMMUs, they may have
> different mapping granules. This new property allows to override the
> global page_size_mask for each endpoint.
> 
> In the future it may be useful to describe more than one page_size_mask
> for each endpoint, and allow them to negotiate it during ATTACH. For
> example two masks could allow the driver to choose between 4k and 64k
> granule, along with their respective block mapping sizes. This could be
> added by replacing \field{reserved} with an array length, for example.
Sorry I don't get the use case where several page size bitmaps should be
exposed.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> ---
> I had this change planned for a page-table sharing extension, but it
> seems to be useful for the VFIO support in QEMU as well [1].
> 
> [1] https://lore.kernel.org/qemu-devel/3da60c1b-6897-7ab1-3a67-bec44fa00a54@redhat.com/
> ---
> ---
>  virtio-iommu.tex | 33 ++++++++++++++++++++++++++++++++-
>  1 file changed, 32 insertions(+), 1 deletion(-)
> 
> diff --git a/virtio-iommu.tex b/virtio-iommu.tex
> index 08b358a..d7be13d 100644
> --- a/virtio-iommu.tex
> +++ b/virtio-iommu.tex
> @@ -664,7 +664,8 @@ \subsubsection{PROBE request}\label{sec:Device Types / IOMMU Device / Device ope
>  \subsubsection{PROBE properties}\label{sec:Device Types / IOMMU Device / Device operations / PROBE properties}
>  
>  \begin{lstlisting}
> -#define VIRTIO_IOMMU_PROBE_T_RESV_MEM   1
> +#define VIRTIO_IOMMU_PROBE_T_RESV_MEM         1
> +#define VIRTIO_IOMMU_PROBE_T_PAGE_SIZE_MASK   2
>  \end{lstlisting}
>  
>  \paragraph{Property RESV_MEM}\label{sec:Device Types / IOMMU Device / Device operations / PROBE properties / RESVMEM}
> @@ -727,6 +728,36 @@ \subsubsection{PROBE properties}\label{sec:Device Types / IOMMU Device / Device
>  The device SHOULD NOT allow accesses from the endpoint to RESV_MEM regions
>  to affect any other component than the endpoint and the driver.
>  
> +\paragraph{Property PAGE_SIZE_MASK}\label{sec:Device Types / IOMMU Device / Device operations / PROBE properties / PAGESIZEMASK}
"Property PAGESIZEMASK" to be homegeneous with "Property RESV_MEM"?
> +
> +The PAGE_SIZE_MASK property overrides the global \field{page_size_mask}
> +configuration for an endpoint.
> +
> +\begin{lstlisting}
> +struct virtio_iommu_probe_page_size_mask {
> +  struct virtio_iommu_probe_property head;
> +  u8    reserved[4];
> +  le64  page_size_mask;
> +};
> +\end{lstlisting}

Maybe it would be useful to put the context of use of the new property in
the spec (basically what you wrote in the commit message). Also reinforce
that if no per-endpoint page size mask is returned, the default one
applies. The chapters describing the global page_size_mask field may be
slightly augmented to mention the new property...
> +
> +The \field{page_size_mask} field behaves in the same way as the global
> +\field{page_size_mask} field. The least significant bit describes the
> +mapping granularity, and additional bits are hints (see \ref{sec:Device
> +Types / IOMMU Device / Device operations})
> +
> +\drivernormative{\subparagraph}{Property PAGE_SIZE_MASK}{Device Types / IOMMU Device / Device operations / PROBE properties / PAGESIZEMASK}
> +
> +The driver MUST ignore \field{reserved}.
> +
> +\devicenormative{\subparagraph}{Property PAGE_SIZE_MASK}{Device Types / IOMMU Device / Device operations / PROBE properties / PAGESIZEMASK}
> +
> +The device SHOULD set \field{reserved} to zero.
> +
> +The device MUST set at least one bit in \field{page_size_mask}, describing
> +the page granularity. The device MAY set more than one bit in
> +\field{page_size_mask}.
> +
>  \subsubsection{Fault reporting}\label{sev:Device Types / IOMMU Device / Device operations / Fault reporting}
>  
>  The device can report translation faults and other significant
> 

Thanks

Eric



* Re: [virtio-dev] Re: [RFC PATCH] virtio-iommu: Add PAGE_SIZE_MASK property
  2020-03-23 14:22 ` [virtio-dev] " Auger Eric
@ 2020-03-26 10:49   ` Jean-Philippe Brucker
  2020-03-26 11:20     ` Auger Eric
  0 siblings, 1 reply; 8+ messages in thread
From: Jean-Philippe Brucker @ 2020-03-26 10:49 UTC (permalink / raw)
  To: Auger Eric; +Cc: virtio-dev, bbhushan2

On Mon, Mar 23, 2020 at 03:22:40PM +0100, Auger Eric wrote:
> Hi Jean,
> 
> On 3/23/20 2:38 PM, Jean-Philippe Brucker wrote:
> > Add a PROBE property to declare the mapping granularity per endpoint.
> > The virtio-iommu device already declares a granule in its config space,
> > but when endpoints are behind different physical IOMMUs, they may have
> > different mapping granules. This new property allows to override the
> > global page_size_mask for each endpoint.
> > 
> > In the future it may be useful to describe more than one page_size_mask
> > for each endpoint, and allow them to negotiate it during ATTACH. For
> > example two masks could allow the driver to choose between 4k and 64k
> > granule, along with their respective block mapping sizes. This could be
> > added by replacing \field{reserved} with an array length, for example.
> Sorry I don't get the use case where several page size bitmaps should be
> exposed.

For a 4k granule you get block mappings of 2M and 1G. For a 64k granule
you get 512M and 4T block mappings. If you want to communicate both
options to the guest, you need two separate masks, 0x40201000 and
0x40020010000. Then the guest could choose one of the granules during
attach, if we add a flag to the attach request. I'm not suggesting we do
that now, just trying to make sure it can be extended if anyone actually
wants it. Personally I don't think it's worth adding, especially given the
additional work required in the host.
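
To make the bit layout concrete, here is a small standalone sketch of how
such a mask decomposes into a granule and block sizes (the helper is made up
for illustration; only the two mask values come from the example above):

  #include <stdio.h>

  /* The lowest set bit of a page_size_mask is the granule; the remaining
   * bits are the block mapping sizes usable with that granule. */
  static void decode_mask(unsigned long long mask)
  {
          unsigned long long granule = mask & -mask;
          unsigned long long blocks  = mask & ~granule;

          printf("granule 0x%llx, block sizes 0x%llx\n", granule, blocks);
  }

  int main(void)
  {
          decode_mask(0x40201000ULL);     /* 4k granule: 2M and 1G blocks */
          decode_mask(0x40020010000ULL);  /* 64k granule: 512M and 4T blocks */
          return 0;
  }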

> > 
> > Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> > ---
> > I had this change planned for a page-table sharing extension, but it
> > seems to be useful for the VFIO support in QEMU as well [1].
> > 
> > [1] https://lore.kernel.org/qemu-devel/3da60c1b-6897-7ab1-3a67-bec44fa00a54@redhat.com/
> > ---
> > ---
> >  virtio-iommu.tex | 33 ++++++++++++++++++++++++++++++++-
> >  1 file changed, 32 insertions(+), 1 deletion(-)
> > 
> > diff --git a/virtio-iommu.tex b/virtio-iommu.tex
> > index 08b358a..d7be13d 100644
> > --- a/virtio-iommu.tex
> > +++ b/virtio-iommu.tex
> > @@ -664,7 +664,8 @@ \subsubsection{PROBE request}\label{sec:Device Types / IOMMU Device / Device ope
> >  \subsubsection{PROBE properties}\label{sec:Device Types / IOMMU Device / Device operations / PROBE properties}
> >  
> >  \begin{lstlisting}
> > -#define VIRTIO_IOMMU_PROBE_T_RESV_MEM   1
> > +#define VIRTIO_IOMMU_PROBE_T_RESV_MEM         1
> > +#define VIRTIO_IOMMU_PROBE_T_PAGE_SIZE_MASK   2
> >  \end{lstlisting}
> >  
> >  \paragraph{Property RESV_MEM}\label{sec:Device Types / IOMMU Device / Device operations / PROBE properties / RESVMEM}
> > @@ -727,6 +728,36 @@ \subsubsection{PROBE properties}\label{sec:Device Types / IOMMU Device / Device
> >  The device SHOULD NOT allow accesses from the endpoint to RESV_MEM regions
> >  to affect any other component than the endpoint and the driver.
> >  
> > +\paragraph{Property PAGE_SIZE_MASK}\label{sec:Device Types / IOMMU Device / Device operations / PROBE properties / PAGESIZEMASK}
> "Property PAGESIZEMASK" to be homegeneous with "Property RESV_MEM"?

I don't understand, isn't the underscore version more homogeneous? I only
used PAGESIZEMASK and RESVMEM in the labels to avoid trouble with LaTeX,
which complains about underscores in labels (maybe because of the
underscore package). But labels are only used for resolving references and
don't appear in the final text.

> > +
> > +The PAGE_SIZE_MASK property overrides the global \field{page_size_mask}
> > +configuration for an endpoint.
> > +
> > +\begin{lstlisting}
> > +struct virtio_iommu_probe_page_size_mask {
> > +  struct virtio_iommu_probe_property head;
> > +  u8    reserved[4];
> > +  le64  page_size_mask;
> > +};
> > +\end{lstlisting}
> 
> Maybe it would be useful to put in the spec the context of use of the
> new property (basically what you wrote in the commit msg). Also
> re-inforce that if no ep page size mask is returned the default one
> applies. Chapters describing page_size_mask global field may be sligtly
> augmented to mention the new prop...

I'll add that if no page size mask property is present, the global page
size mask applies. And in Device operations I'll add that these limits are
described in the device configuration as well as probe properties.

Thanks,
Jean

> > +
> > +The \field{page_size_mask} field behaves in the same way as the global
> > +\field{page_size_mask} field. The least significant bit describes the
> > +mapping granularity, and additional bits are hints (see \ref{sec:Device
> > +Types / IOMMU Device / Device operations})
> > +
> > +\drivernormative{\subparagraph}{Property PAGE_SIZE_MASK}{Device Types / IOMMU Device / Device operations / PROBE properties / PAGESIZEMASK}
> > +
> > +The driver MUST ignore \field{reserved}.
> > +
> > +\devicenormative{\subparagraph}{Property PAGE_SIZE_MASK}{Device Types / IOMMU Device / Device operations / PROBE properties / PAGESIZEMASK}
> > +
> > +The device SHOULD set \field{reserved} to zero.
> > +
> > +The device MUST set at least one bit in \field{page_size_mask}, describing
> > +the page granularity. The device MAY set more than one bit in
> > +\field{page_size_mask}.
> > +
> >  \subsubsection{Fault reporting}\label{sev:Device Types / IOMMU Device / Device operations / Fault reporting}
> >  
> >  The device can report translation faults and other significant
> > 
> 
> Thanks
> 
> Eric
> 
> 

* Re: [virtio-dev] Re: [RFC PATCH] virtio-iommu: Add PAGE_SIZE_MASK property
  2020-03-26 10:49   ` Jean-Philippe Brucker
@ 2020-03-26 11:20     ` Auger Eric
       [not found]       ` <MWHPR1801MB1966D5B702D86554421EE3D6E3CC0@MWHPR1801MB1966.namprd18.prod.outlook.com>
  0 siblings, 1 reply; 8+ messages in thread
From: Auger Eric @ 2020-03-26 11:20 UTC (permalink / raw)
  To: Jean-Philippe Brucker; +Cc: virtio-dev, bbhushan2

Hi Jean,

On 3/26/20 11:49 AM, Jean-Philippe Brucker wrote:
> On Mon, Mar 23, 2020 at 03:22:40PM +0100, Auger Eric wrote:
>> Hi Jean,
>>
>> On 3/23/20 2:38 PM, Jean-Philippe Brucker wrote:
>>> Add a PROBE property to declare the mapping granularity per endpoint.
>>> The virtio-iommu device already declares a granule in its config space,
>>> but when endpoints are behind different physical IOMMUs, they may have
>>> different mapping granules. This new property allows to override the
>>> global page_size_mask for each endpoint.
>>>
>>> In the future it may be useful to describe more than one page_size_mask
>>> for each endpoint, and allow them to negotiate it during ATTACH. For
>>> example two masks could allow the driver to choose between 4k and 64k
>>> granule, along with their respective block mapping sizes. This could be
>>> added by replacing \field{reserved} with an array length, for example.
>> Sorry I don't get the use case where several page size bitmaps should be
>> exposed.
> 
> For a 4k granule you get block mappings of 2M and 1G. For a 64k granule
> you get 512M and 4T block mappings. If you want to communicate both
> options to the guest, you need two separate masks, 0x40201000 and
> 0x40020010000. Then the guest could choose one of the granules during
> attach, if we add a flag to the attach request. I'm not suggesting we do
> that now, just trying to make sure it can be extended if anyone actually
> wants it. Personally I don't think it's worth adding, especially given the
> additional work required in the host.
OK I get it now.
> 
>>>
>>> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
>>> ---
>>> I had this change planned for a page-table sharing extension, but it
>>> seems to be useful for the VFIO support in QEMU as well [1].
>>>
>>> [1] https://lore.kernel.org/qemu-devel/3da60c1b-6897-7ab1-3a67-bec44fa00a54@redhat.com/
>>> ---
>>> ---
>>>  virtio-iommu.tex | 33 ++++++++++++++++++++++++++++++++-
>>>  1 file changed, 32 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/virtio-iommu.tex b/virtio-iommu.tex
>>> index 08b358a..d7be13d 100644
>>> --- a/virtio-iommu.tex
>>> +++ b/virtio-iommu.tex
>>> @@ -664,7 +664,8 @@ \subsubsection{PROBE request}\label{sec:Device Types / IOMMU Device / Device ope
>>>  \subsubsection{PROBE properties}\label{sec:Device Types / IOMMU Device / Device operations / PROBE properties}
>>>  
>>>  \begin{lstlisting}
>>> -#define VIRTIO_IOMMU_PROBE_T_RESV_MEM   1
>>> +#define VIRTIO_IOMMU_PROBE_T_RESV_MEM         1
>>> +#define VIRTIO_IOMMU_PROBE_T_PAGE_SIZE_MASK   2
>>>  \end{lstlisting}
>>>  
>>>  \paragraph{Property RESV_MEM}\label{sec:Device Types / IOMMU Device / Device operations / PROBE properties / RESVMEM}
>>> @@ -727,6 +728,36 @@ \subsubsection{PROBE properties}\label{sec:Device Types / IOMMU Device / Device
>>>  The device SHOULD NOT allow accesses from the endpoint to RESV_MEM regions
>>>  to affect any other component than the endpoint and the driver.
>>>  
>>> +\paragraph{Property PAGE_SIZE_MASK}\label{sec:Device Types / IOMMU Device / Device operations / PROBE properties / PAGESIZEMASK}
>> "Property PAGESIZEMASK" to be homegeneous with "Property RESV_MEM"?>
> I don't understand, isn't the underscore version more homogeneous? I only
> used PAGESIZEMASK and RESVMEM in the label to avoid troubles with latex,
> which complains about underscores in labels (maybe because of the
> underscore package). But labels are only used for resolving references and
> don't appear in the final text.
Sorry, I thought the paragraph title was PAGESIZEMASK instead of
"Property PAGE_SIZE_MASK" but I misread the description. Forgive me for
the noise.
> 
>>> +
>>> +The PAGE_SIZE_MASK property overrides the global \field{page_size_mask}
>>> +configuration for an endpoint.
>>> +
>>> +\begin{lstlisting}
>>> +struct virtio_iommu_probe_page_size_mask {
>>> +  struct virtio_iommu_probe_property head;
>>> +  u8    reserved[4];
>>> +  le64  page_size_mask;
>>> +};
>>> +\end{lstlisting}
>>
>> Maybe it would be useful to put in the spec the context of use of the
>> new property (basically what you wrote in the commit msg). Also
>> re-inforce that if no ep page size mask is returned the default one
>> applies. Chapters describing page_size_mask global field may be sligtly
>> augmented to mention the new prop...
> 
> I'll add that if no page size mask property is present, the global page
> size mask applies. And in Device operations I'll add that these limits are
> described in the device configuration as well as probe properties.
OK

Thanks

Eric
> 
> Thanks,
> Jean
> 
>>> +
>>> +The \field{page_size_mask} field behaves in the same way as the global
>>> +\field{page_size_mask} field. The least significant bit describes the
>>> +mapping granularity, and additional bits are hints (see \ref{sec:Device
>>> +Types / IOMMU Device / Device operations})
>>> +
>>> +\drivernormative{\subparagraph}{Property PAGE_SIZE_MASK}{Device Types / IOMMU Device / Device operations / PROBE properties / PAGESIZEMASK}
>>> +
>>> +The driver MUST ignore \field{reserved}.
>>> +
>>> +\devicenormative{\subparagraph}{Property PAGE_SIZE_MASK}{Device Types / IOMMU Device / Device operations / PROBE properties / PAGESIZEMASK}
>>> +
>>> +The device SHOULD set \field{reserved} to zero.
>>> +
>>> +The device MUST set at least one bit in \field{page_size_mask}, describing
>>> +the page granularity. The device MAY set more than one bit in
>>> +\field{page_size_mask}.
>>> +
>>>  \subsubsection{Fault reporting}\label{sev:Device Types / IOMMU Device / Device operations / Fault reporting}
>>>  
>>>  The device can report translation faults and other significant
>>>
>>
>> Thanks
>>
>> Eric
>>
>>

* [virtio-dev] Re: [EXT] Re: [virtio-dev] Re: [RFC PATCH] virtio-iommu: Add PAGE_SIZE_MASK property
       [not found]       ` <MWHPR1801MB1966D5B702D86554421EE3D6E3CC0@MWHPR1801MB1966.namprd18.prod.outlook.com>
@ 2020-03-27  9:00         ` Jean-Philippe Brucker
       [not found]           ` <MWHPR1801MB1966713697492AEF974ED528E3CC0@MWHPR1801MB1966.namprd18.prod.outlook.com>
  0 siblings, 1 reply; 8+ messages in thread
From: Jean-Philippe Brucker @ 2020-03-27  9:00 UTC (permalink / raw)
  To: Bharat Bhushan; +Cc: Auger Eric, virtio-dev

On Fri, Mar 27, 2020 at 05:20:56AM +0000, Bharat Bhushan wrote:
> Hi Jean,
> 
> > -----Original Message-----
> > From: Auger Eric <eric.auger@redhat.com>
> > Sent: Thursday, March 26, 2020 4:50 PM
> > To: Jean-Philippe Brucker <jean-philippe@linaro.org>
> > Cc: virtio-dev@lists.oasis-open.org; Bharat Bhushan <bbhushan2@marvell.com>
> > Subject: [EXT] Re: [virtio-dev] Re: [RFC PATCH] virtio-iommu: Add PAGE_SIZE_MASK
> > property
> > 
> > External Email
> > 
> > ----------------------------------------------------------------------
> > Hi Jean,
> > 
> > On 3/26/20 11:49 AM, Jean-Philippe Brucker wrote:
> > > On Mon, Mar 23, 2020 at 03:22:40PM +0100, Auger Eric wrote:
> > >> Hi Jean,
> > >>
> > >> On 3/23/20 2:38 PM, Jean-Philippe Brucker wrote:
> > >>> Add a PROBE property to declare the mapping granularity per endpoint.
> > >>> The virtio-iommu device already declares a granule in its config
> > >>> space, but when endpoints are behind different physical IOMMUs, they
> > >>> may have different mapping granules. This new property allows to
> > >>> override the global page_size_mask for each endpoint.
> > >>>
> > >>> In the future it may be useful to describe more than one
> > >>> page_size_mask for each endpoint, and allow them to negotiate it
> > >>> during ATTACH. For example two masks could allow the driver to
> > >>> choose between 4k and 64k granule, along with their respective block
> > >>> mapping sizes. This could be added by replacing \field{reserved} with an array
> > length, for example.
> > >> Sorry I don't get the use case where several page size bitmaps should
> > >> be exposed.
> > >
> > > For a 4k granule you get block mappings of 2M and 1G. For a 64k
> > > granule you get 512M and 4T block mappings. If you want to communicate
> > > both options to the guest, you need two separate masks, 0x40201000 and
> > > 0x40020010000. Then the guest could choose one of the granules during
> > > attach, if we add a flag to the attach request. I'm not suggesting we
> > > do that now, just trying to make sure it can be extended if anyone
> > > actually wants it. Personally I don't think it's worth adding,
> > > especially given the additional work required in the host.
> > OK I get it now.
> 
> Want some clarification about the two page-size-mask configurations available:
>  - the global page-size-mask configuration
>  - the per-endpoint page-size-mask configuration
> 
> A PAGE_SIZE_MASK probe for an endpoint can return a zero or a non-zero value.
> If it returns a non-zero value, then it overrides the global configuration.
> If the PAGE_SIZE_MASK probe for an endpoint returns a zero value, then the global page-size-mask configuration will be used.
> 
> Is that correct?

Yes. If a PAGE_SIZE_MASK property is available for an endpoint, the driver
should use that mask. Otherwise it should use the global mask, which is
always provided.
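
As a rough sketch of what I mean on the driver side (the structure and field
names below are placeholders, not taken from the spec or from any driver):

  /* Placeholder: whatever the driver records per endpoint after PROBE */
  struct ep_info {
          bool has_pgsize_prop;   /* a PAGE_SIZE_MASK property was returned */
          u64  pgsize_mask;       /* its page_size_mask value */
  };

  /* Select the mask that applies to this endpoint */
  static u64 ep_page_size_mask(struct ep_info *ep, u64 global_mask)
  {
          if (ep->has_pgsize_prop)
                  return ep->pgsize_mask;  /* per-endpoint override */
          return global_mask;              /* global mask, always provided */
  }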

I wonder, should we introduce some form of negotiation now?  If the driver
doesn't know about the new probe property, it will use the global mask. At
some point it will send a MAP request not aligned on the page granule, and
the device will abort the request. If instead we add a flag and page mask
field to the attach request, the device would know that the driver didn't
understand the per-endpoint page mask and abort the attach.

Thanks,
Jean



* [virtio-dev] Re: [EXT] Re: [virtio-dev] Re: [RFC PATCH] virtio-iommu: Add PAGE_SIZE_MASK property
       [not found]             ` <MWHPR1801MB19664CB43F9CE2377B74C1E6E3CC0@MWHPR1801MB1966.namprd18.prod.outlook.com>
@ 2020-03-27  9:35               ` Jean-Philippe Brucker
  2020-03-27 11:05                 ` Auger Eric
  0 siblings, 1 reply; 8+ messages in thread
From: Jean-Philippe Brucker @ 2020-03-27  9:35 UTC (permalink / raw)
  To: Bharat Bhushan; +Cc: Auger Eric, virtio-dev

On Fri, Mar 27, 2020 at 09:17:43AM +0000, Bharat Bhushan wrote:
> 
> Sent again, somehow the email-address got corrupted.
> 
> > -----Original Message-----
> > From: Bharat Bhushan
> > Sent: Friday, March 27, 2020 2:46 PM
> > To: ^[@mx0a-0016f401.pphosted.com
> > Cc: Auger Eric <eric.auger@redhat.com>; virtio-dev@lists.oasis-open.org
> > Subject: RE: [EXT] Re: [virtio-dev] Re: [RFC PATCH] virtio-iommu: Add
> > PAGE_SIZE_MASK property
> > 
> > Hi Jean,
> > 
> > > -----Original Message-----
> > > From: Jean-Philippe Brucker <jean-philippe@linaro.org>
> > > Sent: Friday, March 27, 2020 2:31 PM
> > > To: Bharat Bhushan <bbhushan2@marvell.com>
> > > Cc: Auger Eric <eric.auger@redhat.com>;
> > > virtio-dev@lists.oasis-open.org
> > > Subject: Re: [EXT] Re: [virtio-dev] Re: [RFC PATCH] virtio-iommu: Add
> > > PAGE_SIZE_MASK property
> > >
> > > On Fri, Mar 27, 2020 at 05:20:56AM +0000, Bharat Bhushan wrote:
> > > > Hi Jean,
> > > >
> > > > > -----Original Message-----
> > > > > From: Auger Eric <eric.auger@redhat.com>
> > > > > Sent: Thursday, March 26, 2020 4:50 PM
> > > > > To: Jean-Philippe Brucker <jean-philippe@linaro.org>
> > > > > Cc: virtio-dev@lists.oasis-open.org; Bharat Bhushan
> > > > > <bbhushan2@marvell.com>
> > > > > Subject: [EXT] Re: [virtio-dev] Re: [RFC PATCH] virtio-iommu: Add
> > > > > PAGE_SIZE_MASK property
> > > > >
> > > > > External Email
> > > > >
> > > > > ------------------------------------------------------------------
> > > > > --
> > > > > --
> > > > > Hi Jean,
> > > > >
> > > > > On 3/26/20 11:49 AM, Jean-Philippe Brucker wrote:
> > > > > > On Mon, Mar 23, 2020 at 03:22:40PM +0100, Auger Eric wrote:
> > > > > >> Hi Jean,
> > > > > >>
> > > > > >> On 3/23/20 2:38 PM, Jean-Philippe Brucker wrote:
> > > > > >>> Add a PROBE property to declare the mapping granularity per endpoint.
> > > > > >>> The virtio-iommu device already declares a granule in its
> > > > > >>> config space, but when endpoints are behind different physical
> > > > > >>> IOMMUs, they may have different mapping granules. This new
> > > > > >>> property allows to override the global page_size_mask for each endpoint.
> > > > > >>>
> > > > > >>> In the future it may be useful to describe more than one
> > > > > >>> page_size_mask for each endpoint, and allow them to negotiate
> > > > > >>> it during ATTACH. For example two masks could allow the driver
> > > > > >>> to choose between 4k and 64k granule, along with their
> > > > > >>> respective block mapping sizes. This could be added by
> > > > > >>> replacing \field{reserved} with an array
> > > > > length, for example.
> > > > > >> Sorry I don't get the use case where several page size bitmaps
> > > > > >> should be exposed.
> > > > > >
> > > > > > For a 4k granule you get block mappings of 2M and 1G. For a 64k
> > > > > > granule you get 512M and 4T block mappings. If you want to
> > > > > > communicate both options to the guest, you need two separate
> > > > > > masks, 0x40201000 and 0x40020010000. Then the guest could choose
> > > > > > one of the granules during attach, if we add a flag to the
> > > > > > attach request. I'm not suggesting we do that now, just trying
> > > > > > to make sure it can be extended if anyone actually wants it.
> > > > > > Personally I don't think it's worth adding, especially given the
> > > > > > additional work required in
> > > the host.
> > > > > OK I get it now.
> > > >
> > > > What some clarification about two page-size-mask configurations available.
> > > >  - Global configuration for page-size-mask
> > > >  - per endpoint page-size-mask configuration
> > > >
> > > > PAGE_SIZE_MASK probe for and endpoint can return zero or non-zero value.
> > > > If it returns non-zero value than it will override the global configuration.
> > > > If PAGE_SIZE_MASK probe for and endpoint return zero value than
> > > > global page-
> > > size-mask configuration will be used.
> > > >
> > > > Is that correct?
> > >
> > > Yes. If a PAGE_SIZE_MASK property is available for an endpoint, the
> > > driver should use that mask. Otherwise it should use the global mask, which is
> > always provided.
> > 
> > That means even if the device returns ZERO as the page-size mask, it will override the global
> > page-size-mask configuration?
> > So the device should not return the PAGE_SIZE_MASK property when the page-size mask is not
> > set for that endpoint, that is, when it is zero?
> > 
> > Or
> > 
> > The device can return page-size-mask = 0 for the endpoint in the PAGE_SIZE_MASK
> > property, and the driver has to use the global page-size-mask when the PAGE_SIZE_MASK
> > property returns page-size-mask = 0.
> > 

Ah sorry I didn't get the question. page_size_mask == 0 isn't valid:

	The device MUST set at least one bit in \field{page_size_mask},
	describing the page granularity. The device MAY set more than one
	bit in \field{page_size_mask}.

So the driver doesn't have to expect a value of 0; that would be a device
bug. If it wants to be on the safe side, it can reject a page_size_mask
of 0 and use the global mask instead.

> > > I wonder, should we introduce some form of negotiation now?  If the
> > > driver doesn't know about the new probe property, it will use the
> > > global mask. At some point it will send a MAP request not aligned on
> > > the page granule, and the device will abort the request. If instead we
> > > add a flag and page mask field to the attach request, the device would
> > > know that the driver didn't understand the per-endpoint page mask and abort
> > the attach.
> > 
> > What if, when the device supports the PAGE_SIZE_MASK property, it always returns the
> > property, with a zero or non-zero mask?
> > If the mask is zero, use the global mask; otherwise use the per-endpoint mask.

My question is about the case where the device supports the PAGE_SIZE_MASK
property and the driver doesn't. For example a Linux v4.6 guest would ignore
the PAGE_SIZE_MASK property. Then it will use the global mask, and send MAP
requests that aren't aligned on the per-endpoint page granule (they fail with
S_RANGE status). Should we, with the introduction of the PAGE_SIZE_MASK
property, also introduce a page size negotiation mechanism, so that the
device knows early whether or not the guest understands the provided
PAGE_SIZE_MASK property?
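
Roughly the kind of extension I have in mind, with every name below made up
purely for illustration (this is not a proposed layout):

  /* Hypothetical flag and fields added to the ATTACH request */
  #define VIRTIO_IOMMU_ATTACH_F_PGSIZE  (1 << 0)   /* made-up flag name */

  struct virtio_iommu_attach_pgsize_ext {
          le32  flags;            /* driver sets the made-up F_PGSIZE flag */
          le64  page_size_mask;   /* granule selected by the driver */
  };

The device could then fail the ATTACH when the flag is absent or the mask is
incompatible, rather than failing later MAP requests.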

Thanks,
Jean


* Re: [virtio-dev] Re: [EXT] Re: [virtio-dev] Re: [RFC PATCH] virtio-iommu: Add PAGE_SIZE_MASK property
  2020-03-27  9:35               ` Jean-Philippe Brucker
@ 2020-03-27 11:05                 ` Auger Eric
  2020-03-27 11:39                   ` Jean-Philippe Brucker
  0 siblings, 1 reply; 8+ messages in thread
From: Auger Eric @ 2020-03-27 11:05 UTC (permalink / raw)
  To: Jean-Philippe Brucker, Bharat Bhushan; +Cc: virtio-dev

Hi Jean, Bharat,

On 3/27/20 10:35 AM, Jean-Philippe Brucker wrote:
> On Fri, Mar 27, 2020 at 09:17:43AM +0000, Bharat Bhushan wrote:
>>
>> Sent again, somehow the email-address got corrupted.
>>
>>> -----Original Message-----
>>> From: Bharat Bhushan
>>> Sent: Friday, March 27, 2020 2:46 PM
>>> To: ^[@mx0a-0016f401.pphosted.com
>>> Cc: Auger Eric <eric.auger@redhat.com>; virtio-dev@lists.oasis-open.org
>>> Subject: RE: [EXT] Re: [virtio-dev] Re: [RFC PATCH] virtio-iommu: Add
>>> PAGE_SIZE_MASK property
>>>
>>> Hi Jean,
>>>
>>>> -----Original Message-----
>>>> From: Jean-Philippe Brucker <jean-philippe@linaro.org>
>>>> Sent: Friday, March 27, 2020 2:31 PM
>>>> To: Bharat Bhushan <bbhushan2@marvell.com>
>>>> Cc: Auger Eric <eric.auger@redhat.com>;
>>>> virtio-dev@lists.oasis-open.org
>>>> Subject: Re: [EXT] Re: [virtio-dev] Re: [RFC PATCH] virtio-iommu: Add
>>>> PAGE_SIZE_MASK property
>>>>
>>>> On Fri, Mar 27, 2020 at 05:20:56AM +0000, Bharat Bhushan wrote:
>>>>> Hi Jean,
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Auger Eric <eric.auger@redhat.com>
>>>>>> Sent: Thursday, March 26, 2020 4:50 PM
>>>>>> To: Jean-Philippe Brucker <jean-philippe@linaro.org>
>>>>>> Cc: virtio-dev@lists.oasis-open.org; Bharat Bhushan
>>>>>> <bbhushan2@marvell.com>
>>>>>> Subject: [EXT] Re: [virtio-dev] Re: [RFC PATCH] virtio-iommu: Add
>>>>>> PAGE_SIZE_MASK property
>>>>>>
>>>>>> External Email
>>>>>>
>>>>>> ------------------------------------------------------------------
>>>>>> --
>>>>>> --
>>>>>> Hi Jean,
>>>>>>
>>>>>> On 3/26/20 11:49 AM, Jean-Philippe Brucker wrote:
>>>>>>> On Mon, Mar 23, 2020 at 03:22:40PM +0100, Auger Eric wrote:
>>>>>>>> Hi Jean,
>>>>>>>>
>>>>>>>> On 3/23/20 2:38 PM, Jean-Philippe Brucker wrote:
>>>>>>>>> Add a PROBE property to declare the mapping granularity per endpoint.
>>>>>>>>> The virtio-iommu device already declares a granule in its
>>>>>>>>> config space, but when endpoints are behind different physical
>>>>>>>>> IOMMUs, they may have different mapping granules. This new
>>>>>>>>> property allows to override the global page_size_mask for each endpoint.
>>>>>>>>>
>>>>>>>>> In the future it may be useful to describe more than one
>>>>>>>>> page_size_mask for each endpoint, and allow them to negotiate
>>>>>>>>> it during ATTACH. For example two masks could allow the driver
>>>>>>>>> to choose between 4k and 64k granule, along with their
>>>>>>>>> respective block mapping sizes. This could be added by
>>>>>>>>> replacing \field{reserved} with an array
>>>>>> length, for example.
>>>>>>>> Sorry I don't get the use case where several page size bitmaps
>>>>>>>> should be exposed.
>>>>>>>
>>>>>>> For a 4k granule you get block mappings of 2M and 1G. For a 64k
>>>>>>> granule you get 512M and 4T block mappings. If you want to
>>>>>>> communicate both options to the guest, you need two separate
>>>>>>> masks, 0x40201000 and 0x40020010000. Then the guest could choose
>>>>>>> one of the granules during attach, if we add a flag to the
>>>>>>> attach request. I'm not suggesting we do that now, just trying
>>>>>>> to make sure it can be extended if anyone actually wants it.
>>>>>>> Personally I don't think it's worth adding, especially given the
>>>>>>> additional work required in
>>>> the host.
>>>>>> OK I get it now.
>>>>>
>>>>> What some clarification about two page-size-mask configurations available.
>>>>>  - Global configuration for page-size-mask
>>>>>  - per endpoint page-size-mask configuration
>>>>>
>>>>> PAGE_SIZE_MASK probe for and endpoint can return zero or non-zero value.
>>>>> If it returns non-zero value than it will override the global configuration.
>>>>> If PAGE_SIZE_MASK probe for and endpoint return zero value than
>>>>> global page-
>>>> size-mask configuration will be used.
>>>>>
>>>>> Is that correct?
>>>>
>>>> Yes. If a PAGE_SIZE_MASK property is available for an endpoint, the
>>>> driver should use that mask. Otherwise it should use the global mask, which is
>>> always provided.
>>>
>>> That mean even if the device return ZERO as page-size mask it will override global
>>> page-size-mask configuration?
>>> So device should not return PAGE_SIZE_MASK property when page-size-mask not
>>> set for that endpoint, that is zero?
>>>
>>> Or
>>>
>>> Device can return page-size-mask = 0 for the endpoint in PAGE_SIZE_MASK
>>> property, driver have to use global page-size-mask when PAGE_SIZE_MASK
>>> property returns page-size-mask = 0.
>>>
> 
> Ah sorry I didn't get the question. page_size_mask == 0 isn't valid:
> 
> 	The device MUST set at least one bit in \field{page_size_mask},
> 	describing the page granularity. The device MAY set more than one
> 	bit in \field{page_size_mask}.
> 
> So the driver doesn't have to expect a value of 0, that would be a device
> bug. If it wants to be on the safe side, then it rejects a page_size_mask
> of 0 and use the global mask instead.

So I think on the device's side, the best is to not return any
PAGE_SIZE_MASK property for an endpoint which does not have any special
limitation (for instance a virtio endpoint), as opposed to a VFIO endpoint,
right? Supporting this probe property does not mean we always report it, right?
> 
>>>> I wonder, should we introduce some form of negotiation now?  If the
>>>> driver doesn't know about the new probe property, it will use the
>>>> global mask. At some point it will send a MAP request not aligned on
>>>> the page granule, and the device will abort the request. If instead we
>>>> add a flag and page mask field to the attach request, the device would
>>>> know that the driver didn't understand the per-endpoint page mask and abort
>>> the attach.
>>>
>>> What if device support PAGE_SIZE_MASK property  then it will always return the
>>> property, with zero or non-zero mask.
>>> If zero mask, use global mask otherwise use per-endpoint mask.
> 
> My question is about device supports PAGE_SIZE_MASK property and driver
> doesn't. For example a linux v4.6 guest would ignore the PAGE_SIZE_MASK
> property. Then it will use the global mask, and send MAP requests that
> aren't aligned on the per-endpoint page granule (they fail with S_RANGE
> status). Should we, with the introduction of the PAGE_SIZE_MASK property,
> also introduce a page size negotiation mechanism?  So that the device
> knows early whether the guest understands or not the provided
> PAGE_SIZE_MASK property?

On standard IOMMU drivers, the check is also done on map, right?
If so, and given the current known user pool, I would leave it as it is
today.

Thanks

Eric
> 
> Thanks,
> Jean
> 

* Re: [virtio-dev] Re: [EXT] Re: [virtio-dev] Re: [RFC PATCH] virtio-iommu: Add PAGE_SIZE_MASK property
  2020-03-27 11:05                 ` Auger Eric
@ 2020-03-27 11:39                   ` Jean-Philippe Brucker
  0 siblings, 0 replies; 8+ messages in thread
From: Jean-Philippe Brucker @ 2020-03-27 11:39 UTC (permalink / raw)
  To: Auger Eric; +Cc: Bharat Bhushan, virtio-dev

On Fri, Mar 27, 2020 at 12:05:42PM +0100, Auger Eric wrote:
> Hi Jean, Bharat,
> 
> On 3/27/20 10:35 AM, Jean-Philippe Brucker wrote:
> > On Fri, Mar 27, 2020 at 09:17:43AM +0000, Bharat Bhushan wrote:
> >>
> >> Sent again, somehow the email-address got corrupted.
> >>
> >>> -----Original Message-----
> >>> From: Bharat Bhushan
> >>> Sent: Friday, March 27, 2020 2:46 PM
> >>> To: ^[@mx0a-0016f401.pphosted.com
> >>> Cc: Auger Eric <eric.auger@redhat.com>; virtio-dev@lists.oasis-open.org
> >>> Subject: RE: [EXT] Re: [virtio-dev] Re: [RFC PATCH] virtio-iommu: Add
> >>> PAGE_SIZE_MASK property
> >>>
> >>> Hi Jean,
> >>>
> >>>> -----Original Message-----
> >>>> From: Jean-Philippe Brucker <jean-philippe@linaro.org>
> >>>> Sent: Friday, March 27, 2020 2:31 PM
> >>>> To: Bharat Bhushan <bbhushan2@marvell.com>
> >>>> Cc: Auger Eric <eric.auger@redhat.com>;
> >>>> virtio-dev@lists.oasis-open.org
> >>>> Subject: Re: [EXT] Re: [virtio-dev] Re: [RFC PATCH] virtio-iommu: Add
> >>>> PAGE_SIZE_MASK property
> >>>>
> >>>> On Fri, Mar 27, 2020 at 05:20:56AM +0000, Bharat Bhushan wrote:
> >>>>> Hi Jean,
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Auger Eric <eric.auger@redhat.com>
> >>>>>> Sent: Thursday, March 26, 2020 4:50 PM
> >>>>>> To: Jean-Philippe Brucker <jean-philippe@linaro.org>
> >>>>>> Cc: virtio-dev@lists.oasis-open.org; Bharat Bhushan
> >>>>>> <bbhushan2@marvell.com>
> >>>>>> Subject: [EXT] Re: [virtio-dev] Re: [RFC PATCH] virtio-iommu: Add
> >>>>>> PAGE_SIZE_MASK property
> >>>>>>
> >>>>>> External Email
> >>>>>>
> >>>>>> ------------------------------------------------------------------
> >>>>>> --
> >>>>>> --
> >>>>>> Hi Jean,
> >>>>>>
> >>>>>> On 3/26/20 11:49 AM, Jean-Philippe Brucker wrote:
> >>>>>>> On Mon, Mar 23, 2020 at 03:22:40PM +0100, Auger Eric wrote:
> >>>>>>>> Hi Jean,
> >>>>>>>>
> >>>>>>>> On 3/23/20 2:38 PM, Jean-Philippe Brucker wrote:
> >>>>>>>>> Add a PROBE property to declare the mapping granularity per endpoint.
> >>>>>>>>> The virtio-iommu device already declares a granule in its
> >>>>>>>>> config space, but when endpoints are behind different physical
> >>>>>>>>> IOMMUs, they may have different mapping granules. This new
> >>>>>>>>> property allows to override the global page_size_mask for each endpoint.
> >>>>>>>>>
> >>>>>>>>> In the future it may be useful to describe more than one
> >>>>>>>>> page_size_mask for each endpoint, and allow them to negotiate
> >>>>>>>>> it during ATTACH. For example two masks could allow the driver
> >>>>>>>>> to choose between 4k and 64k granule, along with their
> >>>>>>>>> respective block mapping sizes. This could be added by
> >>>>>>>>> replacing \field{reserved} with an array
> >>>>>> length, for example.
> >>>>>>>> Sorry I don't get the use case where several page size bitmaps
> >>>>>>>> should be exposed.
> >>>>>>>
> >>>>>>> For a 4k granule you get block mappings of 2M and 1G. For a 64k
> >>>>>>> granule you get 512M and 4T block mappings. If you want to
> >>>>>>> communicate both options to the guest, you need two separate
> >>>>>>> masks, 0x40201000 and 0x40020010000. Then the guest could choose
> >>>>>>> one of the granules during attach, if we add a flag to the
> >>>>>>> attach request. I'm not suggesting we do that now, just trying
> >>>>>>> to make sure it can be extended if anyone actually wants it.
> >>>>>>> Personally I don't think it's worth adding, especially given the
> >>>>>>> additional work required in
> >>>> the host.
> >>>>>> OK I get it now.
> >>>>>
> >>>>> What some clarification about two page-size-mask configurations available.
> >>>>>  - Global configuration for page-size-mask
> >>>>>  - per endpoint page-size-mask configuration
> >>>>>
> >>>>> PAGE_SIZE_MASK probe for and endpoint can return zero or non-zero value.
> >>>>> If it returns non-zero value than it will override the global configuration.
> >>>>> If PAGE_SIZE_MASK probe for and endpoint return zero value than
> >>>>> global page-
> >>>> size-mask configuration will be used.
> >>>>>
> >>>>> Is that correct?
> >>>>
> >>>> Yes. If a PAGE_SIZE_MASK property is available for an endpoint, the
> >>>> driver should use that mask. Otherwise it should use the global mask, which is
> >>> always provided.
> >>>
> >>> That mean even if the device return ZERO as page-size mask it will override global
> >>> page-size-mask configuration?
> >>> So device should not return PAGE_SIZE_MASK property when page-size-mask not
> >>> set for that endpoint, that is zero?
> >>>
> >>> Or
> >>>
> >>> Device can return page-size-mask = 0 for the endpoint in PAGE_SIZE_MASK
> >>> property, driver have to use global page-size-mask when PAGE_SIZE_MASK
> >>> property returns page-size-mask = 0.
> >>>
> > 
> > Ah sorry I didn't get the question. page_size_mask == 0 isn't valid:
> > 
> > 	The device MUST set at least one bit in \field{page_size_mask},
> > 	describing the page granularity. The device MAY set more than one
> > 	bit in \field{page_size_mask}.
> > 
> > So the driver doesn't have to expect a value of 0, that would be a device
> > bug. If it wants to be on the safe side, then it rejects a page_size_mask
> > of 0 and use the global mask instead.
> 
> So I think on device's side, the best is to not return any
> PAGE_SIZE_MASK property for an ep which does not have any special
> limitation (for instance a virtio EP), as opposed to a VFIO EP, right?
> Supporting this probe property does not mean we always report it, right?

No, we don't have to report it for devices without special limitations,
since it would be the same as the global mask anyway.

> > 
> >>>> I wonder, should we introduce some form of negotiation now?  If the
> >>>> driver doesn't know about the new probe property, it will use the
> >>>> global mask. At some point it will send a MAP request not aligned on
> >>>> the page granule, and the device will abort the request. If instead we
> >>>> add a flag and page mask field to the attach request, the device would
> >>>> know that the driver didn't understand the per-endpoint page mask and abort
> >>> the attach.
> >>>
> >>> What if device support PAGE_SIZE_MASK property  then it will always return the
> >>> property, with zero or non-zero mask.
> >>> If zero mask, use global mask otherwise use per-endpoint mask.
> > 
> > My question is about device supports PAGE_SIZE_MASK property and driver
> > doesn't. For example a linux v4.6 guest would ignore the PAGE_SIZE_MASK
> > property. Then it will use the global mask, and send MAP requests that
> > aren't aligned on the per-endpoint page granule (they fail with S_RANGE
> > status). Should we, with the introduction of the PAGE_SIZE_MASK property,
> > also introduce a page size negotiation mechanism?  So that the device
> > knows early whether the guest understands or not the provided
> > PAGE_SIZE_MASK property?
> 
> On std iommu drivers, the check also is done on map, right?

The host device needs to check alignment:

  If virt_start, phys_start or (virt_end + 1) is not aligned on the page
  granularity, the device SHOULD reject the request and set status to
  VIRTIO_IOMMU_S_RANGE.

I think host VFIO checks all of these.
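
As a sketch of that device-side check (standalone and purely illustrative;
only the alignment rule itself comes from the spec text quoted above):

  /* Reject MAP parameters that aren't aligned on the endpoint's granule.
   * The granule is the lowest set bit of the applicable page_size_mask. */
  static int check_map_alignment(u64 virt_start, u64 virt_end,
                                 u64 phys_start, u64 page_size_mask)
  {
          u64 granule = page_size_mask & -page_size_mask;

          if ((virt_start | phys_start | (virt_end + 1)) & (granule - 1))
                  return -ERANGE;  /* maps to VIRTIO_IOMMU_S_RANGE */
          return 0;
  }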

Guest IOMMU drivers will base their mappings on the page granule, but that
granule may be wrong if they ignored the PAGE_SIZE_MASK property of an
endpoint.

> If so, and given the current known user pool, I would leave it as it is
> today.

Agreed with this

Thanks,
Jean
