* [PATCH] xen/privcmd: Support correctly 64KB page granularity when mapping memory
@ 2017-05-31 13:03 Julien Grall
  2017-05-31 13:54 ` Boris Ostrovsky
                   ` (3 more replies)
  0 siblings, 4 replies; 22+ messages in thread
From: Julien Grall @ 2017-05-31 13:03 UTC (permalink / raw)
  To: xen-devel
  Cc: sstabellini, boris.ostrovsky, jgross, linux-kernel, Julien Grall,
	stable, Feng Kan

Commit 5995a68 "xen/privcmd: Add support for Linux 64KB page granularity" did
not go far enough to support 64KB in mmap_batch_fn.

The variable 'nr' is the number of 4KB chunks to map. However, when Linux
is using 64KB page granularity the array of pages (vma->vm_private_data)
contains one page per 64KB. Fix it by incrementing st->index correctly.

Furthermore, st->va is not correctly incremented as PAGE_SIZE !=
XEN_PAGE_SIZE.

Fixes: 5995a68 ("xen/privcmd: Add support for Linux 64KB page granularity")
CC: stable@vger.kernel.org
Reported-by: Feng Kan <fkan@apm.com>
Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 drivers/xen/privcmd.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 7a92a5e1d40c..feca75b07fdd 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -362,8 +362,8 @@ static int mmap_batch_fn(void *data, int nr, void *state)
 				st->global_error = 1;
 		}
 	}
-	st->va += PAGE_SIZE * nr;
-	st->index += nr;
+	st->va += XEN_PAGE_SIZE * nr;
+	st->index += nr / XEN_PFN_PER_PAGE;
 
 	return 0;
 }
-- 
2.11.0
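
The arithmetic behind the two changed lines can be sketched standalone.
This is a minimal illustration with assumed constants for a 64KB-page
arm64 kernel (PAGE_SIZE = 64KB, XEN_PAGE_SIZE = the fixed 4KB Xen ABI
granule, XEN_PFN_PER_PAGE = 16); the names mirror the driver's, but this
is not the driver code:

#include <assert.h>

#define XEN_PAGE_SIZE    4096UL   /* Xen ABI granule: always 4KB */
#define PAGE_SIZE        65536UL  /* assumed 64KB kernel page */
#define XEN_PFN_PER_PAGE (PAGE_SIZE / XEN_PAGE_SIZE)  /* 16 */

int main(void)
{
        unsigned long va = 0x200000UL;  /* VA cursor into the mapping */
        unsigned long index = 0;        /* index into the 64KB page array */
        int nr = 32;                    /* this batch maps 32 4KB frames */

        /* Before the fix, the cursors advanced by PAGE_SIZE * nr and by
         * nr, i.e. 16x too fast relative to the 64KB pages. Fixed: */
        va    += XEN_PAGE_SIZE * nr;    /* 32 * 4KB = 128KB of VA  */
        index += nr / XEN_PFN_PER_PAGE; /* 32 / 16  = 2 64KB pages */

        assert(va == 0x200000UL + 32 * 4096UL);
        assert(index == 2);
        return 0;
}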

* Re: [PATCH] xen/privcmd: Support correctly 64KB page granularity when mapping memory
  2017-05-31 13:03 [PATCH] xen/privcmd: Support correctly 64KB page granularity when mapping memory Julien Grall
@ 2017-05-31 13:54 ` Boris Ostrovsky
  2017-06-01 12:50   ` Julien Grall
  2017-06-01 12:50   ` Julien Grall
  2017-05-31 13:54 ` Boris Ostrovsky
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 22+ messages in thread
From: Boris Ostrovsky @ 2017-05-31 13:54 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: sstabellini, jgross, linux-kernel, stable, Feng Kan

On 05/31/2017 09:03 AM, Julien Grall wrote:
> Commit 5995a68 "xen/privcmd: Add support for Linux 64KB page granularity" did
> not go far enough to support 64KB in mmap_batch_fn.
>
> The variable 'nr' is the number of 4KB chunks to map. However, when Linux
> is using 64KB page granularity the array of pages (vma->vm_private_data)
> contains one page per 64KB. Fix it by incrementing st->index correctly.
>
> Furthermore, st->va is not correctly incremented as PAGE_SIZE !=
> XEN_PAGE_SIZE.
>
> Fixes: 5995a68 ("xen/privcmd: Add support for Linux 64KB page granularity")
> CC: stable@vger.kernel.org
> Reported-by: Feng Kan <fkan@apm.com>
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> ---
>  drivers/xen/privcmd.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index 7a92a5e1d40c..feca75b07fdd 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -362,8 +362,8 @@ static int mmap_batch_fn(void *data, int nr, void *state)
>  				st->global_error = 1;
>  		}
>  	}
> -	st->va += PAGE_SIZE * nr;
> -	st->index += nr;
> +	st->va += XEN_PAGE_SIZE * nr;
> +	st->index += nr / XEN_PFN_PER_PAGE;
>  
>  	return 0;
>  }


Are we still using PAGE_MASK for xen_remap_domain_gfn_array()?

-boris

* Re: [PATCH] xen/privcmd: Support correctly 64KB page granularity when mapping memory
  2017-05-31 13:54 ` Boris Ostrovsky
@ 2017-06-01 12:50   ` Julien Grall
  2017-06-01 13:33     ` Boris Ostrovsky
  2017-06-01 13:33     ` Boris Ostrovsky
  2017-06-01 12:50   ` Julien Grall
  1 sibling, 2 replies; 22+ messages in thread
From: Julien Grall @ 2017-06-01 12:50 UTC (permalink / raw)
  To: Boris Ostrovsky, xen-devel
  Cc: sstabellini, jgross, linux-kernel, stable, Feng Kan

Hi Boris,

On 31/05/17 14:54, Boris Ostrovsky wrote:
> On 05/31/2017 09:03 AM, Julien Grall wrote:
>> Commit 5995a68 "xen/privcmd: Add support for Linux 64KB page granularity" did
>> not go far enough to support 64KB in mmap_batch_fn.
>>
>> The variable 'nr' is the number of 4KB chunks to map. However, when Linux
>> is using 64KB page granularity the array of pages (vma->vm_private_data)
>> contains one page per 64KB. Fix it by incrementing st->index correctly.
>>
>> Furthermore, st->va is not correctly incremented as PAGE_SIZE !=
>> XEN_PAGE_SIZE.
>>
>> Fixes: 5995a68 ("xen/privcmd: Add support for Linux 64KB page granularity")
>> CC: stable@vger.kernel.org
>> Reported-by: Feng Kan <fkan@apm.com>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>> ---
>>  drivers/xen/privcmd.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>> index 7a92a5e1d40c..feca75b07fdd 100644
>> --- a/drivers/xen/privcmd.c
>> +++ b/drivers/xen/privcmd.c
>> @@ -362,8 +362,8 @@ static int mmap_batch_fn(void *data, int nr, void *state)
>>  				st->global_error = 1;
>>  		}
>>  	}
>> -	st->va += PAGE_SIZE * nr;
>> -	st->index += nr;
>> +	st->va += XEN_PAGE_SIZE * nr;
>> +	st->index += nr / XEN_PFN_PER_PAGE;
>>
>>  	return 0;
>>  }
>
>
> Are we still using PAGE_MASK for xen_remap_domain_gfn_array()?

Do you mean in the xen_xlate_remap_gfn_array implementation? If so,
there is no use of PAGE_MASK there, as the code has been converted to
support 64K page granularity.

If you mean the x86 version of xen_remap_domain_gfn_array, then we don't
really care, as x86 only uses 4KB page granularity.
Cheers,

-- 
Julien Grall
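
The x86 point is easy to check numerically: with 4KB kernel pages the two
granularities coincide, so the two masks are bit-for-bit identical and the
choice cannot matter there. A trivial standalone check (assumed constants,
not kernel headers):

#include <assert.h>

#define XEN_PAGE_SIZE 4096UL              /* Xen ABI granule */
#define PAGE_SIZE     4096UL              /* x86 kernel page */
#define XEN_PAGE_MASK (~(XEN_PAGE_SIZE - 1))
#define PAGE_MASK     (~(PAGE_SIZE - 1))

int main(void)
{
        /* On x86, masking with either constant strips the same low 12
         * bits, so PAGE_MASK vs XEN_PAGE_MASK only matters on arm64. */
        assert(PAGE_MASK == XEN_PAGE_MASK);
        return 0;
}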

* Re: [PATCH] xen/privcmd: Support correctly 64KB page granularity when mapping memory
  2017-06-01 12:50   ` Julien Grall
  2017-06-01 13:33     ` Boris Ostrovsky
@ 2017-06-01 13:33     ` Boris Ostrovsky
  2017-06-01 14:01       ` Julien Grall
  2017-06-01 14:01       ` Julien Grall
  1 sibling, 2 replies; 22+ messages in thread
From: Boris Ostrovsky @ 2017-06-01 13:33 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: sstabellini, jgross, linux-kernel, stable, Feng Kan

On 06/01/2017 08:50 AM, Julien Grall wrote:
> Hi Boris,
>
> On 31/05/17 14:54, Boris Ostrovsky wrote:
>> On 05/31/2017 09:03 AM, Julien Grall wrote:
>>> Commit 5995a68 "xen/privcmd: Add support for Linux 64KB page
>>> granularity" did
>>> not go far enough to support 64KB in mmap_batch_fn.
>>>
>>> The variable 'nr' is the number of 4KB chunks to map. However, when
>>> Linux
>>> is using 64KB page granularity the array of pages
>>> (vma->vm_private_data)
>>> contains one page per 64KB. Fix it by incrementing st->index correctly.
>>>
>>> Furthermore, st->va is not correctly incremented as PAGE_SIZE !=
>>> XEN_PAGE_SIZE.
>>>
>>> Fixes: 5995a68 ("xen/privcmd: Add support for Linux 64KB page
>>> granularity")
>>> CC: stable@vger.kernel.org
>>> Reported-by: Feng Kan <fkan@apm.com>
>>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>>> ---
>>>  drivers/xen/privcmd.c | 4 ++--
>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>>> index 7a92a5e1d40c..feca75b07fdd 100644
>>> --- a/drivers/xen/privcmd.c
>>> +++ b/drivers/xen/privcmd.c
>>> @@ -362,8 +362,8 @@ static int mmap_batch_fn(void *data, int nr,
>>> void *state)
>>>                  st->global_error = 1;
>>>          }
>>>      }
>>> -    st->va += PAGE_SIZE * nr;
>>> -    st->index += nr;
>>> +    st->va += XEN_PAGE_SIZE * nr;
>>> +    st->index += nr / XEN_PFN_PER_PAGE;
>>>
>>>      return 0;
>>>  }
>>
>>
>> Are we still using PAGE_MASK for xen_remap_domain_gfn_array()?
>
> Do you mean in the xen_xlate_remap_gfn_array implementation? If so
> there is no use of PAGE_MASK, as the code has been converted to
> support 64K page granularity.
>
> If you mean the x86 version of xen_remap_domain_gfn_array, then we
> don't really care, as x86 only uses 4KB page granularity.


I meant right above the change that you made. Should it also be replaced
with XEN_PAGE_MASK? (Sorry for being unclear.)

 ==>    ret = xen_remap_domain_gfn_array(st->vma, st->va & PAGE_MASK,
                                         gfnp, nr,
                                         (int *)gfnp, st->vma->vm_page_prot,
                                         st->domain, cur_pages);

        /* Adjust the global_error? */
        if (ret != nr) {
                if (ret == -ENOENT)
                        st->global_error = -ENOENT;
                else {
                        /* Record that at least one error has happened. */
                        if (st->global_error == 0)
                                st->global_error = 1;
                }
        }
        st->va += PAGE_SIZE * nr;
        st->index += nr;

* Re: [PATCH] xen/privcmd: Support correctly 64KB page granularity when mapping memory
  2017-06-01 13:33     ` Boris Ostrovsky
@ 2017-06-01 14:01       ` Julien Grall
  2017-06-01 15:16         ` Boris Ostrovsky
  2017-06-01 15:16         ` Boris Ostrovsky
  2017-06-01 14:01       ` Julien Grall
  1 sibling, 2 replies; 22+ messages in thread
From: Julien Grall @ 2017-06-01 14:01 UTC (permalink / raw)
  To: Boris Ostrovsky, xen-devel
  Cc: sstabellini, jgross, linux-kernel, stable, Feng Kan

Hi Boris,

On 01/06/17 14:33, Boris Ostrovsky wrote:
> On 06/01/2017 08:50 AM, Julien Grall wrote:
>> Hi Boris,
>>
>> On 31/05/17 14:54, Boris Ostrovsky wrote:
>>> On 05/31/2017 09:03 AM, Julien Grall wrote:
>>>> Commit 5995a68 "xen/privcmd: Add support for Linux 64KB page
>>>> granularity" did
>>>> not go far enough to support 64KB in mmap_batch_fn.
>>>>
>>>> The variable 'nr' is the number of 4KB chunks to map. However, when
>>>> Linux
>>>> is using 64KB page granularity the array of pages
>>>> (vma->vm_private_data)
>>>> contains one page per 64KB. Fix it by incrementing st->index correctly.
>>>>
>>>> Furthermore, st->va is not correctly incremented as PAGE_SIZE !=
>>>> XEN_PAGE_SIZE.
>>>>
>>>> Fixes: 5995a68 ("xen/privcmd: Add support for Linux 64KB page
>>>> granularity")
>>>> CC: stable@vger.kernel.org
>>>> Reported-by: Feng Kan <fkan@apm.com>
>>>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>>>> ---
>>>>  drivers/xen/privcmd.c | 4 ++--
>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>>>> index 7a92a5e1d40c..feca75b07fdd 100644
>>>> --- a/drivers/xen/privcmd.c
>>>> +++ b/drivers/xen/privcmd.c
>>>> @@ -362,8 +362,8 @@ static int mmap_batch_fn(void *data, int nr,
>>>> void *state)
>>>>                  st->global_error = 1;
>>>>          }
>>>>      }
>>>> -    st->va += PAGE_SIZE * nr;
>>>> -    st->index += nr;
>>>> +    st->va += XEN_PAGE_SIZE * nr;
>>>> +    st->index += nr / XEN_PFN_PER_PAGE;
>>>>
>>>>      return 0;
>>>>  }
>>>
>>>
>>> Are we still using PAGE_MASK for xen_remap_domain_gfn_array()?
>>
>> Do you mean in the xen_xlate_remap_gfn_array implementation? If so
>> there is no use of PAGE_MASK, as the code has been converted to
>> support 64K page granularity.
>>
>> If you mean the x86 version of xen_remap_domain_gfn_array, then we
>> don't really care, as x86 only uses 4KB page granularity.
>
>
> I meant right above the change that you made. Should it also be replaced
> with XEN_PAGE_MASK? (Sorry for being unclear.)

Oh. The code in xen_remap_domain_gfn_array is relying on st->va to be 
page aligned. So I think we want to keep PAGE_MASK here.

Cheers,

-- 
Julien Grall
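
The alignment claim can be made concrete with a small sketch. Assuming a
PAGE_SIZE-aligned starting address (a VMA start is always page aligned)
and intermediate batches that span whole 64KB pages, st->va stays
PAGE_SIZE aligned on every call, so masking with PAGE_MASK is a no-op
rather than a truncation. The batch size of 64 below is an arbitrary
multiple of XEN_PFN_PER_PAGE picked for the illustration:

#include <assert.h>

#define XEN_PAGE_SIZE    4096UL
#define PAGE_SIZE        65536UL
#define PAGE_MASK        (~(PAGE_SIZE - 1))
#define XEN_PFN_PER_PAGE (PAGE_SIZE / XEN_PAGE_SIZE)

int main(void)
{
        unsigned long va = 0x700000UL;  /* PAGE_SIZE-aligned start */
        unsigned long left = 100;       /* 100 4KB frames requested */

        while (left) {
                /* Only the final batch may cover a partial 64KB page. */
                unsigned long nr = left > 64 ? 64 : left;

                /* The va handed down is still PAGE_SIZE aligned, so
                 * (va & PAGE_MASK) == va on every iteration. */
                assert((va & PAGE_MASK) == va);

                va   += XEN_PAGE_SIZE * nr;
                left -= nr;
        }
        return 0;
}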

* Re: [PATCH] xen/privcmd: Support correctly 64KB page granularity when mapping memory
  2017-06-01 14:01       ` Julien Grall
@ 2017-06-01 15:16         ` Boris Ostrovsky
  2017-06-01 15:38           ` Julien Grall
  2017-06-01 15:38           ` Julien Grall
  2017-06-01 15:16         ` Boris Ostrovsky
  1 sibling, 2 replies; 22+ messages in thread
From: Boris Ostrovsky @ 2017-06-01 15:16 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: sstabellini, jgross, linux-kernel, stable, Feng Kan

On 06/01/2017 10:01 AM, Julien Grall wrote:
> Hi Boris,
>
> On 01/06/17 14:33, Boris Ostrovsky wrote:
>> On 06/01/2017 08:50 AM, Julien Grall wrote:
>>> Hi Boris,
>>>
>>> On 31/05/17 14:54, Boris Ostrovsky wrote:
>>>> On 05/31/2017 09:03 AM, Julien Grall wrote:
>>>>> Commit 5995a68 "xen/privcmd: Add support for Linux 64KB page
>>>>> granularity" did
>>>>> not go far enough to support 64KB in mmap_batch_fn.
>>>>>
>>>>> The variable 'nr' is the number of 4KB chunks to map. However, when
>>>>> Linux
>>>>> is using 64KB page granularity the array of pages
>>>>> (vma->vm_private_data)
>>>>> contains one page per 64KB. Fix it by incrementing st->index
>>>>> correctly.
>>>>>
>>>>> Furthermore, st->va is not correctly incremented as PAGE_SIZE !=
>>>>> XEN_PAGE_SIZE.
>>>>>
>>>>> Fixes: 5995a68 ("xen/privcmd: Add support for Linux 64KB page
>>>>> granularity")
>>>>> CC: stable@vger.kernel.org
>>>>> Reported-by: Feng Kan <fkan@apm.com>
>>>>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>>>>> ---
>>>>>  drivers/xen/privcmd.c | 4 ++--
>>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>>>>> index 7a92a5e1d40c..feca75b07fdd 100644
>>>>> --- a/drivers/xen/privcmd.c
>>>>> +++ b/drivers/xen/privcmd.c
>>>>> @@ -362,8 +362,8 @@ static int mmap_batch_fn(void *data, int nr,
>>>>> void *state)
>>>>>                  st->global_error = 1;
>>>>>          }
>>>>>      }
>>>>> -    st->va += PAGE_SIZE * nr;
>>>>> -    st->index += nr;
>>>>> +    st->va += XEN_PAGE_SIZE * nr;
>>>>> +    st->index += nr / XEN_PFN_PER_PAGE;
>>>>>
>>>>>      return 0;
>>>>>  }
>>>>
>>>>
>>>> Are we still using PAGE_MASK for xen_remap_domain_gfn_array()?
>>>
>>> Do you mean in the xen_xlate_remap_gfn_array implementation? If so
>>> there is no use of PAGE_MASK, as the code has been converted to
>>> support 64K page granularity.
>>>
>>> If you mean the x86 version of xen_remap_domain_gfn_array, then we
>>> don't really care, as x86 only uses 4KB page granularity.
>>
>>
>> I meant right above the change that you made. Should it also be replaced
>> with XEN_PAGE_MASK? (Sorry for being unclear.)
>
> Oh. The code in xen_remap_domain_gfn_array is relying on st->va to be
> page aligned. So I think we want to keep PAGE_MASK here.

Does this imply then that 'nr' 4K pages covers an integral number of
PAGE_SIZE pages (i.e. (nr*XEN_PAGE_SIZE) % PAGE_SIZE == 0) and if yes ---
do we test this somewhere? I don't see it.

-boris

* Re: [PATCH] xen/privcmd: Support correctly 64KB page granularity when mapping memory
  2017-06-01 15:16         ` Boris Ostrovsky
  2017-06-01 15:38           ` Julien Grall
@ 2017-06-01 15:38           ` Julien Grall
  2017-06-01 20:41             ` Boris Ostrovsky
  2017-06-01 20:41             ` Boris Ostrovsky
  1 sibling, 2 replies; 22+ messages in thread
From: Julien Grall @ 2017-06-01 15:38 UTC (permalink / raw)
  To: Boris Ostrovsky, xen-devel
  Cc: sstabellini, jgross, linux-kernel, stable, Feng Kan

Hi Boris,

On 01/06/17 16:16, Boris Ostrovsky wrote:
> On 06/01/2017 10:01 AM, Julien Grall wrote:
>> Hi Boris,
>>
>> On 01/06/17 14:33, Boris Ostrovsky wrote:
>>> On 06/01/2017 08:50 AM, Julien Grall wrote:
>>>> Hi Boris,
>>>>
>>>> On 31/05/17 14:54, Boris Ostrovsky wrote:
>>>>> On 05/31/2017 09:03 AM, Julien Grall wrote:
>>>>>> Commit 5995a68 "xen/privcmd: Add support for Linux 64KB page
>>>>>> granularity" did
>>>>>> not go far enough to support 64KB in mmap_batch_fn.
>>>>>>
>>>>>> The variable 'nr' is the number of 4KB chunks to map. However, when
>>>>>> Linux
>>>>>> is using 64KB page granularity the array of pages
>>>>>> (vma->vm_private_data)
>>>>>> contains one page per 64KB. Fix it by incrementing st->index
>>>>>> correctly.
>>>>>>
>>>>>> Furthermore, st->va is not correctly incremented as PAGE_SIZE !=
>>>>>> XEN_PAGE_SIZE.
>>>>>>
>>>>>> Fixes: 5995a68 ("xen/privcmd: Add support for Linux 64KB page
>>>>>> granularity")
>>>>>> CC: stable@vger.kernel.org
>>>>>> Reported-by: Feng Kan <fkan@apm.com>
>>>>>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>>>>>> ---
>>>>>>  drivers/xen/privcmd.c | 4 ++--
>>>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>>>>>> index 7a92a5e1d40c..feca75b07fdd 100644
>>>>>> --- a/drivers/xen/privcmd.c
>>>>>> +++ b/drivers/xen/privcmd.c
>>>>>> @@ -362,8 +362,8 @@ static int mmap_batch_fn(void *data, int nr,
>>>>>> void *state)
>>>>>>                  st->global_error = 1;
>>>>>>          }
>>>>>>      }
>>>>>> -    st->va += PAGE_SIZE * nr;
>>>>>> -    st->index += nr;
>>>>>> +    st->va += XEN_PAGE_SIZE * nr;
>>>>>> +    st->index += nr / XEN_PFN_PER_PAGE;
>>>>>>
>>>>>>      return 0;
>>>>>>  }
>>>>>
>>>>>
>>>>> Are we still using PAGE_MASK for xen_remap_domain_gfn_array()?
>>>>
>>>> Do you mean in the xen_xlate_remap_gfn_array implementation? If so
>>>> there is no use of PAGE_MASK, as the code has been converted to
>>>> support 64K page granularity.
>>>>
>>>> If you mean the x86 version of xen_remap_domain_gfn_array, then we
>>>> don't really care, as x86 only uses 4KB page granularity.
>>>
>>>
>>> I meant right above the change that you made. Should it also be replaced
>>> with XEN_PAGE_MASK? (Sorry for being unclear.)
>>
>> Oh. The code in xen_remap_domain_gfn_array is relying on st->va to be
>> page aligned. So I think we want to keep PAGE_MASK here.
>
> Does this imply then that 'nr' 4K pages covers an integral number of
> PAGE_SIZE pages (i.e. (nr*XEN_PAGE_SIZE) % PAGE_SIZE == 0) and if yes ---
> do we test this somewhere? I don't see it.

nr might be smaller for the last batch. But all the intermediate batches
should have ((nr * XEN_PAGE_SIZE) % PAGE_SIZE == 0).

I think the BUILD_BUG_ON in privcmd_ioctl_mmap_batch ensures that every
intermediate batch will always cover an integral number of PAGE_SIZE pages.

Cheers,

-- 
Julien Grall
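
The divisibility argument can be spelled out in the same spirit, as a
standalone sketch rather than a copy of the kernel's actual check: if
each intermediate batch is one full page's worth of gather-array entries,
i.e. PAGE_SIZE / sizeof(xen_pfn_t) gfns, then a compile-time assertion
that this count divides evenly by XEN_PFN_PER_PAGE guarantees
(nr * XEN_PAGE_SIZE) % PAGE_SIZE == 0 for every batch but the last:

#include <stdint.h>

typedef uint64_t xen_pfn_t;              /* assumed 64-bit gfn type */

#define XEN_PAGE_SIZE    4096UL
#define PAGE_SIZE        65536UL
#define XEN_PFN_PER_PAGE (PAGE_SIZE / XEN_PAGE_SIZE)

/* Minimal stand-in for the kernel's BUILD_BUG_ON. */
#define BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

int main(void)
{
        /* 65536 / 8 = 8192 entries per page; 8192 % 16 == 0, so each
         * full-page batch covers a whole number of 64KB pages. */
        BUILD_BUG_ON((PAGE_SIZE / sizeof(xen_pfn_t)) % XEN_PFN_PER_PAGE != 0);
        return 0;
}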

* Re: [PATCH] xen/privcmd: Support correctly 64KB page granularity when mapping memory
  2017-06-01 15:38           ` Julien Grall
@ 2017-06-01 20:41             ` Boris Ostrovsky
  2017-06-01 21:04               ` Julien Grall
                                 ` (3 more replies)
  2017-06-01 20:41             ` Boris Ostrovsky
  1 sibling, 4 replies; 22+ messages in thread
From: Boris Ostrovsky @ 2017-06-01 20:41 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: sstabellini, jgross, linux-kernel, stable, Feng Kan

On 06/01/2017 11:38 AM, Julien Grall wrote:
> Hi Boris,
>
> On 01/06/17 16:16, Boris Ostrovsky wrote:
>> On 06/01/2017 10:01 AM, Julien Grall wrote:
>>> Hi Boris,
>>>
>>> On 01/06/17 14:33, Boris Ostrovsky wrote:
>>>> On 06/01/2017 08:50 AM, Julien Grall wrote:
>>>>> Hi Boris,
>>>>>
>>>>> On 31/05/17 14:54, Boris Ostrovsky wrote:
>>>>>> On 05/31/2017 09:03 AM, Julien Grall wrote:
>>>>>>> Commit 5995a68 "xen/privcmd: Add support for Linux 64KB page
>>>>>>> granularity" did
>>>>>>> not go far enough to support 64KB in mmap_batch_fn.
>>>>>>>
>>>>>>> The variable 'nr' is the number of 4KB chunks to map. However, when
>>>>>>> Linux
>>>>>>> is using 64KB page granularity the array of pages
>>>>>>> (vma->vm_private_data)
>>>>>>> contains one page per 64KB. Fix it by incrementing st->index
>>>>>>> correctly.
>>>>>>>
>>>>>>> Furthermore, st->va is not correctly incremented as PAGE_SIZE !=
>>>>>>> XEN_PAGE_SIZE.
>>>>>>>
>>>>>>> Fixes: 5995a68 ("xen/privcmd: Add support for Linux 64KB page
>>>>>>> granularity")
>>>>>>> CC: stable@vger.kernel.org
>>>>>>> Reported-by: Feng Kan <fkan@apm.com>
>>>>>>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>>>>>>> ---
>>>>>>>  drivers/xen/privcmd.c | 4 ++--
>>>>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>>>
>>>>>>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>>>>>>> index 7a92a5e1d40c..feca75b07fdd 100644
>>>>>>> --- a/drivers/xen/privcmd.c
>>>>>>> +++ b/drivers/xen/privcmd.c
>>>>>>> @@ -362,8 +362,8 @@ static int mmap_batch_fn(void *data, int nr,
>>>>>>> void *state)
>>>>>>>                  st->global_error = 1;
>>>>>>>          }
>>>>>>>      }
>>>>>>> -    st->va += PAGE_SIZE * nr;
>>>>>>> -    st->index += nr;
>>>>>>> +    st->va += XEN_PAGE_SIZE * nr;
>>>>>>> +    st->index += nr / XEN_PFN_PER_PAGE;
>>>>>>>
>>>>>>>      return 0;
>>>>>>>  }
>>>>>>
>>>>>>
>>>>>> Are we still using PAGE_MASK for xen_remap_domain_gfn_array()?
>>>>>
>>>>> Do you mean in the xen_xlate_remap_gfn_array implementation? If so
>>>>> there is no use of PAGE_MASK, as the code has been converted to
>>>>> support 64K page granularity.
>>>>>
>>>>> If you mean the x86 version of xen_remap_domain_gfn_array, then we
>>>>> don't really care, as x86 only uses 4KB page granularity.
>>>>
>>>>
>>>> I meant right above the change that you made. Should it also be
>>>> replaced
>>>> with XEN_PAGE_MASK? (Sorry for being unclear.)
>>>
>>> Oh. The code in xen_remap_domain_gfn_array is relying on st->va to be
>>> page aligned. So I think we want to keep PAGE_MASK here.
>>
>> Does this imply then that 'nr' 4K pages covers an integral number of
>> PAGE_SIZE pages (i.e. (nr*XEN_PAGE_SIZE) % PAGE_SIZE == 0) and if yes ---
>> do we test this somewhere? I don't see it.
>

I now see that this should (obviously) stay as PAGE_MASK, so

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

but

> nr might be smaller for the last batch. But all the intermediate batches
> should have ((nr * XEN_PAGE_SIZE) % PAGE_SIZE == 0).

how can we have nr not covering full PAGE_SIZEs? If you are using 64K
pages, how can you map, say, only 4K (if nr==1)?

-boris

>
> I think the BUILD_BUG_ON in privcmd_ioctl_mmap_batch ensures that every
> intermediate batch will always cover an integral number of PAGE_SIZE pages.
>
> Cheers,
>

* Re: [PATCH] xen/privcmd: Support correctly 64KB page granularity when mapping memory
  2017-06-01 20:41             ` Boris Ostrovsky
  2017-06-01 21:04               ` Julien Grall
@ 2017-06-01 21:04               ` Julien Grall
  2017-06-06 16:17               ` Julien Grall
  2017-06-06 16:17               ` Julien Grall
  3 siblings, 0 replies; 22+ messages in thread
From: Julien Grall @ 2017-06-01 21:04 UTC (permalink / raw)
  To: Boris Ostrovsky, xen-devel
  Cc: nd, sstabellini, jgross, linux-kernel, stable, Feng Kan



On 01/06/2017 21:41, Boris Ostrovsky wrote:
> On 06/01/2017 11:38 AM, Julien Grall wrote:
>> Hi Boris,
>>
>> On 01/06/17 16:16, Boris Ostrovsky wrote:
>>> On 06/01/2017 10:01 AM, Julien Grall wrote:
>>>> Hi Boris,
>>>>
>>>> On 01/06/17 14:33, Boris Ostrovsky wrote:
>>>>> On 06/01/2017 08:50 AM, Julien Grall wrote:
>>>>>> Hi Boris,
>>>>>>
>>>>>> On 31/05/17 14:54, Boris Ostrovsky wrote:
>>>>>>> On 05/31/2017 09:03 AM, Julien Grall wrote:
>>>>>>>> Commit 5995a68 "xen/privcmd: Add support for Linux 64KB page
>>>>>>>> granularity" did
>>>>>>>> not go far enough to support 64KB in mmap_batch_fn.
>>>>>>>>
>>>>>>>> The variable 'nr' is the number of 4KB chunks to map. However, when
>>>>>>>> Linux
>>>>>>>> is using 64KB page granularity the array of pages
>>>>>>>> (vma->vm_private_data)
>>>>>>>> contains one page per 64KB. Fix it by incrementing st->index
>>>>>>>> correctly.
>>>>>>>>
>>>>>>>> Furthermore, st->va is not correctly incremented as PAGE_SIZE !=
>>>>>>>> XEN_PAGE_SIZE.
>>>>>>>>
>>>>>>>> Fixes: 5995a68 ("xen/privcmd: Add support for Linux 64KB page
>>>>>>>> granularity")
>>>>>>>> CC: stable@vger.kernel.org
>>>>>>>> Reported-by: Feng Kan <fkan@apm.com>
>>>>>>>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>>>>>>>> ---
>>>>>>>>  drivers/xen/privcmd.c | 4 ++--
>>>>>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>>>>>>>> index 7a92a5e1d40c..feca75b07fdd 100644
>>>>>>>> --- a/drivers/xen/privcmd.c
>>>>>>>> +++ b/drivers/xen/privcmd.c
>>>>>>>> @@ -362,8 +362,8 @@ static int mmap_batch_fn(void *data, int nr,
>>>>>>>> void *state)
>>>>>>>>                  st->global_error = 1;
>>>>>>>>          }
>>>>>>>>      }
>>>>>>>> -    st->va += PAGE_SIZE * nr;
>>>>>>>> -    st->index += nr;
>>>>>>>> +    st->va += XEN_PAGE_SIZE * nr;
>>>>>>>> +    st->index += nr / XEN_PFN_PER_PAGE;
>>>>>>>>
>>>>>>>>      return 0;
>>>>>>>>  }
>>>>>>>
>>>>>>>
>>>>>>> Are we still using PAGE_MASK for xen_remap_domain_gfn_array()?
>>>>>>
>>>>>> Do you mean in the xen_xlate_remap_gfn_array implementation? If so
>>>>>> there is no use of PAGE_MASK, as the code has been converted to
>>>>>> support 64K page granularity.
>>>>>>
>>>>>> If you mean the x86 version of xen_remap_domain_gfn_array, then we
>>>>>> don't really care, as x86 only uses 4KB page granularity.
>>>>>
>>>>>
>>>>> I meant right above the change that you made. Should it also be
>>>>> replaced
>>>>> with XEN_PAGE_MASK? (Sorry for being unclear.)
>>>>
>>>> Oh. The code in xen_remap_domain_gfn_array is relying on st->va to be
>>>> page aligned. So I think we want to keep PAGE_MASK here.
>>>
>>> Does this imply then that 'nr' 4K pages covers an integral number of
>>> PAGE_SIZE pages (i.e. (nr*XEN_PAGE_SIZE) % PAGE_SIZE == 0) and if yes ---
>>> do we test this somewhere? I don't see it.
>>
>
> I now see that this should (obviously) stay as PAGE_MASK, so
>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Thank you!

>
> but
>
>> nr might be smaller for the last batch. But all the intermediate batches
>> should have ((nr * XEN_PAGE_SIZE) % PAGE_SIZE == 0).
>
> how can we have nr not covering full PAGE_SIZEs? If you are using 64K
> pages, how can you map, say, only 4K (if nr==1)?

Xen is using 4K page granularity, and so do the toolstack and the
hypercall interface. They are unaware of the page size of the kernel.

So even if Linux is using 64K pages, the resulting mapping will be
broken down into 4K chunks to issue the hypercall.

To give you a concrete example, if the toolstack requests to map 4K,
Linux will allocate a 64K page. Only the first 4K chunk (0-4K) will
have an effective mapping to host RAM. The rest (4-64K) will not be mapped.

Cheers,

-- 
Julien Grall
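
Julien's concrete example, rendered as a standalone sketch (the loop is
illustrative only; 'requested' is a hypothetical one-frame toolstack
request, not a driver variable):

#include <stdio.h>

#define XEN_PAGE_SIZE    4096UL
#define PAGE_SIZE        65536UL
#define XEN_PFN_PER_PAGE (PAGE_SIZE / XEN_PAGE_SIZE)

int main(void)
{
        unsigned long requested = 1;    /* toolstack asks for one 4KB frame */
        unsigned long chunk;

        /* One whole 64KB kernel page backs the request; only the first
         * 4KB chunk of it gets an effective mapping to host RAM. */
        for (chunk = 0; chunk < XEN_PFN_PER_PAGE; chunk++)
                printf("4K chunk %2lu (%3luK-%3luK): %s\n",
                       chunk, chunk * 4, (chunk + 1) * 4,
                       chunk < requested ? "mapped" : "not mapped");
        return 0;
}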

* Re: [PATCH] xen/privcmd: Support correctly 64KB page granularity when mapping memory
  2017-06-01 20:41             ` Boris Ostrovsky
                                 ` (2 preceding siblings ...)
  2017-06-06 16:17               ` Julien Grall
@ 2017-06-06 16:17               ` Julien Grall
  3 siblings, 0 replies; 22+ messages in thread
From: Julien Grall @ 2017-06-06 16:17 UTC (permalink / raw)
  To: Boris Ostrovsky, xen-devel
  Cc: sstabellini, jgross, linux-kernel, stable, Feng Kan

Hi,

It has a Reviewed-by from Boris, but I don't see the patch queued. Would
it be possible to queue it for 4.12?

Cheers,

On 01/06/17 21:41, Boris Ostrovsky wrote:
> On 06/01/2017 11:38 AM, Julien Grall wrote:
>> Hi Boris,
>>
>> On 01/06/17 16:16, Boris Ostrovsky wrote:
>>> On 06/01/2017 10:01 AM, Julien Grall wrote:
>>>> Hi Boris,
>>>>
>>>> On 01/06/17 14:33, Boris Ostrovsky wrote:
>>>>> On 06/01/2017 08:50 AM, Julien Grall wrote:
>>>>>> Hi Boris,
>>>>>>
>>>>>> On 31/05/17 14:54, Boris Ostrovsky wrote:
>>>>>>> On 05/31/2017 09:03 AM, Julien Grall wrote:
>>>>>>>> Commit 5995a68 "xen/privcmd: Add support for Linux 64KB page
>>>>>>>> granularity" did
>>>>>>>> not go far enough to support 64KB in mmap_batch_fn.
>>>>>>>>
>>>>>>>> The variable 'nr' is the number of 4KB chunks to map. However, when
>>>>>>>> Linux
>>>>>>>> is using 64KB page granularity the array of pages
>>>>>>>> (vma->vm_private_data)
>>>>>>>> contains one page per 64KB. Fix it by incrementing st->index
>>>>>>>> correctly.
>>>>>>>>
>>>>>>>> Furthermore, st->va is not correctly incremented as PAGE_SIZE !=
>>>>>>>> XEN_PAGE_SIZE.
>>>>>>>>
>>>>>>>> Fixes: 5995a68 ("xen/privcmd: Add support for Linux 64KB page
>>>>>>>> granularity")
>>>>>>>> CC: stable@vger.kernel.org
>>>>>>>> Reported-by: Feng Kan <fkan@apm.com>
>>>>>>>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>>>>>>>> ---
>>>>>>>>  drivers/xen/privcmd.c | 4 ++--
>>>>>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>>>>>>>> index 7a92a5e1d40c..feca75b07fdd 100644
>>>>>>>> --- a/drivers/xen/privcmd.c
>>>>>>>> +++ b/drivers/xen/privcmd.c
>>>>>>>> @@ -362,8 +362,8 @@ static int mmap_batch_fn(void *data, int nr,
>>>>>>>> void *state)
>>>>>>>>                  st->global_error = 1;
>>>>>>>>          }
>>>>>>>>      }
>>>>>>>> -    st->va += PAGE_SIZE * nr;
>>>>>>>> -    st->index += nr;
>>>>>>>> +    st->va += XEN_PAGE_SIZE * nr;
>>>>>>>> +    st->index += nr / XEN_PFN_PER_PAGE;
>>>>>>>>
>>>>>>>>      return 0;
>>>>>>>>  }
>>>>>>>
>>>>>>>
>>>>>>> Are we still using PAGE_MASK for xen_remap_domain_gfn_array()?
>>>>>>
>>>>>> Do you mean in the xen_xlate_remap_gfn_array implementation? If so
>>>>>> there is no use of PAGE_MASK, as the code has been converted to
>>>>>> support 64K page granularity.
>>>>>>
>>>>>> If you mean the x86 version of xen_remap_domain_gfn_array, then we
>>>>>> don't really care, as x86 only uses 4KB page granularity.
>>>>>
>>>>>
>>>>> I meant right above the change that you made. Should it also be
>>>>> replaced
>>>>> with XEN_PAGE_MASK? (Sorry for being unclear.)
>>>>
>>>> Oh. The code in xen_remap_domain_gfn_array is relying on st->va to be
>>>> page aligned. So I think we want to keep PAGE_MASK here.
>>>
>>> Does this imply then that 'nr' 4K pages covers an integral number of
>>> PAGE_SIZE pages (i.e. (nr*XEN_PAGE_SIZE) % PAGE_SIZE == 0) and if yes ---
>>> do we test this somewhere? I don't see it.
>>
>
> I now see that this should (obviously) stay as PAGE_MASK, so
>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>
> but
>
>> nr might be smaller for the last batch. But all the intermediate batches
>> should have ((nr * XEN_PAGE_SIZE) % PAGE_SIZE == 0).
>
> how can we have nr not covering full PAGE_SIZEs? If you are using 64K
> pages, how can you map, say, only 4K (if nr==1)?
>
> -boris
>
>>
>> I think the BUILD_BUG_ON in privcmd_ioctl_mmap_batch ensures that every
>> intermediate batch will always cover an integral number of PAGE_SIZE pages.
>>
>> Cheers,
>>
>

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH] xen/privcmd: Support correctly 64KB page granularity when mapping memory
  2017-06-01 20:41             ` Boris Ostrovsky
  2017-06-01 21:04               ` Julien Grall
  2017-06-01 21:04               ` Julien Grall
@ 2017-06-06 16:17               ` Julien Grall
  2017-06-06 16:17               ` Julien Grall
  3 siblings, 0 replies; 22+ messages in thread
From: Julien Grall @ 2017-06-06 16:17 UTC (permalink / raw)
  To: Boris Ostrovsky, xen-devel
  Cc: jgross, Feng Kan, sstabellini, linux-kernel, stable

Hi,

It has been reviewed-by Boris but I don't see the patch queued. Would it 
be possible to queue it for 4.12?

Cheers,

On 01/06/17 21:41, Boris Ostrovsky wrote:
> On 06/01/2017 11:38 AM, Julien Grall wrote:
>> Hi Boris,
>>
>> On 01/06/17 16:16, Boris Ostrovsky wrote:
>>> On 06/01/2017 10:01 AM, Julien Grall wrote:
>>>> Hi Boris,
>>>>
>>>> On 01/06/17 14:33, Boris Ostrovsky wrote:
>>>>> On 06/01/2017 08:50 AM, Julien Grall wrote:
>>>>>> Hi Boris,
>>>>>>
>>>>>> On 31/05/17 14:54, Boris Ostrovsky wrote:
>>>>>>> On 05/31/2017 09:03 AM, Julien Grall wrote:
>>>>>>>> Commit 5995a68 "xen/privcmd: Add support for Linux 64KB page
>>>>>>>> granularity" did
>>>>>>>> not go far enough to support 64KB in mmap_batch_fn.
>>>>>>>>
>>>>>>>> The variable 'nr' is the number of 4KB chunks to map. However,
>>>>>>>> when Linux is using 64KB page granularity, the array of pages
>>>>>>>> (vma->vm_private_data) contains one page per 64KB. Fix it by
>>>>>>>> incrementing st->index correctly.
>>>>>>>>
>>>>>>>> Furthermore, st->va is not correctly incremented as PAGE_SIZE !=
>>>>>>>> XEN_PAGE_SIZE.
>>>>>>>>
>>>>>>>> Fixes: 5995a68 ("xen/privcmd: Add support for Linux 64KB page
>>>>>>>> granularity")
>>>>>>>> CC: stable@vger.kernel.org
>>>>>>>> Reported-by: Feng Kan <fkan@apm.com>
>>>>>>>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>>>>>>>> ---
>>>>>>>>  drivers/xen/privcmd.c | 4 ++--
>>>>>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>>>>>>>> index 7a92a5e1d40c..feca75b07fdd 100644
>>>>>>>> --- a/drivers/xen/privcmd.c
>>>>>>>> +++ b/drivers/xen/privcmd.c
>>>>>>>> @@ -362,8 +362,8 @@ static int mmap_batch_fn(void *data, int nr, void *state)
>>>>>>>>                  st->global_error = 1;
>>>>>>>>          }
>>>>>>>>      }
>>>>>>>> -    st->va += PAGE_SIZE * nr;
>>>>>>>> -    st->index += nr;
>>>>>>>> +    st->va += XEN_PAGE_SIZE * nr;
>>>>>>>> +    st->index += nr / XEN_PFN_PER_PAGE;
>>>>>>>>
>>>>>>>>      return 0;
>>>>>>>>  }
>>>>>>>
>>>>>>>
>>>>>>> Are we still using PAGE_MASK for xen_remap_domain_gfn_array()?
>>>>>>
>>>>>> Do you mean in the xen_xlate_remap_gfn_array implementation? If so,
>>>>>> there is no use of PAGE_MASK, as the code has been converted to
>>>>>> support 64KB page granularity.
>>>>>>
>>>>>> If you mean the x86 version of xen_remap_domain_gfn_array, then we
>>>>>> don't really care, as x86 only uses 4KB page granularity.
>>>>>
>>>>>
>>>>> I meant right above the change that you made. Should it also be
>>>>> replaced
>>>>> with XEN_PAGE_MASK? (Sorry for being unclear.)
>>>>
>>>> Oh. The code in xen_remap_domain_gfn_array relies on st->va being
>>>> page-aligned, so I think we want to keep PAGE_MASK here.
>>>
>>> Does this imply then that 'nr' 4KB pages is an integral number of
>>> PAGE_SIZEs (i.e. (nr * XEN_PAGE_SIZE) % PAGE_SIZE == 0), and if yes, do
>>> we test this somewhere? I don't see it.
>>
>
> I now see that this should (obviously) stay as PAGE_MASK, so
>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>
> but
>
>> nr might be smaller for the last batch. But all the intermediate batches
>> should have ((nr * XEN_PAGE_SIZE) % PAGE_SIZE == 0).
>
> how can we have nr not covering full PAGE_SIZEs? If you are using 64K
> pages, how can you map, say, only 4K (if nr==1)?
>
> -boris
>
>>
>> I think the BUILD_BUG_ON in privcmd_ioctl_mmap_batch ensures that all
>> the intermediate batches will always cover an integral number of
>> PAGE_SIZE pages.
>>
>> Cheers,
>>
>
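
To make the alignment argument concrete: a small sketch of the invariant
discussed above, assuming xen_pfn_t is 8 bytes and that the GFNs to map are
gathered into page-sized buffers before being fed to mmap_batch_fn (both
are assumptions here; the exact BUILD_BUG_ON condition lives in the driver
source and should be checked there).

#include <assert.h>

#define XEN_PAGE_SIZE 4096UL
#define PAGE_SIZE     65536UL  /* assumed: 64KB-page kernel */

int main(void)
{
	/* Assumed layout: a full batch carries one page's worth of 8-byte
	 * xen_pfn_t entries, so nr == PAGE_SIZE / 8 == 8192; only the
	 * final batch can be shorter. */
	unsigned long full_batch_nr = PAGE_SIZE / 8;

	/* The invariant the thread relies on: a full batch spans a whole
	 * number of 64KB pages (8192 * 4KB == 512 * 64KB), so st->va
	 * stays PAGE_SIZE-aligned and PAGE_MASK remains correct. */
	assert((full_batch_nr * XEN_PAGE_SIZE) % PAGE_SIZE == 0);
	return 0;
}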

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH] xen/privcmd: Support correctly 64KB page granularity when mapping memory
  2017-05-31 13:03 [PATCH] xen/privcmd: Support correctly 64KB page granularity when mapping memory Julien Grall
                   ` (2 preceding siblings ...)
  2017-06-07  9:25 ` Juergen Gross
@ 2017-06-07  9:25 ` Juergen Gross
  3 siblings, 0 replies; 22+ messages in thread
From: Juergen Gross @ 2017-06-07  9:25 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: sstabellini, boris.ostrovsky, linux-kernel, stable, Feng Kan

On 31/05/17 15:03, Julien Grall wrote:
> Commit 5995a68 "xen/privcmd: Add support for Linux 64KB page granularity" did
> not go far enough to support 64KB in mmap_batch_fn.
> 
> The variable 'nr' is the number of 4KB chunks to map. However, when Linux
> is using 64KB page granularity, the array of pages (vma->vm_private_data)
> contains one page per 64KB. Fix it by incrementing st->index correctly.
> 
> Furthermore, st->va is not correctly incremented as PAGE_SIZE !=
> XEN_PAGE_SIZE.
> 
> Fixes: 5995a68 ("xen/privcmd: Add support for Linux 64KB page granularity")
> CC: stable@vger.kernel.org
> Reported-by: Feng Kan <fkan@apm.com>
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Committed to xen.tip for-linus-4.12b


Juergen

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2017-06-07  9:25 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-05-31 13:03 [PATCH] xen/privcmd: Support correctly 64KB page granularity when mapping memory Julien Grall
2017-05-31 13:54 ` Boris Ostrovsky
2017-06-01 12:50   ` Julien Grall
2017-06-01 13:33     ` Boris Ostrovsky
2017-06-01 13:33     ` Boris Ostrovsky
2017-06-01 14:01       ` Julien Grall
2017-06-01 15:16         ` Boris Ostrovsky
2017-06-01 15:38           ` Julien Grall
2017-06-01 15:38           ` Julien Grall
2017-06-01 20:41             ` Boris Ostrovsky
2017-06-01 21:04               ` Julien Grall
2017-06-01 21:04               ` Julien Grall
2017-06-06 16:17               ` Julien Grall
2017-06-06 16:17               ` Julien Grall
2017-06-01 20:41             ` Boris Ostrovsky
2017-06-01 15:16         ` Boris Ostrovsky
2017-06-01 14:01       ` Julien Grall
2017-06-01 12:50   ` Julien Grall
2017-05-31 13:54 ` Boris Ostrovsky
2017-06-07  9:25 ` Juergen Gross
