* [PATCH 0/4] a few cleanups and bugfixes for shmem
@ 2022-06-01 12:44 Chen Wandun
  2022-06-01 12:44 ` [PATCH 1/4] mm/shmem: check return value of shmem_init_inodecache Chen Wandun
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Chen Wandun @ 2022-06-01 12:44 UTC (permalink / raw)
  To: hughd, akpm, linux-mm, linux-kernel; +Cc: chenwandun

Chen Wandun (4):
  mm/shmem: check return value of shmem_init_inodecache
  mm/shmem: return -EINVAL for addr not PAGE_SIZE aligned
  mm/shmem: return error code directly
  mm/shmem: rework calculation of inflated_addr in
    shmem_get_unmapped_area

 mm/shmem.c | 24 ++++++++++++++++--------
 1 file changed, 16 insertions(+), 8 deletions(-)

-- 
2.25.1



* [PATCH 1/4] mm/shmem: check return value of shmem_init_inodecache
  2022-06-01 12:44 [PATCH 0/4] a few cleanups and bugfixes for shmem Chen Wandun
@ 2022-06-01 12:44 ` Chen Wandun
  2022-06-01 12:54   ` Matthew Wilcox
  2022-06-01 12:44 ` [PATCH 2/4] mm/shmem: return -EINVAL for addr not PAGE_SIZE aligned Chen Wandun
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 10+ messages in thread
From: Chen Wandun @ 2022-06-01 12:44 UTC (permalink / raw)
  To: hughd, akpm, linux-mm, linux-kernel; +Cc: chenwandun

It will result in a null pointer dereference if shmem_init_inodecache()
fails, so check the return value of shmem_init_inodecache().
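
Without the check, if kmem_cache_create() returned NULL, shmem_inode_cachep
would stay NULL and the first shmem inode allocation would dereference the
NULL cache. As a rough sketch (assuming the usual inode allocation helper,
not an exact quote of mm/shmem.c):

	static struct inode *shmem_alloc_inode(struct super_block *sb)
	{
		struct shmem_inode_info *info;

		/* shmem_inode_cachep would be NULL here if cache creation failed */
		info = alloc_inode_sb(sb, shmem_inode_cachep, GFP_KERNEL);
		if (!info)
			return NULL;
		return &info->vfs_inode;
	}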

Signed-off-by: Chen Wandun <chenwandun@huawei.com>
---
 mm/shmem.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index d55dd972023a..80c361c3d82c 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -3776,11 +3776,13 @@ static void shmem_init_inode(void *foo)
 	inode_init_once(&info->vfs_inode);
 }
 
-static void shmem_init_inodecache(void)
+static struct kmem_cache *shmem_init_inodecache(void)
 {
 	shmem_inode_cachep = kmem_cache_create("shmem_inode_cache",
 				sizeof(struct shmem_inode_info),
 				0, SLAB_PANIC|SLAB_ACCOUNT, shmem_init_inode);
+
+	return shmem_inode_cachep;
 }
 
 static void shmem_destroy_inodecache(void)
@@ -3924,7 +3926,10 @@ void __init shmem_init(void)
 {
 	int error;
 
-	shmem_init_inodecache();
+	if (!shmem_init_inodecache()) {
+		error = -ENOMEM;
+		goto out2;
+	}
 
 	error = register_filesystem(&shmem_fs_type);
 	if (error) {
-- 
2.25.1



* [PATCH 2/4] mm/shmem: return -EINVAL for addr not PAGE_SIZE aligned
  2022-06-01 12:44 [PATCH 0/4] a few cleanups and bugfixes for shmem Chen Wandun
  2022-06-01 12:44 ` [PATCH 1/4] mm/shmem: check return value of shmem_init_inodecache Chen Wandun
@ 2022-06-01 12:44 ` Chen Wandun
  2022-06-01 13:37   ` David Hildenbrand
  2022-06-01 12:44 ` [PATCH 3/4] mm/shmem: return error code directly Chen Wandun
  2022-06-01 12:44 ` [PATCH 4/4] mm/shmem: rework calculation of inflated_addr in shmem_get_unmapped_area Chen Wandun
  3 siblings, 1 reply; 10+ messages in thread
From: Chen Wandun @ 2022-06-01 12:44 UTC (permalink / raw)
  To: hughd, akpm, linux-mm, linux-kernel; +Cc: chenwandun

If addr is not PAGE_SIZE aligned, return -EINVAL directly.
In addition, use the offset_in_page() macro to check whether
addr is PAGE_SIZE aligned.
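
For reference, offset_in_page() is conventionally defined in
include/linux/mm.h as the same bit test the old code spelled out by hand,
so only the return value changes (shown here as an illustrative sketch):

	/* common definition, for illustration */
	#define offset_in_page(p)	((unsigned long)(p) & ~PAGE_MASK)

	if (offset_in_page(addr))	/* same predicate as (addr & ~PAGE_MASK) */
		return -EINVAL;		/* now reports the error explicitly */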

Signed-off-by: Chen Wandun <chenwandun@huawei.com>
---
 mm/shmem.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 80c361c3d82c..1136dd7da9e5 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2143,8 +2143,8 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 		return addr;
 	if (IS_ERR_VALUE(addr))
 		return addr;
-	if (addr & ~PAGE_MASK)
-		return addr;
+	if (offset_in_page(addr))
+		return -EINVAL;
 	if (addr > TASK_SIZE - len)
 		return addr;
 
@@ -2197,7 +2197,7 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 	inflated_addr = get_area(NULL, uaddr, inflated_len, 0, flags);
 	if (IS_ERR_VALUE(inflated_addr))
 		return addr;
-	if (inflated_addr & ~PAGE_MASK)
+	if (offset_in_page(inflated_addr))
 		return addr;
 
 	inflated_offset = inflated_addr & (HPAGE_PMD_SIZE-1);
-- 
2.25.1



* [PATCH 3/4] mm/shmem: return error code directly
  2022-06-01 12:44 [PATCH 0/4] a few cleanups and bugfixes for shmem Chen Wandun
  2022-06-01 12:44 ` [PATCH 1/4] mm/shmem: check return value of shmem_init_inodecache Chen Wandun
  2022-06-01 12:44 ` [PATCH 2/4] mm/shmem: return -EINVAL for addr not PAGE_SIZE aligned Chen Wandun
@ 2022-06-01 12:44 ` Chen Wandun
  2022-06-01 12:44 ` [PATCH 4/4] mm/shmem: rework calculation of inflated_addr in shmem_get_unmapped_area Chen Wandun
  3 siblings, 0 replies; 10+ messages in thread
From: Chen Wandun @ 2022-06-01 12:44 UTC (permalink / raw)
  To: hughd, akpm, linux-mm, linux-kernel; +Cc: chenwandun

If [addr, addr + len) extends beyond TASK_SIZE, return an error code
directly. There is no need to check this case in the caller.
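
For context, the generic caller already screens the hook's return value
with IS_ERR_VALUE(), so a negative errno propagates naturally; roughly
(a sketch of the get_unmapped_area() caller, not an exact quote):

	addr = get_area(file, addr, len, pgoff, flags);
	if (IS_ERR_VALUE(addr))		/* catches -ENOMEM, -EINVAL, ... */
		return addr;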

Signed-off-by: Chen Wandun <chenwandun@huawei.com>
---
 mm/shmem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 1136dd7da9e5..ca04f3975a8a 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2146,7 +2146,7 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 	if (offset_in_page(addr))
 		return -EINVAL;
 	if (addr > TASK_SIZE - len)
-		return addr;
+		return -ENOMEM;
 
 	if (shmem_huge == SHMEM_HUGE_DENY)
 		return addr;
-- 
2.25.1



* [PATCH 4/4] mm/shmem: rework calculation of inflated_addr in shmem_get_unmapped_area
  2022-06-01 12:44 [PATCH 0/4] a few cleanups and bugfixes for shmem Chen Wandun
                   ` (2 preceding siblings ...)
  2022-06-01 12:44 ` [PATCH 3/4] mm/shmem: return error code directly Chen Wandun
@ 2022-06-01 12:44 ` Chen Wandun
  3 siblings, 0 replies; 10+ messages in thread
From: Chen Wandun @ 2022-06-01 12:44 UTC (permalink / raw)
  To: hughd, akpm, linux-mm, linux-kernel; +Cc: chenwandun

In shmem_get_unmapped_area(), inflated_offset and offset are
unsigned long, so the subtraction underflows when offset is below
inflated_offset, which is a little confusing. No functional change.
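
For example (values chosen only for illustration, with
HPAGE_PMD_SIZE = 2 MiB = 0x200000): if offset = 0x1000 and
inflated_offset = 0x5000, the old code computes offset - inflated_offset
= -0x4000, which wraps around as an unsigned long; the following
"+ HPAGE_PMD_SIZE" only lands on the intended address
(inflated_addr + 0x1fc000) thanks to modular arithmetic. The reworked
code reaches the same address with an explicit subtraction:

	if (offset > inflated_offset)
		inflated_addr += offset - inflated_offset;	/* no underflow */
	else if (offset < inflated_offset) {
		inflated_addr -= inflated_offset - offset;	/* -0x4000 */
		inflated_addr += HPAGE_PMD_SIZE;		/* +0x200000 */
	}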

Signed-off-by: Chen Wandun <chenwandun@huawei.com>
---
 mm/shmem.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index ca04f3975a8a..f12163bd0f69 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2201,9 +2201,12 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 		return addr;
 
 	inflated_offset = inflated_addr & (HPAGE_PMD_SIZE-1);
-	inflated_addr += offset - inflated_offset;
-	if (inflated_offset > offset)
+	if (offset > inflated_offset)
+		inflated_addr += offset - inflated_offset;
+	else if (offset < inflated_offset) {
+		inflated_addr -= inflated_offset - offset;
 		inflated_addr += HPAGE_PMD_SIZE;
+	}
 
 	if (inflated_addr > TASK_SIZE - len)
 		return addr;
-- 
2.25.1



* Re: [PATCH 1/4] mm/shmem: check return value of shmem_init_inodecache
  2022-06-01 12:44 ` [PATCH 1/4] mm/shmem: check return value of shmem_init_inodecache Chen Wandun
@ 2022-06-01 12:54   ` Matthew Wilcox
  2022-06-01 13:34     ` Kefeng Wang
  0 siblings, 1 reply; 10+ messages in thread
From: Matthew Wilcox @ 2022-06-01 12:54 UTC (permalink / raw)
  To: Chen Wandun; +Cc: hughd, akpm, linux-mm, linux-kernel

On Wed, Jun 01, 2022 at 08:44:14PM +0800, Chen Wandun wrote:
> -static void shmem_init_inodecache(void)
> +static struct kmem_cache *shmem_init_inodecache(void)
>  {
>  	shmem_inode_cachep = kmem_cache_create("shmem_inode_cache",
>  				sizeof(struct shmem_inode_info),
>  				0, SLAB_PANIC|SLAB_ACCOUNT, shmem_init_inode);
> +
> +	return shmem_inode_cachep;
>  }
>  
>  static void shmem_destroy_inodecache(void)
> @@ -3924,7 +3926,10 @@ void __init shmem_init(void)
>  {
>  	int error;
>  
> -	shmem_init_inodecache();
> +	if (!shmem_init_inodecache()) {
> +		error = -ENOMEM;
> +		goto out2;
> +	}

better to return the errno from shmem_init_inodecache():

	error = shmem_init_inodecache();
	if (error)
		goto out2;


* Re: [PATCH 1/4] mm/shmem: check return value of shmem_init_inodecache
  2022-06-01 12:54   ` Matthew Wilcox
@ 2022-06-01 13:34     ` Kefeng Wang
  0 siblings, 0 replies; 10+ messages in thread
From: Kefeng Wang @ 2022-06-01 13:34 UTC (permalink / raw)
  To: Matthew Wilcox, Chen Wandun; +Cc: hughd, akpm, linux-mm, linux-kernel


On 2022/6/1 20:54, Matthew Wilcox wrote:
> On Wed, Jun 01, 2022 at 08:44:14PM +0800, Chen Wandun wrote:
>> -static void shmem_init_inodecache(void)
>> +static struct kmem_cache *shmem_init_inodecache(void)
>>   {
>>   	shmem_inode_cachep = kmem_cache_create("shmem_inode_cache",
>>   				sizeof(struct shmem_inode_info),
>>   				0, SLAB_PANIC|SLAB_ACCOUNT, shmem_init_inode);
>> +
>> +	return shmem_inode_cachep;
>>   }
>>   
>>   static void shmem_destroy_inodecache(void)
>> @@ -3924,7 +3926,10 @@ void __init shmem_init(void)
>>   {
>>   	int error;
>>   
>> -	shmem_init_inodecache();
>> +	if (!shmem_init_inodecache()) {
>> +		error = -ENOMEM;
>> +		goto out2;
>> +	}
> better to return the errno from shmem_init_inodecache():
>
> 	error = shmem_init_inodecache();

kmem_cache_create() returns a pointer to the cache on success and NULL on
failure, so error = -ENOMEM; is right :)

> 	if (error)
> 		goto out2;
>
> .
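
One way to reconcile both remarks, sketched here purely as an illustration
(not something either mail proposes verbatim), is to let
shmem_init_inodecache() map the NULL result to an errno itself:

	static int shmem_init_inodecache(void)
	{
		shmem_inode_cachep = kmem_cache_create("shmem_inode_cache",
					sizeof(struct shmem_inode_info),
					0, SLAB_PANIC|SLAB_ACCOUNT, shmem_init_inode);
		return shmem_inode_cachep ? 0 : -ENOMEM;
	}

	/* and in shmem_init(): */
	error = shmem_init_inodecache();
	if (error)
		goto out2;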


* Re: [PATCH 2/4] mm/shmem: return -EINVAL for addr not PAGE_SIZE aligned
  2022-06-01 12:44 ` [PATCH 2/4] mm/shmem: return -EINVAL for addr not PAGE_SIZE aligned Chen Wandun
@ 2022-06-01 13:37   ` David Hildenbrand
  2022-06-01 14:14     ` David Hildenbrand
  0 siblings, 1 reply; 10+ messages in thread
From: David Hildenbrand @ 2022-06-01 13:37 UTC (permalink / raw)
  To: Chen Wandun, hughd, akpm, linux-mm, linux-kernel

On 01.06.22 14:44, Chen Wandun wrote:
> If addr is not PAGE_SIZE aligned, return -EINVAL directly.

Why is this one to be treated in a special way compared to all of the
other related checks?

> In addition, use the offset_in_page() macro to check whether
> addr is PAGE_SIZE aligned.

Using offset_in_page() LGTM.

> 
> Signed-off-by: Chen Wandun <chenwandun@huawei.com>
> ---
>  mm/shmem.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 80c361c3d82c..1136dd7da9e5 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2143,8 +2143,8 @@ unsigned long shmem_get_unmapped_area(struct file *file,
>  		return addr;
>  	if (IS_ERR_VALUE(addr))
>  		return addr;
> -	if (addr & ~PAGE_MASK)
> -		return addr;
> +	if (offset_in_page(addr))
> +		return -EINVAL;
>  	if (addr > TASK_SIZE - len)
>  		return addr;
>  
> @@ -2197,7 +2197,7 @@ unsigned long shmem_get_unmapped_area(struct file *file,
>  	inflated_addr = get_area(NULL, uaddr, inflated_len, 0, flags);
>  	if (IS_ERR_VALUE(inflated_addr))
>  		return addr;
> -	if (inflated_addr & ~PAGE_MASK)
> +	if (offset_in_page(inflated_addr))
>  		return addr;
>  
>  	inflated_offset = inflated_addr & (HPAGE_PMD_SIZE-1);


-- 
Thanks,

David / dhildenb



* Re: [PATCH 2/4] mm/shmem: return -EINVAL for addr not PAGE_SIZE aligned
  2022-06-01 13:37   ` David Hildenbrand
@ 2022-06-01 14:14     ` David Hildenbrand
  2022-06-05  2:05       ` Chen Wandun
  0 siblings, 1 reply; 10+ messages in thread
From: David Hildenbrand @ 2022-06-01 14:14 UTC (permalink / raw)
  To: Chen Wandun, hughd, akpm, linux-mm, linux-kernel

On 01.06.22 15:37, David Hildenbrand wrote:
> On 01.06.22 14:44, Chen Wandun wrote:
>> If addr is not PAGE_SIZE aligned, return -EINVAL directly.
> 
> Why is this one to be treated in a special way compared to all of the
> other related checks?

Ah, I see you modify other places in other patches. Maybe just combine
all these return value changes into a single patch? That makes it look
less "special".



-- 
Thanks,

David / dhildenb



* Re: [PATCH 2/4] mm/shmem: return -EINVAL for addr not PAGE_SIZE aligned
  2022-06-01 14:14     ` David Hildenbrand
@ 2022-06-05  2:05       ` Chen Wandun
  0 siblings, 0 replies; 10+ messages in thread
From: Chen Wandun @ 2022-06-05  2:05 UTC (permalink / raw)
  To: David Hildenbrand, hughd, akpm, linux-mm, linux-kernel



On 2022/6/1 22:14, David Hildenbrand wrote:
> On 01.06.22 15:37, David Hildenbrand wrote:
>> On 01.06.22 14:44, Chen Wandun wrote:
>>> If addr is not PAGE_SIZE aligned, return -EINVAL directly.
>> Why is this one to be treated in a special way compared to all of the
>> other related checks?
> Ah, I see you modify other places in other patches. Maybe just combine
> all these return value changes into a single patch? That makes it look
> less "special".
OK, patch 2 and patch 3 both change the code to return an error value
directly; I will combine them into a single patch in the next version.

Thanks.
>
>
>

