linux-kernel.vger.kernel.org archive mirror
* [STABLE PATCH] slub: make ->cpu_partial unsigned int
@ 2018-09-30 10:28 zhong jiang
  2018-09-30 12:37 ` Greg KH
  2018-09-30 12:50 ` Matthew Wilcox
  0 siblings, 2 replies; 10+ messages in thread
From: zhong jiang @ 2018-09-30 10:28 UTC (permalink / raw)
  To: gregkh
  Cc: cl, penberg, rientjes, iamjoonsoo.kim, akpm, mhocko, mgorman,
	vbabka, andrea, kirill, linux-mm, linux-kernel

From: Alexey Dobriyan <adobriyan@gmail.com>

[ Upstream commit e5d9998f3e09359b372a037a6ac55ba235d95d57 ]

        /*
         * cpu_partial determined the maximum number of objects
         * kept in the per cpu partial lists of a processor.
         */

Can't be negative.

I hit a real issue that results in a large memory leak. Slab freeing
can happen in interrupt context, so put_cpu_partial() can be
interrupted more than once. Because lru and pobjects share a union in
struct page, when another core manipulates the page->lru list (for
example, remove_partial() in the slab-freeing path), pobjects can end
up holding a negative value (0xdead0000). As a result, a large number
of slabs get added to the per-cpu partial list.
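
As a rough, hypothetical sketch of how the poison value lands in
pobjects (a heavily simplified stand-in for struct page, assuming a
64-bit little-endian machine; the real layout and poison constant
depend on kernel version and config):

#include <stdio.h>

/* Toy stand-in: the list_head lru shares storage with SLUB's
 * per-cpu partial bookkeeping (next/pages/pobjects). This is NOT
 * the real struct page, which has many more members. */
struct fake_page {
	union {
		struct {
			struct fake_page *next;
			struct fake_page *prev;
		} lru;
		struct {
			struct fake_page *partial_next;
			int pages;
			int pobjects;
		};
	};
};

int main(void)
{
	struct fake_page page = {0};

	/* list_del() in remove_partial() poisons lru.prev; on 64-bit
	 * kernels LIST_POISON2 is typically 0xdead000000000200. */
	page.lru.prev = (struct fake_page *)0xdead000000000200UL;

	/* pobjects aliases the upper 32 bits of lru.prev, so it now
	 * reads back as 0xdead0000, which is negative as a signed int. */
	printf("pobjects = 0x%x (%d)\n",
	       (unsigned int)page.pobjects, page.pobjects);
	return 0;
}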

I posted this issue to the community before; the detailed description is as follows.

https://www.spinics.net/lists/kernel/msg2870979.html

After applying the patch, the issue is fixed, so the patch is an
effective bugfix. It should go into stable.
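
For illustration only, here is a minimal userspace sketch of why the
signed type masks the poison. In put_cpu_partial(), the per-cpu list
is drained once pobjects exceeds s->cpu_partial; the names below are
illustrative, not the kernel code:

#include <stdio.h>

int main(void)
{
	int cpu_partial = 30;			/* typical limit */
	int pobjects = (int)0xdead0000u;	/* corrupted counter */

	/* Signed compare: -559087616 > 30 is false, so the drain
	 * path never runs and slabs keep piling up (the leak). */
	printf("signed:   drain? %s\n",
	       pobjects > cpu_partial ? "yes" : "no");

	/* With unsigned types (this patch), 0xdead0000 is about 3.7
	 * billion, far above the limit, so the drain path fires. */
	printf("unsigned: drain? %s\n",
	       (unsigned int)pobjects > (unsigned int)cpu_partial
	       ? "yes" : "no");
	return 0;
}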

Link: http://lkml.kernel.org/r/20180305200730.15812-15-adobriyan@gmail.com
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: <stable@vger.kernel.org> # 4.4.x
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: zhong jiang <zhongjiang@huawei.com>
---
 include/linux/slub_def.h | 3 ++-
 mm/slub.c                | 6 +++---
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 3388511..9b681f2 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -67,7 +67,8 @@ struct kmem_cache {
 	int size;		/* The size of an object including meta data */
 	int object_size;	/* The size of an object without meta data */
 	int offset;		/* Free pointer offset. */
-	int cpu_partial;	/* Number of per cpu partial objects to keep around */
+	/* Number of per cpu partial objects to keep around */
+	unsigned int cpu_partial;
 	struct kmem_cache_order_objects oo;
 
 	/* Allocation and freeing of slabs */
diff --git a/mm/slub.c b/mm/slub.c
index 2284c43..c33b0e1 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1661,7 +1661,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 {
 	struct page *page, *page2;
 	void *object = NULL;
-	int available = 0;
+	unsigned int available = 0;
 	int objects;
 
 	/*
@@ -4674,10 +4674,10 @@ static ssize_t cpu_partial_show(struct kmem_cache *s, char *buf)
 static ssize_t cpu_partial_store(struct kmem_cache *s, const char *buf,
 				 size_t length)
 {
-	unsigned long objects;
+	unsigned int objects;
 	int err;
 
-	err = kstrtoul(buf, 10, &objects);
+	err = kstrtouint(buf, 10, &objects);
 	if (err)
 		return err;
 	if (objects && !kmem_cache_has_cpu_partial(s))
-- 
1.7.12.4
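
A note on the sysfs hunk above: the old code parsed the value with
kstrtoul() into an unsigned long and then assigned it to the int
cpu_partial, so on 64-bit kernels a large value could silently
truncate; kstrtouint() rejects anything above UINT_MAX up front. A
hedged userspace approximation (parse_uint() is a made-up helper, not
a kernel API):

#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

/* Parse into a wide type, then range-check before narrowing;
 * roughly what kstrtouint() guarantees in a single call. */
static int parse_uint(const char *s, unsigned int *out)
{
	char *end;
	unsigned long val;

	errno = 0;
	val = strtoul(s, &end, 10);
	if (errno == ERANGE || val > UINT_MAX)
		return -ERANGE;	/* the check the old int path lacked */
	if (end == s || *end != '\0')
		return -EINVAL;
	*out = (unsigned int)val;
	return 0;
}

int main(void)
{
	unsigned int objects = 0;

	/* 2^32 fits in a 64-bit unsigned long; the old path would
	 * have truncated it on assignment to a plain int. */
	printf("4294967296 -> %d\n", parse_uint("4294967296", &objects));
	printf("30         -> %d (objects=%u)\n",
	       parse_uint("30", &objects), objects);
	return 0;
}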


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [STABLE PATCH] slub: make ->cpu_partial unsigned int
  2018-09-30 10:28 [STABLE PATCH] slub: make ->cpu_partial unsigned int zhong jiang
@ 2018-09-30 12:37 ` Greg KH
  2018-09-30 12:50 ` Matthew Wilcox
  1 sibling, 0 replies; 10+ messages in thread
From: Greg KH @ 2018-09-30 12:37 UTC (permalink / raw)
  To: zhong jiang
  Cc: cl, penberg, rientjes, iamjoonsoo.kim, akpm, mhocko, mgorman,
	vbabka, andrea, kirill, linux-mm, linux-kernel

On Sun, Sep 30, 2018 at 06:28:21PM +0800, zhong jiang wrote:
> From: Alexey Dobriyan <adobriyan@gmail.com>
> 
> [ Upstream commit e5d9998f3e09359b372a037a6ac55ba235d95d57 ]
> 
>         /*
>          * cpu_partial determined the maximum number of objects
>          * kept in the per cpu partial lists of a processor.
>          */
> 
> Can't be negative.
> 
> I hit a real issue that results in a large memory leak. Slab freeing
> can happen in interrupt context, so put_cpu_partial() can be
> interrupted more than once. Because lru and pobjects share a union in
> struct page, when another core manipulates the page->lru list (for
> example, remove_partial() in the slab-freeing path), pobjects can end
> up holding a negative value (0xdead0000). As a result, a large number
> of slabs get added to the per-cpu partial list.
> 
> I posted this issue to the community before; the detailed description is as follows.
> 
> https://www.spinics.net/lists/kernel/msg2870979.html
> 
> After applying the patch, the issue is fixed, so the patch is an
> effective bugfix. It should go into stable.
> 
> Link: http://lkml.kernel.org/r/20180305200730.15812-15-adobriyan@gmail.com
> Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
> Acked-by: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: <stable@vger.kernel.org> # 4.4.x

This didn't apply to 4.14.y. Also, is there a reason you didn't cc: the
stable mailing list so the other stable developers could see it?

I've fixed up the patch, but next time please always cc: the stable
list.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [STABLE PATCH] slub: make ->cpu_partial unsigned int
  2018-09-30 10:28 [STABLE PATCH] slub: make ->cpu_partial unsigned int zhong jiang
  2018-09-30 12:37 ` Greg KH
@ 2018-09-30 12:50 ` Matthew Wilcox
  2018-09-30 13:10   ` Greg KH
  1 sibling, 1 reply; 10+ messages in thread
From: Matthew Wilcox @ 2018-09-30 12:50 UTC (permalink / raw)
  To: zhong jiang
  Cc: gregkh, cl, penberg, rientjes, iamjoonsoo.kim, akpm, mhocko,
	mgorman, vbabka, andrea, kirill, linux-mm, linux-kernel

On Sun, Sep 30, 2018 at 06:28:21PM +0800, zhong jiang wrote:
> From: Alexey Dobriyan <adobriyan@gmail.com>
> 
> [ Upstream commit e5d9998f3e09359b372a037a6ac55ba235d95d57 ]
> 
>         /*
>          * cpu_partial determined the maximum number of objects
>          * kept in the per cpu partial lists of a processor.
>          */
> 
> Can't be negative.
> 
> I hit a real issue that results in a large memory leak. Slab freeing
> can happen in interrupt context, so put_cpu_partial() can be
> interrupted more than once. Because lru and pobjects share a union in
> struct page, when another core manipulates the page->lru list (for
> example, remove_partial() in the slab-freeing path), pobjects can end
> up holding a negative value (0xdead0000). As a result, a large number
> of slabs get added to the per-cpu partial list.
> 
> I posted this issue to the community before; the detailed description is as follows.
> 
> https://www.spinics.net/lists/kernel/msg2870979.html
> 
> After applying the patch, the issue is fixed, so the patch is an
> effective bugfix. It should go into stable.
> 
> Link: http://lkml.kernel.org/r/20180305200730.15812-15-adobriyan@gmail.com
> Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
> Acked-by: Christoph Lameter <cl@linux.com>

Hang on.  Christoph acked the _original_ patch going into upstream.
When he reviewed this patch for _stable_ last week, he asked for more
investigation.  Including this patch in stable is misleading.

> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: <stable@vger.kernel.org> # 4.4.x
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
> Signed-off-by: zhong jiang <zhongjiang@huawei.com>
> ---
>  include/linux/slub_def.h | 3 ++-
>  mm/slub.c                | 6 +++---
>  2 files changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> index 3388511..9b681f2 100644
> --- a/include/linux/slub_def.h
> +++ b/include/linux/slub_def.h
> @@ -67,7 +67,8 @@ struct kmem_cache {
>  	int size;		/* The size of an object including meta data */
>  	int object_size;	/* The size of an object without meta data */
>  	int offset;		/* Free pointer offset. */
> -	int cpu_partial;	/* Number of per cpu partial objects to keep around */
> +	/* Number of per cpu partial objects to keep around */
> +	unsigned int cpu_partial;
>  	struct kmem_cache_order_objects oo;
>  
>  	/* Allocation and freeing of slabs */
> diff --git a/mm/slub.c b/mm/slub.c
> index 2284c43..c33b0e1 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1661,7 +1661,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
>  {
>  	struct page *page, *page2;
>  	void *object = NULL;
> -	int available = 0;
> +	unsigned int available = 0;
>  	int objects;
>  
>  	/*
> @@ -4674,10 +4674,10 @@ static ssize_t cpu_partial_show(struct kmem_cache *s, char *buf)
>  static ssize_t cpu_partial_store(struct kmem_cache *s, const char *buf,
>  				 size_t length)
>  {
> -	unsigned long objects;
> +	unsigned int objects;
>  	int err;
>  
> -	err = kstrtoul(buf, 10, &objects);
> +	err = kstrtouint(buf, 10, &objects);
>  	if (err)
>  		return err;
>  	if (objects && !kmem_cache_has_cpu_partial(s))
> -- 
> 1.7.12.4
> 

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [STABLE PATCH] slub: make ->cpu_partial unsigned int
  2018-09-30 12:50 ` Matthew Wilcox
@ 2018-09-30 13:10   ` Greg KH
  2018-09-30 13:23     ` Matthew Wilcox
  0 siblings, 1 reply; 10+ messages in thread
From: Greg KH @ 2018-09-30 13:10 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: zhong jiang, cl, penberg, rientjes, iamjoonsoo.kim, akpm, mhocko,
	mgorman, vbabka, andrea, kirill, linux-mm, linux-kernel

On Sun, Sep 30, 2018 at 05:50:38AM -0700, Matthew Wilcox wrote:
> On Sun, Sep 30, 2018 at 06:28:21PM +0800, zhong jiang wrote:
> > From: Alexey Dobriyan <adobriyan@gmail.com>
> > 
> > [ Upstream commit e5d9998f3e09359b372a037a6ac55ba235d95d57 ]
> > 
> >         /*
> >          * cpu_partial determined the maximum number of objects
> >          * kept in the per cpu partial lists of a processor.
> >          */
> > 
> > Can't be negative.
> > 
> > I hit a real issue that results in a large memory leak. Slab freeing
> > can happen in interrupt context, so put_cpu_partial() can be
> > interrupted more than once. Because lru and pobjects share a union in
> > struct page, when another core manipulates the page->lru list (for
> > example, remove_partial() in the slab-freeing path), pobjects can end
> > up holding a negative value (0xdead0000). As a result, a large number
> > of slabs get added to the per-cpu partial list.
> > 
> > I posted this issue to the community before; the detailed description is as follows.
> > 
> > https://www.spinics.net/lists/kernel/msg2870979.html
> > 
> > After applying the patch, the issue is fixed, so the patch is an
> > effective bugfix. It should go into stable.
> > 
> > Link: http://lkml.kernel.org/r/20180305200730.15812-15-adobriyan@gmail.com
> > Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
> > Acked-by: Christoph Lameter <cl@linux.com>
> 
> Hang on.  Christoph acked the _original_ patch going into upstream.
> When he reviewed this patch for _stable_ last week, he asked for more
> investigation.  Including this patch in stable is misleading.

But the original patch has been in upstream for a long time now (it went
into 4.17-rc1).  If there was a real problem here, wouldn't it have
been resolved already?

And the patch in mainline has Christoph's ack...

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [STABLE PATCH] slub: make ->cpu_partial unsigned int
  2018-09-30 13:10   ` Greg KH
@ 2018-09-30 13:23     ` Matthew Wilcox
  2018-10-02 14:50       ` Christopher Lameter
  0 siblings, 1 reply; 10+ messages in thread
From: Matthew Wilcox @ 2018-09-30 13:23 UTC (permalink / raw)
  To: Greg KH
  Cc: zhong jiang, cl, penberg, rientjes, iamjoonsoo.kim, akpm, mhocko,
	mgorman, vbabka, andrea, kirill, linux-mm, linux-kernel

On Sun, Sep 30, 2018 at 06:10:26AM -0700, Greg KH wrote:
> On Sun, Sep 30, 2018 at 05:50:38AM -0700, Matthew Wilcox wrote:
> > On Sun, Sep 30, 2018 at 06:28:21PM +0800, zhong jiang wrote:
> > > From: Alexey Dobriyan <adobriyan@gmail.com>
> > > 
> > > [ Upstream commit e5d9998f3e09359b372a037a6ac55ba235d95d57 ]
> > > 
> > >         /*
> > >          * cpu_partial determined the maximum number of objects
> > >          * kept in the per cpu partial lists of a processor.
> > >          */
> > > 
> > > Can't be negative.
> > > 
> > > I hit a real issue that results in a large memory leak. Slab freeing
> > > can happen in interrupt context, so put_cpu_partial() can be
> > > interrupted more than once. Because lru and pobjects share a union in
> > > struct page, when another core manipulates the page->lru list (for
> > > example, remove_partial() in the slab-freeing path), pobjects can end
> > > up holding a negative value (0xdead0000). As a result, a large number
> > > of slabs get added to the per-cpu partial list.
> > > 
> > > I posted this issue to the community before; the detailed description is as follows.
> > > 
> > > https://www.spinics.net/lists/kernel/msg2870979.html
> > > 
> > > After applying the patch, the issue is fixed, so the patch is an
> > > effective bugfix. It should go into stable.
> > > 
> > > Link: http://lkml.kernel.org/r/20180305200730.15812-15-adobriyan@gmail.com
> > > Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
> > > Acked-by: Christoph Lameter <cl@linux.com>
> > 
> > Hang on.  Christoph acked the _original_ patch going into upstream.
> > When he reviewed this patch for _stable_ last week, he asked for more
> > investigation.  Including this patch in stable is misleading.
> 
> But the original patch has been in upstream for a long time now (it went
> into 4.17-rc1).  If there was a real problem here, whouldn't it have
> been resolved already?
> 
> And the patch in mainline has Christoph's ack...

I'm not saying there's a problem with the patch.  It's that the rationale
for backporting doesn't make any damned sense.  There's something going
on that nobody understands.  This patch is probably masking an underlying
problem that will pop back up and bite us again someday.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [STABLE PATCH] slub: make ->cpu_partial unsigned int
  2018-09-30 13:23     ` Matthew Wilcox
@ 2018-10-02 14:50       ` Christopher Lameter
  0 siblings, 0 replies; 10+ messages in thread
From: Christopher Lameter @ 2018-10-02 14:50 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Greg KH, zhong jiang, penberg, rientjes, iamjoonsoo.kim, akpm,
	mhocko, mgorman, vbabka, andrea, kirill, linux-mm, linux-kernel

On Sun, 30 Sep 2018, Matthew Wilcox wrote:

> > And the patch in mainline has Christoph's ack...
>
> I'm not saying there's a problem with the patch.  It's that the rationale
> for backporting doesn't make any damned sense.  There's something going
> on that nobody understands.  This patch is probably masking an underlying
> problem that will pop back up and bite us again someday.

Right. That is why I raised the issue. I do not see any harm in
backporting, but I do not think it fixes the real issue, which may lie
in concurrent use of page struct fields that overlap.


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [STABLE PATCH] slub: make ->cpu_partial unsigned int
  2018-09-27 15:46 ` Greg KH
@ 2018-09-28  8:06   ` zhong jiang
  0 siblings, 0 replies; 10+ messages in thread
From: zhong jiang @ 2018-09-28  8:06 UTC (permalink / raw)
  To: Greg KH
  Cc: iamjoonsoo.kim, rientjes, cl, penberg, akpm, mhocko, linux-mm,
	linux-kernel, stable

On 2018/9/27 23:46, Greg KH wrote:
> On Thu, Sep 27, 2018 at 10:43:40PM +0800, zhong jiang wrote:
>> From: Alexey Dobriyan <adobriyan@gmail.com>
>>
>>         /*
>>          * cpu_partial determined the maximum number of objects
>>          * kept in the per cpu partial lists of a processor.
>>          */
>>
>> Can't be negative.
>>
>> I hit a real issue that results in a large memory leak. Slab freeing
>> can happen in interrupt context, so put_cpu_partial() can be
>> interrupted more than once. Because lru and pobjects share a union in
>> struct page, when another core manipulates the page->lru list (for
>> example, remove_partial() in the slab-freeing path), pobjects can end
>> up holding a negative value (0xdead0000). As a result, a large number
>> of slabs get added to the per-cpu partial list.
>>
>> I posted this issue to the community before; the detailed description is as follows.
>>
>> Link: https://www.spinics.net/lists/kernel/msg2870979.html
>>
>> After applying the patch, the issue is fixed, so the patch is an
>> effective bugfix. It should go into stable.
> <formletter>
>
> This is not the correct way to submit patches for inclusion in the
> stable kernel tree.  Please read:
>     https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
> for how to do this properly.
>
> </formletter>
Will resend with proper format.

Thanks,
zhong jiang


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [STABLE PATCH] slub: make ->cpu_partial unsigned int
  2018-09-27 14:43 zhong jiang
  2018-09-27 15:26 ` Christopher Lameter
@ 2018-09-27 15:46 ` Greg KH
  2018-09-28  8:06   ` zhong jiang
  1 sibling, 1 reply; 10+ messages in thread
From: Greg KH @ 2018-09-27 15:46 UTC (permalink / raw)
  To: zhong jiang
  Cc: iamjoonsoo.kim, rientjes, cl, penberg, akpm, mhocko, linux-mm,
	linux-kernel, stable

On Thu, Sep 27, 2018 at 10:43:40PM +0800, zhong jiang wrote:
> From: Alexey Dobriyan <adobriyan@gmail.com>
> 
>         /*
>          * cpu_partial determined the maximum number of objects
>          * kept in the per cpu partial lists of a processor.
>          */
> 
> Can't be negative.
> 
> I hit a real issue that results in a large memory leak. Slab freeing
> can happen in interrupt context, so put_cpu_partial() can be
> interrupted more than once. Because lru and pobjects share a union in
> struct page, when another core manipulates the page->lru list (for
> example, remove_partial() in the slab-freeing path), pobjects can end
> up holding a negative value (0xdead0000). As a result, a large number
> of slabs get added to the per-cpu partial list.
> 
> I posted this issue to the community before; the detailed description is as follows.
> 
> Link: https://www.spinics.net/lists/kernel/msg2870979.html
> 
> After applying the patch, the issue is fixed, so the patch is an
> effective bugfix. It should go into stable.

<formletter>

This is not the correct way to submit patches for inclusion in the
stable kernel tree.  Please read:
    https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.

</formletter>

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [STABLE PATCH] slub: make ->cpu_partial unsigned int
  2018-09-27 14:43 zhong jiang
@ 2018-09-27 15:26 ` Christopher Lameter
  2018-09-27 15:46 ` Greg KH
  1 sibling, 0 replies; 10+ messages in thread
From: Christopher Lameter @ 2018-09-27 15:26 UTC (permalink / raw)
  To: zhong jiang
  Cc: gregkh, iamjoonsoo.kim, rientjes, penberg, akpm, mhocko,
	linux-mm, linux-kernel, stable

On Thu, 27 Sep 2018, zhong jiang wrote:

> From: Alexey Dobriyan <adobriyan@gmail.com>
>
>         /*
>          * cpu_partial determined the maximum number of objects
>          * kept in the per cpu partial lists of a processor.
>          */
>
> Can't be negative.

True.

> I hit a real issue that results in a large memory leak. Slab freeing
> can happen in interrupt context, so put_cpu_partial() can be
> interrupted more than once. Because lru and pobjects share a union in
> struct page, when another core manipulates the page->lru list (for
> example, remove_partial() in the slab-freeing path), pobjects can end
> up holding a negative value (0xdead0000). As a result, a large number
> of slabs get added to the per-cpu partial list.

That sounds like it needs more investigation. Concurrent use of page
fields for other purposes can cause serious bugs.

>
> I posted this issue to the community before; the detailed description is as follows.

I did not see it. Please make sure to CC the maintainers.


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [STABLE PATCH] slub: make ->cpu_partial unsigned int
@ 2018-09-27 14:43 zhong jiang
  2018-09-27 15:26 ` Christopher Lameter
  2018-09-27 15:46 ` Greg KH
  0 siblings, 2 replies; 10+ messages in thread
From: zhong jiang @ 2018-09-27 14:43 UTC (permalink / raw)
  To: gregkh
  Cc: iamjoonsoo.kim, rientjes, cl, penberg, akpm, mhocko, linux-mm,
	linux-kernel, stable

From: Alexey Dobriyan <adobriyan@gmail.com>

        /*
         * cpu_partial determined the maximum number of objects
         * kept in the per cpu partial lists of a processor.
         */

Can't be negative.

I hit a real issue that results in a large memory leak. Slab freeing
can happen in interrupt context, so put_cpu_partial() can be
interrupted more than once. Because lru and pobjects share a union in
struct page, when another core manipulates the page->lru list (for
example, remove_partial() in the slab-freeing path), pobjects can end
up holding a negative value (0xdead0000). As a result, a large number
of slabs get added to the per-cpu partial list.

I posted this issue to the community before; the detailed description is as follows.

Link: https://www.spinics.net/lists/kernel/msg2870979.html

After applying the patch, the issue is fixed, so the patch is an
effective bugfix. It should go into stable.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: stable@vger.kernel.org 
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: zhong jiang <zhongjiang@huawei.com>
---
 include/linux/slub_def.h | 3 ++-
 mm/slub.c                | 6 +++---
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 3388511..9b681f2 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -67,7 +67,8 @@ struct kmem_cache {
 	int size;		/* The size of an object including meta data */
 	int object_size;	/* The size of an object without meta data */
 	int offset;		/* Free pointer offset. */
-	int cpu_partial;	/* Number of per cpu partial objects to keep around */
+	/* Number of per cpu partial objects to keep around */
+	unsigned int cpu_partial;
 	struct kmem_cache_order_objects oo;
 
 	/* Allocation and freeing of slabs */
diff --git a/mm/slub.c b/mm/slub.c
index 2284c43..c33b0e1 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1661,7 +1661,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 {
 	struct page *page, *page2;
 	void *object = NULL;
-	int available = 0;
+	unsigned int available = 0;
 	int objects;
 
 	/*
@@ -4674,10 +4674,10 @@ static ssize_t cpu_partial_show(struct kmem_cache *s, char *buf)
 static ssize_t cpu_partial_store(struct kmem_cache *s, const char *buf,
 				 size_t length)
 {
-	unsigned long objects;
+	unsigned int objects;
 	int err;
 
-	err = kstrtoul(buf, 10, &objects);
+	err = kstrtouint(buf, 10, &objects);
 	if (err)
 		return err;
 	if (objects && !kmem_cache_has_cpu_partial(s))
-- 
1.7.12.4


^ permalink raw reply related	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2018-10-02 14:50 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-09-30 10:28 [STABLE PATCH] slub: make ->cpu_partial unsigned int zhong jiang
2018-09-30 12:37 ` Greg KH
2018-09-30 12:50 ` Matthew Wilcox
2018-09-30 13:10   ` Greg KH
2018-09-30 13:23     ` Matthew Wilcox
2018-10-02 14:50       ` Christopher Lameter
  -- strict thread matches above, loose matches on Subject: below --
2018-09-27 14:43 zhong jiang
2018-09-27 15:26 ` Christopher Lameter
2018-09-27 15:46 ` Greg KH
2018-09-28  8:06   ` zhong jiang
