* [PATCH v2] mm/slub.c: replace kmem_cache->cpu_partial with wrapped APIs
@ 2020-02-19  2:32 qiwuchen55
  2020-02-22  3:40 ` Christopher Lameter
  0 siblings, 1 reply; 4+ messages in thread
From: qiwuchen55 @ 2020-02-19  2:32 UTC (permalink / raw)
  To: cl, penberg, rientjes, iamjoonsoo.kim, akpm; +Cc: linux-mm, chenqiwu

From: chenqiwu <chenqiwu@xiaomi.com>

There are slub_cpu_partial() and slub_set_cpu_partial() APIs that wrap
accesses to kmem_cache->cpu_partial. Use these two APIs to replace the
direct accesses to kmem_cache->cpu_partial in the slub code.

Signed-off-by: chenqiwu <chenqiwu@xiaomi.com>
---
Changes in v2:
 - reworded the commit title.
---
 mm/slub.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 17dc00e..1eb888c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2284,7 +2284,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 		if (oldpage) {
 			pobjects = oldpage->pobjects;
 			pages = oldpage->pages;
-			if (drain && pobjects > s->cpu_partial) {
+			if (drain && pobjects > slub_cpu_partial(s)) {
 				unsigned long flags;
 				/*
 				 * partial array is full. Move the existing
@@ -2309,7 +2309,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 
 	} while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page)
 								!= oldpage);
-	if (unlikely(!s->cpu_partial)) {
+	if (unlikely(!slub_cpu_partial(s))) {
 		unsigned long flags;
 
 		local_irq_save(flags);
@@ -3493,15 +3493,15 @@ static void set_cpu_partial(struct kmem_cache *s)
 	 *    50% to keep some capacity around for frees.
 	 */
 	if (!kmem_cache_has_cpu_partial(s))
-		s->cpu_partial = 0;
+		slub_set_cpu_partial(s, 0);
 	else if (s->size >= PAGE_SIZE)
-		s->cpu_partial = 2;
+		slub_set_cpu_partial(s, 2);
 	else if (s->size >= 1024)
-		s->cpu_partial = 6;
+		slub_set_cpu_partial(s, 6);
 	else if (s->size >= 256)
-		s->cpu_partial = 13;
+		slub_set_cpu_partial(s, 13);
 	else
-		s->cpu_partial = 30;
+		slub_set_cpu_partial(s, 30);
 #endif
 }
 
-- 
1.9.1




* Re: [PATCH v2] mm/slub.c: replace kmem_cache->cpu_partial with wrapped APIs
  2020-02-19  2:32 [PATCH v2] mm/slub.c: replace kmem_cache->cpu_partial with wrapped APIs qiwuchen55
@ 2020-02-22  3:40 ` Christopher Lameter
  2020-02-26 18:13   ` Vlastimil Babka
  0 siblings, 1 reply; 4+ messages in thread
From: Christopher Lameter @ 2020-02-22  3:40 UTC (permalink / raw)
  To: qiwuchen55; +Cc: penberg, rientjes, iamjoonsoo.kim, akpm, linux-mm, chenqiwu

On Wed, 19 Feb 2020, qiwuchen55@gmail.com wrote:

> diff --git a/mm/slub.c b/mm/slub.c
> index 17dc00e..1eb888c 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2284,7 +2284,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
>  		if (oldpage) {
>  			pobjects = oldpage->pobjects;
>  			pages = oldpage->pages;
> -			if (drain && pobjects > s->cpu_partial) {
> +			if (drain && pobjects > slub_cpu_partial(s)) {
>  				unsigned long flags;
>  				/*
>  				 * partial array is full. Move the existing

Maybe it's better not to generate code for put_cpu_partial() at all when
per-CPU partials are disabled, instead of using macros there?




* Re: [PATCH v2] mm/slub.c: replace kmem_cache->cpu_partial with wrapped APIs
  2020-02-22  3:40 ` Christopher Lameter
@ 2020-02-26 18:13   ` Vlastimil Babka
  2020-02-27  1:47     ` Andrew Morton
  0 siblings, 1 reply; 4+ messages in thread
From: Vlastimil Babka @ 2020-02-26 18:13 UTC (permalink / raw)
  To: Christopher Lameter, qiwuchen55
  Cc: penberg, rientjes, iamjoonsoo.kim, akpm, linux-mm, chenqiwu

On 2/22/20 4:40 AM, Christopher Lameter wrote:
> On Wed, 19 Feb 2020, qiwuchen55@gmail.com wrote:
> 
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 17dc00e..1eb888c 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -2284,7 +2284,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
>>  		if (oldpage) {
>>  			pobjects = oldpage->pobjects;
>>  			pages = oldpage->pages;
>> -			if (drain && pobjects > s->cpu_partial) {
>> +			if (drain && pobjects > slub_cpu_partial(s)) {
>>  				unsigned long flags;
>>  				/*
>>  				 * partial array is full. Move the existing
> 
> Maybe it's better not to generate code for put_cpu_partial() at all when
> per-CPU partials are disabled, instead of using macros there?

The whole code of put_cpu_partial() is already under
#ifdef CONFIG_SLUB_CPU_PARTIAL.

I agree that the wrapper shouldn't be used in a function that deals only with
the case that the partials do exist. It just obscures the code unnecessarily.




* Re: [PATCH v2] mm/slub.c: replace kmem_cache->cpu_partial with wrapped APIs
  2020-02-26 18:13   ` Vlastimil Babka
@ 2020-02-27  1:47     ` Andrew Morton
  0 siblings, 0 replies; 4+ messages in thread
From: Andrew Morton @ 2020-02-27  1:47 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Christopher Lameter, qiwuchen55, penberg, rientjes,
	iamjoonsoo.kim, linux-mm, chenqiwu

On Wed, 26 Feb 2020 19:13:31 +0100 Vlastimil Babka <vbabka@suse.cz> wrote:

> On 2/22/20 4:40 AM, Christopher Lameter wrote:
> > On Wed, 19 Feb 2020, qiwuchen55@gmail.com wrote:
> > 
> >> diff --git a/mm/slub.c b/mm/slub.c
> >> index 17dc00e..1eb888c 100644
> >> --- a/mm/slub.c
> >> +++ b/mm/slub.c
> >> @@ -2284,7 +2284,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
> >>  		if (oldpage) {
> >>  			pobjects = oldpage->pobjects;
> >>  			pages = oldpage->pages;
> >> -			if (drain && pobjects > s->cpu_partial) {
> >> +			if (drain && pobjects > slub_cpu_partial(s)) {
> >>  				unsigned long flags;
> >>  				/*
> >>  				 * partial array is full. Move the existing
> > 
> > Maybe it's better not to generate code for put_cpu_partial() at all when
> > per-CPU partials are disabled, instead of using macros there?
> 
> The whole code of put_cpu_partial() is already under
> #ifdef CONFIG_SLUB_CPU_PARTIAL.
> 
> I agree that the wrapper shouldn't be used in a function that deals only with
> the case that the partials do exist. It just obscures the code unnecessarily.

Yes, I scratched my head over this and decided to go ahead with the
patches.  Given that we have an accessor for ->cpu_partial, it's best
that it be used consistently.  The fact that the wrapper is there to
avoid ifdefs is an implementation detail which is private to the header
files and Kconfig system.

IOW, the way we open-code it in some places and use the accessor in
other places also obscures the code.  Which is preferable?  I don't see
a clear answer, but lean towards consistency.


