* [PATCH 0/4] SLUB: improve filling cpu partial a bit in get_partial_node()
@ 2024-03-31  2:19 xiongwei.song
  2024-03-31  2:19 ` [PATCH 1/4] mm/slub: remove the check of !kmem_cache_has_cpu_partial() xiongwei.song
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: xiongwei.song @ 2024-03-31  2:19 UTC (permalink / raw)
  To: vbabka, rientjes, cl, penberg, iamjoonsoo.kim, akpm,
	roman.gushchin, 42.hyeyoo
  Cc: linux-mm, linux-kernel, chengming.zhou

From: Xiongwei Song <xiongwei.song@windriver.com>

This series removes an unnecessary check when filling the cpu partial
list and improves readability.

Patch 2 introduces slub_get_cpu_partial() and a dummy counterpart to
prevent compile errors when CONFIG_SLUB_CPU_PARTIAL is disabled.
Patches 3 and 4 use the helper.

No functional change.

The series is a follow-up improvement to the patch below:
https://lore.kernel.org/lkml/934f65c6-4d97-6c4d-b123-4937ede24a99@google.com/T/

Regards,
Xiongwei

Xiongwei Song (4):
  mm/slub: remove the check of !kmem_cache_has_cpu_partial()
  mm/slub: add slub_get_cpu_partial() helper
  mm/slub: simplify get_partial_node()
  mm/slub: don't read slab->cpu_partial_slabs directly

 mm/slub.c | 35 +++++++++++++++++++++++------------
 1 file changed, 23 insertions(+), 12 deletions(-)

-- 
2.27.0



* [PATCH 1/4] mm/slub: remove the check of !kmem_cache_has_cpu_partial()
  2024-03-31  2:19 [PATCH 0/4] SLUB: improve filling cpu partial a bit in get_partial_node() xiongwei.song
@ 2024-03-31  2:19 ` xiongwei.song
  2024-04-02  9:45   ` Vlastimil Babka
  2024-03-31  2:19 ` [PATCH 2/4] mm/slub: add slub_get_cpu_partial() helper xiongwei.song
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 13+ messages in thread
From: xiongwei.song @ 2024-03-31  2:19 UTC (permalink / raw)
  To: vbabka, rientjes, cl, penberg, iamjoonsoo.kim, akpm,
	roman.gushchin, 42.hyeyoo
  Cc: linux-mm, linux-kernel, chengming.zhou

From: Xiongwei Song <xiongwei.song@windriver.com>

The check of !kmem_cache_has_cpu_partial(s) with
CONFIG_SLUB_CPU_PARTIAL enabled here is always false. We already know
the result from the earlier kmem_cache_debug() check, so we can remove it.
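
For context, with CONFIG_SLUB_CPU_PARTIAL enabled the helper boils down
to a kmem_cache_debug() check. A simplified sketch of the surrounding
mm/slub.c code (for reference only, not part of the diff below):

#ifdef CONFIG_SLUB_CPU_PARTIAL
static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
{
	/* with the config enabled, only debug caches lack cpu partial lists */
	return !kmem_cache_debug(s);
}
#else
static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
{
	return false;
}
#endif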

Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
---
 mm/slub.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 1bb2a93cf7b6..059922044a4f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2610,8 +2610,7 @@ static struct slab *get_partial_node(struct kmem_cache *s,
 			partial_slabs++;
 		}
 #ifdef CONFIG_SLUB_CPU_PARTIAL
-		if (!kmem_cache_has_cpu_partial(s)
-			|| partial_slabs > s->cpu_partial_slabs / 2)
+		if (partial_slabs > s->cpu_partial_slabs / 2)
 			break;
 #else
 		break;
-- 
2.27.0



* [PATCH 2/4] mm/slub: add slub_get_cpu_partial() helper
  2024-03-31  2:19 [PATCH 0/4] SLUB: improve filling cpu partial a bit in get_partial_node() xiongwei.song
  2024-03-31  2:19 ` [PATCH 1/4] mm/slub: remove the check of !kmem_cache_has_cpu_partial() xiongwei.song
@ 2024-03-31  2:19 ` xiongwei.song
  2024-03-31  2:19 ` [PATCH 3/4] mm/slub: simplify get_partial_node() xiongwei.song
  2024-03-31  2:19 ` [PATCH 4/4] mm/slub: don't read slab->cpu_partial_slabs directly xiongwei.song
  3 siblings, 0 replies; 13+ messages in thread
From: xiongwei.song @ 2024-03-31  2:19 UTC (permalink / raw)
  To: vbabka, rientjes, cl, penberg, iamjoonsoo.kim, akpm,
	roman.gushchin, 42.hyeyoo
  Cc: linux-mm, linux-kernel, chengming.zhou

From: Xiongwei Song <xiongwei.song@windriver.com>

Add slub_get_cpu_partial() and a dummy counterpart to help improve
get_partial_node(). They prevent compile errors when accessing
cpu_partial_slabs with CONFIG_SLUB_CPU_PARTIAL disabled.
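
With this pair in place, common code can read the limit without an
#ifdef, e.g. "partial_slabs > slub_get_cpu_partial(s) / 2", which
patch 3 uses in get_partial_node().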

Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
---
 mm/slub.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 059922044a4f..590cc953895d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -604,11 +604,21 @@ static void slub_set_cpu_partial(struct kmem_cache *s, unsigned int nr_objects)
 	nr_slabs = DIV_ROUND_UP(nr_objects * 2, oo_objects(s->oo));
 	s->cpu_partial_slabs = nr_slabs;
 }
+
+static inline unsigned int slub_get_cpu_partial(struct kmem_cache *s)
+{
+	return s->cpu_partial_slabs;
+}
 #else
 static inline void
 slub_set_cpu_partial(struct kmem_cache *s, unsigned int nr_objects)
 {
 }
+
+static inline unsigned int slub_get_cpu_partial(struct kmem_cache *s)
+{
+	return 0;
+}
 #endif /* CONFIG_SLUB_CPU_PARTIAL */
 
 /*
-- 
2.27.0



* [PATCH 3/4] mm/slub: simplify get_partial_node()
  2024-03-31  2:19 [PATCH 0/4] SLUB: improve filling cpu partial a bit in get_partial_node() xiongwei.song
  2024-03-31  2:19 ` [PATCH 1/4] mm/slub: remove the check of !kmem_cache_has_cpu_partial() xiongwei.song
  2024-03-31  2:19 ` [PATCH 2/4] mm/slub: add slub_get_cpu_partial() helper xiongwei.song
@ 2024-03-31  2:19 ` xiongwei.song
  2024-04-02  9:41   ` Vlastimil Babka
  2024-03-31  2:19 ` [PATCH 4/4] mm/slub: don't read slab->cpu_partial_slabs directly xiongwei.song
  3 siblings, 1 reply; 13+ messages in thread
From: xiongwei.song @ 2024-03-31  2:19 UTC (permalink / raw)
  To: vbabka, rientjes, cl, penberg, iamjoonsoo.kim, akpm,
	roman.gushchin, 42.hyeyoo
  Cc: linux-mm, linux-kernel, chengming.zhou

From: Xiongwei Song <xiongwei.song@windriver.com>

The break conditions can be made more readable and simple.

We can check whether we need to fill the cpu partial list after getting
the first partial slab. If kmem_cache_has_cpu_partial() returns true, we
fill the cpu partial list from the next iteration; otherwise we break
out of the loop.

Then we can remove the preprocessor conditional on
CONFIG_SLUB_CPU_PARTIAL. Use the dummy slub_get_cpu_partial() to keep
the compiler silent.

Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
---
 mm/slub.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 590cc953895d..ec91c7435d4e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2614,18 +2614,20 @@ static struct slab *get_partial_node(struct kmem_cache *s,
 		if (!partial) {
 			partial = slab;
 			stat(s, ALLOC_FROM_PARTIAL);
-		} else {
-			put_cpu_partial(s, slab, 0);
-			stat(s, CPU_PARTIAL_NODE);
-			partial_slabs++;
+
+			/* Fill cpu partial if needed from next iteration, or break */
+			if (kmem_cache_has_cpu_partial(s))
+				continue;
+			else
+				break;
 		}
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-		if (partial_slabs > s->cpu_partial_slabs / 2)
-			break;
-#else
-		break;
-#endif
 
+		put_cpu_partial(s, slab, 0);
+		stat(s, CPU_PARTIAL_NODE);
+		partial_slabs++;
+
+		if (partial_slabs > slub_get_cpu_partial(s) / 2)
+			break;
 	}
 	spin_unlock_irqrestore(&n->list_lock, flags);
 	return partial;
-- 
2.27.0



* [PATCH 4/4] mm/slub: don't read slab->cpu_partial_slabs directly
  2024-03-31  2:19 [PATCH 0/4] SLUB: improve filling cpu partial a bit in get_partial_node() xiongwei.song
                   ` (2 preceding siblings ...)
  2024-03-31  2:19 ` [PATCH 3/4] mm/slub: simplify get_partial_node() xiongwei.song
@ 2024-03-31  2:19 ` xiongwei.song
  2024-04-02  9:42   ` Vlastimil Babka
  3 siblings, 1 reply; 13+ messages in thread
From: xiongwei.song @ 2024-03-31  2:19 UTC (permalink / raw)
  To: vbabka, rientjes, cl, penberg, iamjoonsoo.kim, akpm,
	roman.gushchin, 42.hyeyoo
  Cc: linux-mm, linux-kernel, chengming.zhou

From: Xiongwei Song <xiongwei.song@windriver.com>

We can use slub_get_cpu_partial() to read cpu_partial_slabs.

Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
---
 mm/slub.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index ec91c7435d4e..47ea06d6feae 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2966,7 +2966,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain)
 	oldslab = this_cpu_read(s->cpu_slab->partial);
 
 	if (oldslab) {
-		if (drain && oldslab->slabs >= s->cpu_partial_slabs) {
+		if (drain && oldslab->slabs >= slub_get_cpu_partial(s)) {
 			/*
 			 * Partial array is full. Move the existing set to the
 			 * per node partial list. Postpone the actual unfreezing
-- 
2.27.0



* Re: [PATCH 3/4] mm/slub: simplify get_partial_node()
  2024-03-31  2:19 ` [PATCH 3/4] mm/slub: simplify get_partial_node() xiongwei.song
@ 2024-04-02  9:41   ` Vlastimil Babka
  2024-04-03  0:37     ` Song, Xiongwei
  0 siblings, 1 reply; 13+ messages in thread
From: Vlastimil Babka @ 2024-04-02  9:41 UTC (permalink / raw)
  To: xiongwei.song, rientjes, cl, penberg, iamjoonsoo.kim, akpm,
	roman.gushchin, 42.hyeyoo
  Cc: linux-mm, linux-kernel, chengming.zhou

On 3/31/24 4:19 AM, xiongwei.song@windriver.com wrote:
> From: Xiongwei Song <xiongwei.song@windriver.com>
> 
> The break conditions can be made more readable and simple.
> 
> We can check whether we need to fill the cpu partial list after getting
> the first partial slab. If kmem_cache_has_cpu_partial() returns true, we
> fill the cpu partial list from the next iteration; otherwise we break
> out of the loop.
> 
> Then we can remove the preprocessor conditional on
> CONFIG_SLUB_CPU_PARTIAL. Use the dummy slub_get_cpu_partial() to keep
> the compiler silent.
> 
> Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
> ---
>  mm/slub.c | 22 ++++++++++++----------
>  1 file changed, 12 insertions(+), 10 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 590cc953895d..ec91c7435d4e 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2614,18 +2614,20 @@ static struct slab *get_partial_node(struct kmem_cache *s,
>  		if (!partial) {
>  			partial = slab;
>  			stat(s, ALLOC_FROM_PARTIAL);
> -		} else {
> -			put_cpu_partial(s, slab, 0);
> -			stat(s, CPU_PARTIAL_NODE);
> -			partial_slabs++;
> +
> +			/* Fill cpu partial if needed from next iteration, or break */
> +			if (kmem_cache_has_cpu_partial(s))

That kinda puts back the check removed in patch 1, although only in the
first iteration. Still not ideal.

> +				continue;
> +			else
> +				break;
>  		}
> -#ifdef CONFIG_SLUB_CPU_PARTIAL
> -		if (partial_slabs > s->cpu_partial_slabs / 2)
> -			break;
> -#else
> -		break;
> -#endif

I'd suggest, instead of the changes done in this patch, only changing this
part above to:

	if ((slub_get_cpu_partial(s) == 0) ||
	    (partial_slabs > slub_get_cpu_partial(s) / 2))
		break;

That gets rid of the #ifdef and also fixes a weird corner case where, if we
set cpu_partial_slabs to 0 from sysfs, we still allocate at least one here.

It could be tempting to use >= instead of > to achieve the same effect but
that would have unintended performance effects that would best be evaluated
separately.
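
For illustration, a sketch of how the tail of the loop would then read,
keeping the if/else from before this patch (an assumed shape, not the
committed code):

		if (!partial) {
			partial = slab;
			stat(s, ALLOC_FROM_PARTIAL);
		} else {
			put_cpu_partial(s, slab, 0);
			stat(s, CPU_PARTIAL_NODE);
			partial_slabs++;
		}

		/* covers both "no cpu partial list" and the fill limit */
		if ((slub_get_cpu_partial(s) == 0) ||
		    (partial_slabs > slub_get_cpu_partial(s) / 2))
			break;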

>  
> +		put_cpu_partial(s, slab, 0);
> +		stat(s, CPU_PARTIAL_NODE);
> +		partial_slabs++;
> +
> +		if (partial_slabs > slub_get_cpu_partial(s) / 2)
> +			break;
>  	}
>  	spin_unlock_irqrestore(&n->list_lock, flags);
>  	return partial;



* Re: [PATCH 4/4] mm/slub: don't read slab->cpu_partial_slabs directly
  2024-03-31  2:19 ` [PATCH 4/4] mm/slub: don't read slab->cpu_partial_slabs directly xiongwei.song
@ 2024-04-02  9:42   ` Vlastimil Babka
  2024-04-03  0:11     ` Song, Xiongwei
  0 siblings, 1 reply; 13+ messages in thread
From: Vlastimil Babka @ 2024-04-02  9:42 UTC (permalink / raw)
  To: xiongwei.song, rientjes, cl, penberg, iamjoonsoo.kim, akpm,
	roman.gushchin, 42.hyeyoo
  Cc: linux-mm, linux-kernel, chengming.zhou

On 3/31/24 4:19 AM, xiongwei.song@windriver.com wrote:
> From: Xiongwei Song <xiongwei.song@windriver.com>
> 
> We can use slub_get_cpu_partial() to read cpu_partial_slabs.

This code is under the #ifdef, so it's not necessary to use the wrapper; it
only makes it harder to read imho.

> Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
> ---
>  mm/slub.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index ec91c7435d4e..47ea06d6feae 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2966,7 +2966,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain)
>  	oldslab = this_cpu_read(s->cpu_slab->partial);
>  
>  	if (oldslab) {
> -		if (drain && oldslab->slabs >= s->cpu_partial_slabs) {
> +		if (drain && oldslab->slabs >= slub_get_cpu_partial(s)) {
>  			/*
>  			 * Partial array is full. Move the existing set to the
>  			 * per node partial list. Postpone the actual unfreezing



* Re: [PATCH 1/4] mm/slub: remove the check of !kmem_cache_has_cpu_partial()
  2024-03-31  2:19 ` [PATCH 1/4] mm/slub: remove the check of !kmem_cache_has_cpu_partial() xiongwei.song
@ 2024-04-02  9:45   ` Vlastimil Babka
  2024-04-03  0:10     ` Song, Xiongwei
  0 siblings, 1 reply; 13+ messages in thread
From: Vlastimil Babka @ 2024-04-02  9:45 UTC (permalink / raw)
  To: xiongwei.song, rientjes, cl, penberg, iamjoonsoo.kim, akpm,
	roman.gushchin, 42.hyeyoo
  Cc: linux-mm, linux-kernel, chengming.zhou

On 3/31/24 4:19 AM, xiongwei.song@windriver.com wrote:
> From: Xiongwei Song <xiongwei.song@windriver.com>
> 
> The check of !kmem_cache_has_cpu_partial(s) with
> CONFIG_SLUB_CPU_PARTIAL enabled here is always false. We already know
> the result from the earlier kmem_cache_debug() check, so we can remove it.

Could we be more explicit? We have already checked kmem_cache_debug() earlier,
and if it was true, then we either continued or broke from the loop, so we
can't reach this code in that case and don't need to check
kmem_cache_debug() as part of kmem_cache_has_cpu_partial() again.
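
For reference, the earlier check sits at the top of the loop; a simplified
sketch of get_partial_node() in mm/slub.c of this period:

	list_for_each_entry_safe(slab, slab2, &n->partial, slab_list) {
		if (!pfmemalloc_match(slab, pc->flags))
			continue;

		/* debug caches take this path and never reach the code below */
		if (IS_ENABLED(CONFIG_SLUB_TINY) || kmem_cache_debug(s)) {
			void *object = alloc_single_from_partial(s, n, slab,
								 pc->orig_size);
			if (object) {
				partial = slab;
				pc->object = object;
				break;
			}
			continue;
		}
		...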

> Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
> ---
>  mm/slub.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 1bb2a93cf7b6..059922044a4f 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2610,8 +2610,7 @@ static struct slab *get_partial_node(struct kmem_cache *s,
>  			partial_slabs++;
>  		}
>  #ifdef CONFIG_SLUB_CPU_PARTIAL
> -		if (!kmem_cache_has_cpu_partial(s)
> -			|| partial_slabs > s->cpu_partial_slabs / 2)
> +		if (partial_slabs > s->cpu_partial_slabs / 2)
>  			break;
>  #else
>  		break;



* RE: [PATCH 1/4] mm/slub: remove the check of !kmem_cache_has_cpu_partial()
  2024-04-02  9:45   ` Vlastimil Babka
@ 2024-04-03  0:10     ` Song, Xiongwei
  0 siblings, 0 replies; 13+ messages in thread
From: Song, Xiongwei @ 2024-04-03  0:10 UTC (permalink / raw)
  To: Vlastimil Babka, rientjes, cl, penberg, iamjoonsoo.kim, akpm,
	roman.gushchin, 42.hyeyoo
  Cc: linux-mm, linux-kernel, chengming.zhou

> 
> On 3/31/24 4:19 AM, xiongwei.song@windriver.com wrote:
> > From: Xiongwei Song <xiongwei.song@windriver.com>
> >
> > The check of !kmem_cache_has_cpu_partial(s) with
> > CONFIG_SLUB_CPU_PARTIAL enabled here is always false. We already know
> > the result from the earlier kmem_cache_debug() check, so we can remove it.
> 
> Could we be more explicit? We have already checked kmem_cache_debug() earlier,
> and if it was true, then we either continued or broke from the loop, so we
> can't reach this code in that case and don't need to check
> kmem_cache_debug() as part of kmem_cache_has_cpu_partial() again.

Ok, looks better. Will update.

Thanks,
Xiongwei

> 
> > Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
> > ---
> >  mm/slub.c | 3 +--
> >  1 file changed, 1 insertion(+), 2 deletions(-)
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 1bb2a93cf7b6..059922044a4f 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -2610,8 +2610,7 @@ static struct slab *get_partial_node(struct kmem_cache *s,
> >                       partial_slabs++;
> >               }
> >  #ifdef CONFIG_SLUB_CPU_PARTIAL
> > -             if (!kmem_cache_has_cpu_partial(s)
> > -                     || partial_slabs > s->cpu_partial_slabs / 2)
> > +             if (partial_slabs > s->cpu_partial_slabs / 2)
> >                       break;
> >  #else
> >               break;



* RE: [PATCH 4/4] mm/slub: don't read slab->cpu_partial_slabs directly
  2024-04-02  9:42   ` Vlastimil Babka
@ 2024-04-03  0:11     ` Song, Xiongwei
  0 siblings, 0 replies; 13+ messages in thread
From: Song, Xiongwei @ 2024-04-03  0:11 UTC (permalink / raw)
  To: Vlastimil Babka, rientjes, cl, penberg, iamjoonsoo.kim, akpm,
	roman.gushchin, 42.hyeyoo
  Cc: linux-mm, linux-kernel, chengming.zhou

> 
> On 3/31/24 4:19 AM, xiongwei.song@windriver.com wrote:
> > From: Xiongwei Song <xiongwei.song@windriver.com>
> >
> > We can use slub_get_cpu_partial() to read cpu_partial_slabs.
> 
> This code is under the #ifdef, so it's not necessary to use the wrapper; it
> only makes it harder to read imho.

Ok, got it. Will drop this one.

Thanks.

> 
> > Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
> > ---
> >  mm/slub.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index ec91c7435d4e..47ea06d6feae 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -2966,7 +2966,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct slab
> *slab, int drain)
> >       oldslab = this_cpu_read(s->cpu_slab->partial);
> >
> >       if (oldslab) {
> > -             if (drain && oldslab->slabs >= s->cpu_partial_slabs) {
> > +             if (drain && oldslab->slabs >= slub_get_cpu_partial(s)) {
> >                       /*
> >                        * Partial array is full. Move the existing set to the
> >                        * per node partial list. Postpone the actual unfreezing
> 



* RE: [PATCH 3/4] mm/slub: simplify get_partial_node()
  2024-04-02  9:41   ` Vlastimil Babka
@ 2024-04-03  0:37     ` Song, Xiongwei
  2024-04-03  7:25       ` Vlastimil Babka
  0 siblings, 1 reply; 13+ messages in thread
From: Song, Xiongwei @ 2024-04-03  0:37 UTC (permalink / raw)
  To: Vlastimil Babka, rientjes, cl, penberg, iamjoonsoo.kim, akpm,
	roman.gushchin, 42.hyeyoo
  Cc: linux-mm, linux-kernel, chengming.zhou

> 
> On 3/31/24 4:19 AM, xiongwei.song@windriver.com wrote:
> > From: Xiongwei Song <xiongwei.song@windriver.com>
> >
> > The break conditions can be made more readable and simple.
> >
> > We can check whether we need to fill the cpu partial list after getting
> > the first partial slab. If kmem_cache_has_cpu_partial() returns true, we
> > fill the cpu partial list from the next iteration; otherwise we break
> > out of the loop.
> >
> > Then we can remove the preprocessor conditional on
> > CONFIG_SLUB_CPU_PARTIAL. Use the dummy slub_get_cpu_partial() to keep
> > the compiler silent.
> >
> > Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
> > ---
> >  mm/slub.c | 22 ++++++++++++----------
> >  1 file changed, 12 insertions(+), 10 deletions(-)
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 590cc953895d..ec91c7435d4e 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -2614,18 +2614,20 @@ static struct slab *get_partial_node(struct kmem_cache *s,
> >               if (!partial) {
> >                       partial = slab;
> >                       stat(s, ALLOC_FROM_PARTIAL);
> > -             } else {
> > -                     put_cpu_partial(s, slab, 0);
> > -                     stat(s, CPU_PARTIAL_NODE);
> > -                     partial_slabs++;
> > +
> > +                     /* Fill cpu partial if needed from next iteration, or break */
> > +                     if (kmem_cache_has_cpu_partial(s))
> 
> That kinda puts back the check removed in patch 1, although only in the
> first iteration. Still not ideal.
> 
> > +                             continue;
> > +                     else
> > +                             break;
> >               }
> > -#ifdef CONFIG_SLUB_CPU_PARTIAL
> > -             if (partial_slabs > s->cpu_partial_slabs / 2)
> > -                     break;
> > -#else
> > -             break;
> > -#endif
> 
> I'd suggest, instead of the changes done in this patch, only changing this
> part above to:
> 
>         if ((slub_get_cpu_partial(s) == 0) ||
>             (partial_slabs > slub_get_cpu_partial(s) / 2))
>                 break;
> 
> That gets rid of the #ifdef and also fixes a weird corner case where, if we
> set cpu_partial_slabs to 0 from sysfs, we still allocate at least one here.

Oh, yes. Will update.

> 
> It could be tempting to use >= instead of > to achieve the same effect but
> that would have unintended performance effects that would best be evaluated
> separately.

I can run a test to measure Amean changes. But in terms of x86 assembly, there
should not be extra instructions with ">=".

I did a simple test: for ">=" the compiler uses the "jle" instruction, while
"jl" is used for ">". No extra instructions are involved, so there should be
no performance effect on x86.

Thanks,
Xiongwei

> 
> >
> > +             put_cpu_partial(s, slab, 0);
> > +             stat(s, CPU_PARTIAL_NODE);
> > +             partial_slabs++;
> > +
> > +             if (partial_slabs > slub_get_cpu_partial(s) / 2)
> > +                     break;
> >       }
> >       spin_unlock_irqrestore(&n->list_lock, flags);
> >       return partial;



* Re: [PATCH 3/4] mm/slub: simplify get_partial_node()
  2024-04-03  0:37     ` Song, Xiongwei
@ 2024-04-03  7:25       ` Vlastimil Babka
  2024-04-03 11:15         ` Song, Xiongwei
  0 siblings, 1 reply; 13+ messages in thread
From: Vlastimil Babka @ 2024-04-03  7:25 UTC (permalink / raw)
  To: Song, Xiongwei, rientjes, cl, penberg, iamjoonsoo.kim, akpm,
	roman.gushchin, 42.hyeyoo
  Cc: linux-mm, linux-kernel, chengming.zhou

On 4/3/24 2:37 AM, Song, Xiongwei wrote:
>> 
>> 
>> It could be tempting to use >= instead of > to achieve the same effect but
>> that would have unintended performance effects that would best be evaluated
>> separately.
> 
> I can run a test to measure Amean changes. But in terms of x86 assembly, there
> should not be extra instructions with ">=".
> 
> I did a simple test: for ">=" the compiler uses the "jle" instruction, while
> "jl" is used for ">". No extra instructions are involved, so there should be
> no performance effect on x86.

Right, I didn't mean the generated code, but how the difference in the
comparison affects how many slabs would be put on the cpu partial list
here.
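
For illustration (assumed numbers): with cpu_partial_slabs == 6, the ">"
check breaks out after a 4th slab has been put on the cpu partial list,
while ">=" would break after the 3rd, so each refill would take one fewer
slab from the node partial list.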

> Thanks,
> Xiongwei
> 
>> 
>> >
>> > +             put_cpu_partial(s, slab, 0);
>> > +             stat(s, CPU_PARTIAL_NODE);
>> > +             partial_slabs++;
>> > +
>> > +             if (partial_slabs > slub_get_cpu_partial(s) / 2)
>> > +                     break;
>> >       }
>> >       spin_unlock_irqrestore(&n->list_lock, flags);
>> >       return partial;
> 



* RE: [PATCH 3/4] mm/slub: simplify get_partial_node()
  2024-04-03  7:25       ` Vlastimil Babka
@ 2024-04-03 11:15         ` Song, Xiongwei
  0 siblings, 0 replies; 13+ messages in thread
From: Song, Xiongwei @ 2024-04-03 11:15 UTC (permalink / raw)
  To: Vlastimil Babka, rientjes, cl, penberg, iamjoonsoo.kim, akpm,
	roman.gushchin, 42.hyeyoo
  Cc: linux-mm, linux-kernel, chengming.zhou

> 
> On 4/3/24 2:37 AM, Song, Xiongwei wrote:
> >>
> >>
> >> It could be tempting to use >= instead of > to achieve the same effect but
> >> that would have unintended performance effects that would best be evaluated
> >> separately.
> >
> > I can run a test to measure Amean changes. But in terms of x86 assembly, there
> > should not be extra instructions with ">=".
> >
> > I did a simple test: for ">=" the compiler uses the "jle" instruction, while
> > "jl" is used for ">". No extra instructions are involved, so there should be
> > no performance effect on x86.
> 
> Right, I didn't mean the generated code, but how the difference in the
> comparison affects how many slabs would be put on the cpu partial list
> here.

Got it. Will do a measurement for it.

Thanks,
Xiongwei

