linux-kernel.vger.kernel.org archive mirror
* [Patch v3] mm/slub: Remove repeated action in calculate_order()
@ 2022-04-30  0:25 Wonhyuk Yang
  2022-05-02 10:00 ` Vlastimil Babka
  0 siblings, 1 reply; 4+ messages in thread
From: Wonhyuk Yang @ 2022-04-30  0:25 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin
  Cc: Wonhyuk Yang, Hyeonggon Yoo, linux-mm, linux-kernel

To calculate the order, calc_slab_order() is called repeatedly with a
changing fract_leftover. Thus, the branch that does not depend on
fract_leftover is executed repeatedly as well. Make it run only once.

Plus, when min_objects reaches 1, we set fract_leftover to 1. In
this case, we can calculate the order as max(slub_min_order,
get_order(size)) instead of calling calc_slab_order().

No functional impact expected.

Signed-off-by: Wonhyuk Yang <vvghjk1234@gmail.com>
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---

 mm/slub.c | 18 +++++++-----------
 1 file changed, 7 insertions(+), 11 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index ed5c2c03a47a..1fe4d62b72b8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3795,9 +3795,6 @@ static inline unsigned int calc_slab_order(unsigned int size,
 	unsigned int min_order = slub_min_order;
 	unsigned int order;
 
-	if (order_objects(min_order, size) > MAX_OBJS_PER_PAGE)
-		return get_order(size * MAX_OBJS_PER_PAGE) - 1;
-
 	for (order = max(min_order, (unsigned int)get_order(min_objects * size));
 			order <= max_order; order++) {
 
@@ -3820,6 +3817,11 @@ static inline int calculate_order(unsigned int size)
 	unsigned int max_objects;
 	unsigned int nr_cpus;
 
+	if (unlikely(order_objects(slub_min_order, size) > MAX_OBJS_PER_PAGE)) {
+		order = get_order(size * MAX_OBJS_PER_PAGE) - 1;
+		goto out;
+	}
+
 	/*
 	 * Attempt to find best configuration for a slab. This
 	 * works by first attempting to generate a layout with
@@ -3865,14 +3867,8 @@ static inline int calculate_order(unsigned int size)
 	 * We were unable to place multiple objects in a slab. Now
 	 * lets see if we can place a single object there.
 	 */
-	order = calc_slab_order(size, 1, slub_max_order, 1);
-	if (order <= slub_max_order)
-		return order;
-
-	/*
-	 * Doh this slab cannot be placed using slub_max_order.
-	 */
-	order = calc_slab_order(size, 1, MAX_ORDER, 1);
+	order = max_t(unsigned int, slub_min_order, get_order(size));
+out:
 	if (order < MAX_ORDER)
 		return order;
 	return -ENOSYS;
-- 
2.30.2



* Re: [Patch v3] mm/slub: Remove repeated action in calculate_order()
  2022-04-30  0:25 [Patch v3] mm/slub: Remove repeated action in calculate_order() Wonhyuk Yang
@ 2022-05-02 10:00 ` Vlastimil Babka
  2022-05-09  9:42   ` Wonhyuk Yang
  0 siblings, 1 reply; 4+ messages in thread
From: Vlastimil Babka @ 2022-05-02 10:00 UTC (permalink / raw)
  To: Wonhyuk Yang, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Roman Gushchin
  Cc: Hyeonggon Yoo, linux-mm, linux-kernel

On 4/30/22 02:25, Wonhyuk Yang wrote:
> To calculate the order, calc_slab_order() is called repeatedly with a
> changing fract_leftover. Thus, the branch that does not depend on
> fract_leftover is executed repeatedly as well. Make it run only once.
> 
> Plus, when min_objects reaches 1, we set fract_leftover to 1. In
> this case, we can calculate the order as max(slub_min_order,
> get_order(size)) instead of calling calc_slab_order().
> 
> No functional impact expected.
> 
> Signed-off-by: Wonhyuk Yang <vvghjk1234@gmail.com>
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> ---
> 
>  mm/slub.c | 18 +++++++-----------
>  1 file changed, 7 insertions(+), 11 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index ed5c2c03a47a..1fe4d62b72b8 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3795,9 +3795,6 @@ static inline unsigned int calc_slab_order(unsigned int size,
>  	unsigned int min_order = slub_min_order;
>  	unsigned int order;
>  
> -	if (order_objects(min_order, size) > MAX_OBJS_PER_PAGE)
> -		return get_order(size * MAX_OBJS_PER_PAGE) - 1;
> -
>  	for (order = max(min_order, (unsigned int)get_order(min_objects * size));
>  			order <= max_order; order++) {
>  
> @@ -3820,6 +3817,11 @@ static inline int calculate_order(unsigned int size)
>  	unsigned int max_objects;
>  	unsigned int nr_cpus;
>  
> +	if (unlikely(order_objects(slub_min_order, size) > MAX_OBJS_PER_PAGE)) {
> +		order = get_order(size * MAX_OBJS_PER_PAGE) - 1;
> +		goto out;
> +	}

Hm, interestingly, both before and after your patch MAX_OBJS_PER_PAGE might
theoretically be overflowed not at slub_min_order, but at the higher orders
tried later. That seems to be prevented only as a side effect of the
fragmentation being close to none, so the higher orders are never attempted.
It would maybe be less confusing to check that explicitly. Even if that's
wasteful, this is not really perf critical code.

> +
>  	/*
>  	 * Attempt to find best configuration for a slab. This
>  	 * works by first attempting to generate a layout with
> @@ -3865,14 +3867,8 @@ static inline int calculate_order(unsigned int size)
>  	 * We were unable to place multiple objects in a slab. Now
>  	 * lets see if we can place a single object there.
>  	 */
> -	order = calc_slab_order(size, 1, slub_max_order, 1);
> -	if (order <= slub_max_order)
> -		return order;
> -
> -	/*
> -	 * Doh this slab cannot be placed using slub_max_order.
> -	 */
> -	order = calc_slab_order(size, 1, MAX_ORDER, 1);
> +	order = max_t(unsigned int, slub_min_order, get_order(size));

If we failed to assign an order above, then AFAICS it means even
slub_min_order will not give us more than 1 object per slab. Thus it doesn't
make sense to use it in a max() formula, and we can just use get_order(), no?

> +out:
>  	if (order < MAX_ORDER)
>  		return order;
>  	return -ENOSYS;



* Re: [Patch v3] mm/slub: Remove repeated action in calculate_order()
  2022-05-02 10:00 ` Vlastimil Babka
@ 2022-05-09  9:42   ` Wonhyuk Yang
  2022-05-09 11:43     ` Vlastimil Babka
  0 siblings, 1 reply; 4+ messages in thread
From: Wonhyuk Yang @ 2022-05-09  9:42 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Roman Gushchin, Hyeonggon Yoo, linux-mm,
	linux-kernel

On Mon, May 2, 2022 at 7:00 PM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 4/30/22 02:25, Wonhyuk Yang wrote:
> > To calculate the order, calc_slab_order() is called repeatedly with a
> > changing fract_leftover. Thus, the branch that does not depend on
> > fract_leftover is executed repeatedly as well. Make it run only once.
> >
> > Plus, when min_objects reaches 1, we set fract_leftover to 1. In
> > this case, we can calculate the order as max(slub_min_order,
> > get_order(size)) instead of calling calc_slab_order().
> >
> > No functional impact expected.
> >
> > Signed-off-by: Wonhyuk Yang <vvghjk1234@gmail.com>
> > Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > ---
> >
> >  mm/slub.c | 18 +++++++-----------
> >  1 file changed, 7 insertions(+), 11 deletions(-)
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index ed5c2c03a47a..1fe4d62b72b8 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3795,9 +3795,6 @@ static inline unsigned int calc_slab_order(unsigned int size,
> >       unsigned int min_order = slub_min_order;
> >       unsigned int order;
> >
> > -     if (order_objects(min_order, size) > MAX_OBJS_PER_PAGE)
> > -             return get_order(size * MAX_OBJS_PER_PAGE) - 1;
> > -
> >       for (order = max(min_order, (unsigned int)get_order(min_objects * size));
> >                       order <= max_order; order++) {
> >
> > @@ -3820,6 +3817,11 @@ static inline int calculate_order(unsigned int size)
> >       unsigned int max_objects;
> >       unsigned int nr_cpus;
> >
> > +     if (unlikely(order_objects(slub_min_order, size) > MAX_OBJS_PER_PAGE)) {
> > +             order = get_order(size * MAX_OBJS_PER_PAGE) - 1;
> > +             goto out;
> > +     }
>
> Hm, interestingly, both before and after your patch MAX_OBJS_PER_PAGE might
> theoretically be overflowed not at slub_min_order, but at the higher orders
> tried later. That seems to be prevented only as a side effect of the
> fragmentation being close to none, so the higher orders are never attempted.
> It would maybe be less confusing to check that explicitly. Even if that's
> wasteful, this is not really perf critical code.

Yes, I agree that checking the overflow of the object count explicitly is
better even if it is almost impossible to hit. But it was checked repeatedly
by calling calc_slab_order(), and that seems unnecessary, doesn't it?

>
> > +
> >       /*
> >        * Attempt to find best configuration for a slab. This
> >        * works by first attempting to generate a layout with
> > @@ -3865,14 +3867,8 @@ static inline int calculate_order(unsigned int size)
> >        * We were unable to place multiple objects in a slab. Now
> >        * lets see if we can place a single object there.
> >        */
> > -     order = calc_slab_order(size, 1, slub_max_order, 1);
> > -     if (order <= slub_max_order)
> > -             return order;
> > -
> > -     /*
> > -      * Doh this slab cannot be placed using slub_max_order.
> > -      */
> > -     order = calc_slab_order(size, 1, MAX_ORDER, 1);
> > +     order = max_t(unsigned int, slub_min_order, get_order(size));
>
> If we failed to assign an order above, then AFAICS it means even
> slub_min_order will not give us more than 1 object per slab. Thus it doesn't
> make sense to use it in a max() formula, and we can just use get_order(), no?

That sounds reasonable. When we reach here, we don't need to keep
slub_min_order.

>
> > +out:
> >       if (order < MAX_ORDER)
> >               return order;
> >       return -ENOSYS;
>


* Re: [Patch v3] mm/slub: Remove repeated action in calculate_order()
  2022-05-09  9:42   ` Wonhyuk Yang
@ 2022-05-09 11:43     ` Vlastimil Babka
  0 siblings, 0 replies; 4+ messages in thread
From: Vlastimil Babka @ 2022-05-09 11:43 UTC (permalink / raw)
  To: Wonhyuk Yang
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Roman Gushchin, Hyeonggon Yoo, linux-mm,
	linux-kernel

On 5/9/22 11:42, Wonhyuk Yang wrote:

>> > +     if (unlikely(order_objects(slub_min_order, size) > MAX_OBJS_PER_PAGE)) {
>> > +             order = get_order(size * MAX_OBJS_PER_PAGE) - 1;
>> > +             goto out;
>> > +     }
>>
>> Hm, interestingly, both before and after your patch MAX_OBJS_PER_PAGE might
>> theoretically be overflowed not at slub_min_order, but at the higher orders
>> tried later. That seems to be prevented only as a side effect of the
>> fragmentation being close to none, so the higher orders are never attempted.
>> It would maybe be less confusing to check that explicitly. Even if that's
>> wasteful, this is not really perf critical code.
> 
> Yes, I agree that checking the overflow of the object count explicitly is
> better even if it is almost impossible to hit. But it was checked repeatedly
> by calling calc_slab_order(), and that seems unnecessary, doesn't it?

Yeah I'm OK with the goal of your patch.



