* [PATCH] binder: Use kmem_cache for binder_thread
@ 2019-08-29  5:49 Peikan Tsai
  2019-08-29  6:42 ` Greg KH
  2019-08-29 13:43 ` Joel Fernandes
  0 siblings, 2 replies; 11+ messages in thread
From: Peikan Tsai @ 2019-08-29  5:49 UTC
  To: gregkh, arve, tkjos, maco, joel, christian; +Cc: devel, linux-kernel

Hi,

kzalloc() currently allocates 512 bytes for each binder_thread, but
struct binder_thread is a fixed 304 bytes. Creating a dedicated
kmem_cache for binder_thread saves 208 bytes per object.

Signed-off-by: Peikan Tsai <peikantsai@gmail.com>
---
 drivers/android/binder.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index dc1c83eafc22..043e0ebd0fe7 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -87,6 +87,8 @@ static struct dentry *binder_debugfs_dir_entry_root;
 static struct dentry *binder_debugfs_dir_entry_proc;
 static atomic_t binder_last_id;

+static struct kmem_cache *binder_thread_cachep;
+
 static int proc_show(struct seq_file *m, void *unused);
 DEFINE_SHOW_ATTRIBUTE(proc);

@@ -4696,14 +4698,15 @@ static struct binder_thread *binder_get_thread(struct binder_proc *proc)
 	thread = binder_get_thread_ilocked(proc, NULL);
 	binder_inner_proc_unlock(proc);
 	if (!thread) {
-		new_thread = kzalloc(sizeof(*thread), GFP_KERNEL);
+		new_thread = kmem_cache_zalloc(binder_thread_cachep,
+					       GFP_KERNEL);
 		if (new_thread == NULL)
 			return NULL;
 		binder_inner_proc_lock(proc);
 		thread = binder_get_thread_ilocked(proc, new_thread);
 		binder_inner_proc_unlock(proc);
 		if (thread != new_thread)
-			kfree(new_thread);
+			kmem_cache_free(binder_thread_cachep, new_thread);
 	}
 	return thread;
 }
@@ -4723,7 +4726,7 @@ static void binder_free_thread(struct binder_thread *thread)
 	BUG_ON(!list_empty(&thread->todo));
 	binder_stats_deleted(BINDER_STAT_THREAD);
 	binder_proc_dec_tmpref(thread->proc);
-	kfree(thread);
+	kmem_cache_free(binder_thread_cachep, thread);
 }

 static int binder_thread_release(struct binder_proc *proc,
@@ -6095,6 +6098,12 @@ static int __init binder_init(void)
 	if (ret)
 		return ret;

+	binder_thread_cachep = kmem_cache_create("binder_thread",
+						 sizeof(struct binder_thread),
+						 0, 0, NULL);
+	if (!binder_thread_cachep)
+		return -ENOMEM;
+
 	atomic_set(&binder_transaction_log.cur, ~0U);
 	atomic_set(&binder_transaction_log_failed.cur, ~0U);

@@ -6167,6 +6176,7 @@ static int __init binder_init(void)

 err_alloc_device_names_failed:
 	debugfs_remove_recursive(binder_debugfs_dir_entry_root);
+	kmem_cache_destroy(binder_thread_cachep);

 	return ret;
 }
--
2.17.1



* Re: [PATCH] binder: Use kmem_cache for binder_thread
  2019-08-29  5:49 [PATCH] binder: Use kmem_cache for binder_thread Peikan Tsai
@ 2019-08-29  6:42 ` Greg KH
  2019-08-29 13:53   ` Joel Fernandes
  2019-08-29 13:43 ` Joel Fernandes
  1 sibling, 1 reply; 11+ messages in thread
From: Greg KH @ 2019-08-29  6:42 UTC
  To: Peikan Tsai; +Cc: arve, tkjos, maco, joel, christian, devel, linux-kernel

On Thu, Aug 29, 2019 at 01:49:53PM +0800, Peikan Tsai wrote:
> Hi,

No need for that in a changelog text :)

> kzalloc() currently allocates 512 bytes for each binder_thread, but
> struct binder_thread is a fixed 304 bytes. Creating a dedicated
> kmem_cache for binder_thread saves 208 bytes per object.

Are you _sure_ it really will save that much memory?  You want to do
allocations based on a nice alignment for lots of good reasons,
especially for something that needs quick accesses.

Did you test your change on a system that relies on binder and find any
speed improvement or decrease, and any actual memory savings?

If so, can you post your results?

thanks,

greg k-h


* Re: [PATCH] binder: Use kmem_cache for binder_thread
  2019-08-29  5:49 [PATCH] binder: Use kmem_cache for binder_thread Peikan Tsai
  2019-08-29  6:42 ` Greg KH
@ 2019-08-29 13:43 ` Joel Fernandes
  1 sibling, 0 replies; 11+ messages in thread
From: Joel Fernandes @ 2019-08-29 13:43 UTC
  To: Peikan Tsai; +Cc: gregkh, arve, tkjos, maco, christian, devel, linux-kernel

On Thu, Aug 29, 2019 at 01:49:53PM +0800, Peikan Tsai wrote:
> Hi,
> 
> kzalloc() currently allocates 512 bytes for each binder_thread, but
> struct binder_thread is a fixed 304 bytes. Creating a dedicated
> kmem_cache for binder_thread saves 208 bytes per object.

Awesome change and observation!!!

Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>

(Another thought: how did you discover this? Are you using some tools to
look into slab fragmentation?)

thanks,

 - Joel

> Signed-off-by: Peikan Tsai <peikantsai@gmail.com>
> ---
>  drivers/android/binder.c | 16 +++++++++++++---
>  1 file changed, 13 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/android/binder.c b/drivers/android/binder.c
> index dc1c83eafc22..043e0ebd0fe7 100644
> --- a/drivers/android/binder.c
> +++ b/drivers/android/binder.c
> @@ -87,6 +87,8 @@ static struct dentry *binder_debugfs_dir_entry_root;
>  static struct dentry *binder_debugfs_dir_entry_proc;
>  static atomic_t binder_last_id;
> 
> +static struct kmem_cache *binder_thread_cachep;
> +
>  static int proc_show(struct seq_file *m, void *unused);
>  DEFINE_SHOW_ATTRIBUTE(proc);
> 
> @@ -4696,14 +4698,15 @@ static struct binder_thread *binder_get_thread(struct binder_proc *proc)
>  	thread = binder_get_thread_ilocked(proc, NULL);
>  	binder_inner_proc_unlock(proc);
>  	if (!thread) {
> -		new_thread = kzalloc(sizeof(*thread), GFP_KERNEL);
> +		new_thread = kmem_cache_zalloc(binder_thread_cachep,
> +					       GFP_KERNEL);
>  		if (new_thread == NULL)
>  			return NULL;
>  		binder_inner_proc_lock(proc);
>  		thread = binder_get_thread_ilocked(proc, new_thread);
>  		binder_inner_proc_unlock(proc);
>  		if (thread != new_thread)
> -			kfree(new_thread);
> +			kmem_cache_free(binder_thread_cachep, new_thread);
>  	}
>  	return thread;
>  }
> @@ -4723,7 +4726,7 @@ static void binder_free_thread(struct binder_thread *thread)
>  	BUG_ON(!list_empty(&thread->todo));
>  	binder_stats_deleted(BINDER_STAT_THREAD);
>  	binder_proc_dec_tmpref(thread->proc);
> -	kfree(thread);
> +	kmem_cache_free(binder_thread_cachep, thread);
>  }
> 
>  static int binder_thread_release(struct binder_proc *proc,
> @@ -6095,6 +6098,12 @@ static int __init binder_init(void)
>  	if (ret)
>  		return ret;
> 
> +	binder_thread_cachep = kmem_cache_create("binder_thread",
> +						 sizeof(struct binder_thread),
> +						 0, 0, NULL);
> +	if (!binder_thread_cachep)
> +		return -ENOMEM;
> +
>  	atomic_set(&binder_transaction_log.cur, ~0U);
>  	atomic_set(&binder_transaction_log_failed.cur, ~0U);
> 
> @@ -6167,6 +6176,7 @@ static int __init binder_init(void)
> 
>  err_alloc_device_names_failed:
>  	debugfs_remove_recursive(binder_debugfs_dir_entry_root);
> +	kmem_cache_destroy(binder_thread_cachep);
> 
>  	return ret;
>  }
> --
> 2.17.1
> 


* Re: [PATCH] binder: Use kmem_cache for binder_thread
  2019-08-29  6:42 ` Greg KH
@ 2019-08-29 13:53   ` Joel Fernandes
  2019-08-29 15:27     ` Christian Brauner
  0 siblings, 1 reply; 11+ messages in thread
From: Joel Fernandes @ 2019-08-29 13:53 UTC
  To: Greg KH; +Cc: Peikan Tsai, arve, tkjos, maco, christian, devel, linux-kernel

On Thu, Aug 29, 2019 at 08:42:29AM +0200, Greg KH wrote:
> On Thu, Aug 29, 2019 at 01:49:53PM +0800, Peikan Tsai wrote:
[snip] 
> > kzalloc() currently allocates 512 bytes for each binder_thread, but
> > struct binder_thread is a fixed 304 bytes. Creating a dedicated
> > kmem_cache for binder_thread saves 208 bytes per object.
> 
> Are you _sure_ it really will save that much memory?  You want to do
> allocations based on a nice alignment for lots of good reasons,
> especially for something that needs quick accesses.

Alignment can be done for slab allocations, kmem_cache_create() takes an
align argument. I am not sure what the default alignment of objects is
though (probably no default alignment). What is an optimal alignment in your
view?
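
For reference, a minimal untested sketch of passing an explicit alignment
(the third argument to kmem_cache_create()); L1_CACHE_BYTES is just one
possible choice here and 'cachep' a placeholder:

	/* align every binder_thread object to the start of a cacheline */
	cachep = kmem_cache_create("binder_thread",
				   sizeof(struct binder_thread),
				   L1_CACHE_BYTES,	/* align */
				   0, NULL);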

> Did you test your change on a system that relies on binder and find any
> speed improvement or decrease, and any actual memory savings?
> 
> If so, can you post your results?

That's certainly worth it and I thought of asking for the same, but spoke too
soon!

Independent note: In general I find the internal fragmentation with large
kmalloc()s troubling in the kernel :-(. Say you have 5000 objects, each
needing 300 bytes but taking a 512-byte allocation: 212 * 5000 wasted
bytes is around 1MB, which is arguably not negligible on a small memory
system, right?
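
(A quick untested sketch of how to see that rounding at runtime: ksize()
reports the slab size actually backing a kmalloc() allocation.)

	void *p = kmalloc(300, GFP_KERNEL);

	if (p) {
		/* prints 512: the 300-byte request fell into kmalloc-512 */
		pr_info("requested 300, ksize() = %zu\n", ksize(p));
		kfree(p);
	}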

thanks,

 - Joel



* Re: [PATCH] binder: Use kmem_cache for binder_thread
  2019-08-29 13:53   ` Joel Fernandes
@ 2019-08-29 15:27     ` Christian Brauner
  2019-08-29 18:59       ` Peikan Tsai
  2019-08-30  6:38       ` Greg KH
  0 siblings, 2 replies; 11+ messages in thread
From: Christian Brauner @ 2019-08-29 15:27 UTC
  To: Joel Fernandes
  Cc: Greg KH, Peikan Tsai, arve, tkjos, maco, devel, linux-kernel

On Thu, Aug 29, 2019 at 09:53:59AM -0400, Joel Fernandes wrote:
> On Thu, Aug 29, 2019 at 08:42:29AM +0200, Greg KH wrote:
> > On Thu, Aug 29, 2019 at 01:49:53PM +0800, Peikan Tsai wrote:
> [snip] 
> > > kzalloc() currently allocates 512 bytes for each binder_thread, but
> > > struct binder_thread is a fixed 304 bytes. Creating a dedicated
> > > kmem_cache for binder_thread saves 208 bytes per object.
> > 
> > Are you _sure_ it really will save that much memory?  You want to do
> > allocations based on a nice alignment for lots of good reasons,
> > especially for something that needs quick accesses.
> 
> Alignment can be done for slab allocations, kmem_cache_create() takes an
> align argument. I am not sure what the default alignment of objects is
> though (probably no default alignment). What is an optimal alignment in your
> view?

Probably SLAB_HWCACHE_ALIGN would make most sense.
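
Something like this in binder_init(), as an untested sketch just to
illustrate the flag:

	binder_thread_cachep = kmem_cache_create("binder_thread",
						 sizeof(struct binder_thread),
						 0, SLAB_HWCACHE_ALIGN, NULL);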

> 
> > Did you test your change on a system that relies on binder and find any
> > speed improvement or decrease, and any actual memory savings?
> > 
> > If so, can you post your results?
> 
> That's certainly worth it and I thought of asking for the same, but spoke too
> soon!

Yeah, it'd be interesting to see what difference this actually makes. 

Christian


* Re: [PATCH] binder: Use kmem_cache for binder_thread
  2019-08-29 15:27     ` Christian Brauner
@ 2019-08-29 18:59       ` Peikan Tsai
  2019-08-29 19:30         ` joel
  2019-08-30  6:39         ` Greg KH
  2019-08-30  6:38       ` Greg KH
  1 sibling, 2 replies; 11+ messages in thread
From: Peikan Tsai @ 2019-08-29 18:59 UTC
  To: Christian Brauner
  Cc: Joel Fernandes, Greg KH, arve, tkjos, maco, devel, linux-kernel

On Thu, Aug 29, 2019 at 05:27:22PM +0200, Christian Brauner wrote:
> On Thu, Aug 29, 2019 at 09:53:59AM -0400, Joel Fernandes wrote:
> > On Thu, Aug 29, 2019 at 08:42:29AM +0200, Greg KH wrote:
> > > On Thu, Aug 29, 2019 at 01:49:53PM +0800, Peikan Tsai wrote:
> > [snip] 
> > > > kzalloc() currently allocates 512 bytes for each binder_thread, but
> > > > struct binder_thread is a fixed 304 bytes. Creating a dedicated
> > > > kmem_cache for binder_thread saves 208 bytes per object.
> > > 
> > > Are you _sure_ it really will save that much memory?  You want to do
> > > allocations based on a nice alignment for lots of good reasons,
> > > especially for something that needs quick accesses.
> > 
> > Alignment can be done for slab allocations, kmem_cache_create() takes an
> > align argument. I am not sure what the default alignment of objects is
> > though (probably no default alignment). What is an optimal alignment in your
> > view?
> 
> Probably SLAB_HWCACHE_ALIGN would make most sense.
> 

Agreed. Thanks for your comments and suggestions.
I'll use SLAB_HWCACHE_ALIGN in patch v2.

> > 
> > > Did you test your change on a system that relies on binder and find any
> > > speed improvement or decrease, and any actual memory savings?
> > > 
> > > If so, can you post your results?
> > 
> > That's certainly worth it and I thought of asking for the same, but spoke too
> > soon!
> 
> Yeah, it'd be interesting to see what difference this actually makes. 
> 
> Christian

I tested this change on an Android device (arm) with AOSP kernel 4.19 and
observed the memory usage of binder_thread, but I haven't run a binder
benchmark yet.

On my platform the memory usage of binder_thread is reduced by about 90 KB,
as shown below:

        nr obj          obj size        total
	before: 624             512             319488 bytes
	after:  728             312             227136 bytes
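
(For reference, an untested sketch of one way to read such numbers,
assuming /proc/slabinfo is enabled on the device:)

	# 512-byte kmalloc bucket that backs the old kzalloc() path
	grep kmalloc-512 /proc/slabinfo
	# dedicated cache created by this patch
	grep binder_thread /proc/slabinfo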



* Re: [PATCH] binder: Use kmem_cache for binder_thread
  2019-08-29 18:59       ` Peikan Tsai
@ 2019-08-29 19:30         ` joel
  2019-08-30  6:39         ` Greg KH
  1 sibling, 0 replies; 11+ messages in thread
From: joel @ 2019-08-29 19:30 UTC
  To: Peikan Tsai, Christian Brauner
  Cc: Greg KH, arve, tkjos, maco, devel, linux-kernel



On August 29, 2019 2:59:01 PM EDT, Peikan Tsai <peikantsai@gmail.com> wrote:
>On Thu, Aug 29, 2019 at 05:27:22PM +0200, Christian Brauner wrote:
>> On Thu, Aug 29, 2019 at 09:53:59AM -0400, Joel Fernandes wrote:
>> > On Thu, Aug 29, 2019 at 08:42:29AM +0200, Greg KH wrote:
>> > > On Thu, Aug 29, 2019 at 01:49:53PM +0800, Peikan Tsai wrote:
>> > [snip]
>> > > > kzalloc() currently allocates 512 bytes for each binder_thread, but
>> > > > struct binder_thread is a fixed 304 bytes. Creating a dedicated
>> > > > kmem_cache for binder_thread saves 208 bytes per object.
>> > >
>> > > Are you _sure_ it really will save that much memory?  You want to do
>> > > allocations based on a nice alignment for lots of good reasons,
>> > > especially for something that needs quick accesses.
>> >
>> > Alignment can be done for slab allocations, kmem_cache_create() takes
>> > an align argument. I am not sure what the default alignment of objects
>> > is though (probably no default alignment). What is an optimal alignment
>> > in your view?
>>
>> Probably SLAB_HWCACHE_ALIGN would make most sense.
>>
>
>Agreed. Thanks for your comments and suggestions.
>I'll use SLAB_HWCACHE_ALIGN in patch v2.
>
>> >
>> > > Did you test your change on a system that relies on binder and find
>> > > any speed improvement or decrease, and any actual memory savings?
>> > >
>> > > If so, can you post your results?
>> >
>> > That's certainly worth it and I thought of asking for the same, but
>> > spoke too soon!
>>
>> Yeah, it'd be interesting to see what difference this actually makes.
>>
>> Christian
>
>I tested this change on an Android device (arm) with AOSP kernel 4.19 and
>observed the memory usage of binder_thread, but I haven't run a binder
>benchmark yet.
>
>On my platform the memory usage of binder_thread is reduced by about 90 KB,
>as shown below:
>
>        nr obj          obj size        total
>	before: 624             512             319488 bytes
>	after:  728             312             227136 bytes

And add this to the changelog as well. Curious: why is nr obj higher with the patch?

Please don't use my Reviewed-by tag yet; I will review the new patch and provide the tag separately.

Thank you.

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


* Re: [PATCH] binder: Use kmem_cache for binder_thread
  2019-08-29 15:27     ` Christian Brauner
  2019-08-29 18:59       ` Peikan Tsai
@ 2019-08-30  6:38       ` Greg KH
  2019-08-30 12:12         ` Christian Brauner
  1 sibling, 1 reply; 11+ messages in thread
From: Greg KH @ 2019-08-30  6:38 UTC
  To: Christian Brauner
  Cc: Joel Fernandes, Peikan Tsai, arve, tkjos, maco, devel, linux-kernel

On Thu, Aug 29, 2019 at 05:27:22PM +0200, Christian Brauner wrote:
> On Thu, Aug 29, 2019 at 09:53:59AM -0400, Joel Fernandes wrote:
> > On Thu, Aug 29, 2019 at 08:42:29AM +0200, Greg KH wrote:
> > > On Thu, Aug 29, 2019 at 01:49:53PM +0800, Peikan Tsai wrote:
> > [snip] 
> > > > kzalloc() currently allocates 512 bytes for each binder_thread, but
> > > > struct binder_thread is a fixed 304 bytes. Creating a dedicated
> > > > kmem_cache for binder_thread saves 208 bytes per object.
> > > 
> > > Are you _sure_ it really will save that much memory?  You want to do
> > > allocations based on a nice alignment for lots of good reasons,
> > > especially for something that needs quick accesses.
> > 
> > Alignment can be done for slab allocations, kmem_cache_create() takes an
> > align argument. I am not sure what the default alignment of objects is
> > though (probably no default alignment). What is an optimal alignment in your
> > view?
> 
> Probably SLAB_HWCACHE_ALIGN would make most sense.

This isn't memory accessing hardware, so I don't think it would, right?

Anyway, some actual performance tests need to be run to see if any of
this makes any difference at all, please...

thanks,

greg k-h


> 
> > 
> > > Did you test your change on a system that relies on binder and find any
> > > speed improvement or decrease, and any actual memory savings?
> > > 
> > > If so, can you post your results?
> > 
> > That's certainly worth it and I thought of asking for the same, but spoke too
> > soon!
> 
> Yeah, it'd be interesting to see what difference this actually makes. 
> 
> Christian


* Re: [PATCH] binder: Use kmem_cache for binder_thread
  2019-08-29 18:59       ` Peikan Tsai
  2019-08-29 19:30         ` joel
@ 2019-08-30  6:39         ` Greg KH
  2019-09-02 14:12           ` Peikan Tsai
  1 sibling, 1 reply; 11+ messages in thread
From: Greg KH @ 2019-08-30  6:39 UTC
  To: Peikan Tsai
  Cc: Christian Brauner, devel, tkjos, linux-kernel, arve,
	Joel Fernandes, maco

On Fri, Aug 30, 2019 at 02:59:01AM +0800, Peikan Tsai wrote:
> On Thu, Aug 29, 2019 at 05:27:22PM +0200, Christian Brauner wrote:
> > On Thu, Aug 29, 2019 at 09:53:59AM -0400, Joel Fernandes wrote:
> > > On Thu, Aug 29, 2019 at 08:42:29AM +0200, Greg KH wrote:
> > > > On Thu, Aug 29, 2019 at 01:49:53PM +0800, Peikan Tsai wrote:
> > > [snip] 
> > > > > kzalloc() currently allocates 512 bytes for each binder_thread, but
> > > > > struct binder_thread is a fixed 304 bytes. Creating a dedicated
> > > > > kmem_cache for binder_thread saves 208 bytes per object.
> > > > 
> > > > Are you _sure_ it really will save that much memory?  You want to do
> > > > allocations based on a nice alignment for lots of good reasons,
> > > > especially for something that needs quick accesses.
> > > 
> > > Alignment can be done for slab allocations, kmem_cache_create() takes an
> > > align argument. I am not sure what the default alignment of objects is
> > > though (probably no default alignment). What is an optimal alignment in your
> > > view?
> > 
> > Probably SLAB_HWCACHE_ALIGN would make most sense.
> > 
> 
> Agreed. Thanks for your comments and suggestions.
> I'll use SLAB_HWCACHE_ALIGN in patch v2.
> 
> > > 
> > > > Did you test your change on a system that relies on binder and find any
> > > > speed improvement or decrease, and any actual memory savings?
> > > > 
> > > > If so, can you post your results?
> > > 
> > > That's certainly worth it and I thought of asking for the same, but spoke too
> > > soon!
> > 
> > Yeah, it'd be interesting to see what difference this actually makes. 
> > 
> > Christian
> 
> I tested this change on an Android device (arm) with AOSP kernel 4.19 and
> observed the memory usage of binder_thread, but I haven't run a binder
> benchmark yet.
> 
> On my platform the memory usage of binder_thread is reduced by about 90 KB,
> as shown below:
> 
>         nr obj          obj size        total
> 	before: 624             512             319488 bytes
> 	after:  728             312             227136 bytes

You have more objects???



* Re: [PATCH] binder: Use kmem_cache for binder_thread
  2019-08-30  6:38       ` Greg KH
@ 2019-08-30 12:12         ` Christian Brauner
  0 siblings, 0 replies; 11+ messages in thread
From: Christian Brauner @ 2019-08-30 12:12 UTC
  To: Greg KH
  Cc: Joel Fernandes, Peikan Tsai, arve, tkjos, maco, devel, linux-kernel

On Fri, Aug 30, 2019 at 08:38:51AM +0200, Greg KH wrote:
> On Thu, Aug 29, 2019 at 05:27:22PM +0200, Christian Brauner wrote:
> > On Thu, Aug 29, 2019 at 09:53:59AM -0400, Joel Fernandes wrote:
> > > On Thu, Aug 29, 2019 at 08:42:29AM +0200, Greg KH wrote:
> > > > On Thu, Aug 29, 2019 at 01:49:53PM +0800, Peikan Tsai wrote:
> > > [snip] 
> > > > > kzalloc() currently allocates 512 bytes for each binder_thread, but
> > > > > struct binder_thread is a fixed 304 bytes. Creating a dedicated
> > > > > kmem_cache for binder_thread saves 208 bytes per object.
> > > > 
> > > > Are you _sure_ it really will save that much memory?  You want to do
> > > > allocations based on a nice alignment for lots of good reasons,
> > > > especially for something that needs quick accesses.
> > > 
> > > Alignment can be done for slab allocations, kmem_cache_create() takes an
> > > align argument. I am not sure what the default alignment of objects is
> > > though (probably no default alignment). What is an optimal alignment in your
> > > view?
> > 
> > Probably SLAB_HWCACHE_ALIGN would make most sense.
> 
> This isn't memory accessing hardware, so I don't think it would, right?

I was more thinking of cacheline bouncing under contention. But maybe
that's not worth it in this case...


* Re: [PATCH] binder: Use kmem_cache for binder_thread
  2019-08-30  6:39         ` Greg KH
@ 2019-09-02 14:12           ` Peikan Tsai
  0 siblings, 0 replies; 11+ messages in thread
From: Peikan Tsai @ 2019-09-02 14:12 UTC
  To: Greg KH
  Cc: Christian Brauner, devel, tkjos, linux-kernel, arve,
	Joel Fernandes, maco

On Fri, Aug 30, 2019 at 08:39:43AM +0200, Greg KH wrote:
> On Fri, Aug 30, 2019 at 02:59:01AM +0800, Peikan Tsai wrote:
> > On Thu, Aug 29, 2019 at 05:27:22PM +0200, Christian Brauner wrote:
> > > On Thu, Aug 29, 2019 at 09:53:59AM -0400, Joel Fernandes wrote:
> > > > On Thu, Aug 29, 2019 at 08:42:29AM +0200, Greg KH wrote:
> > > > > On Thu, Aug 29, 2019 at 01:49:53PM +0800, Peikan Tsai wrote:
> > > > [snip] 
> > > > > > kzalloc() currently allocates 512 bytes for each binder_thread, but
> > > > > > struct binder_thread is a fixed 304 bytes. Creating a dedicated
> > > > > > kmem_cache for binder_thread saves 208 bytes per object.
> > > > > 
> > > > > Are you _sure_ it really will save that much memory?  You want to do
> > > > > allocations based on a nice alignment for lots of good reasons,
> > > > > especially for something that needs quick accesses.
> > > > 
> > > > Alignment can be done for slab allocations, kmem_cache_create() takes an
> > > > align argument. I am not sure what the default alignment of objects is
> > > > though (probably no default alignment). What is an optimal alignment in your
> > > > view?
> > > 
> > > Probably SLAB_HWCACHE_ALIGN would make most sense.
> > > 
> > 
> > Agreed. Thanks for your comments and suggestions.
> > I'll use SLAB_HWCACHE_ALIGN in patch v2.
> > 
> > > > 
> > > > > Did you test your change on a system that relies on binder and find any
> > > > > speed improvement or decrease, and any actual memory savings?
> > > > > 
> > > > > If so, can you post your results?
> > > > 
> > > > That's certainly worth it and I thought of asking for the same, but spoke too
> > > > soon!
> > > 
> > > Yeah, it'd be interesting to see what difference this actually makes. 
> > > 
> > > Christian
> > 
> > I tested this change on an Android device (arm) with AOSP kernel 4.19 and
> > observed the memory usage of binder_thread, but I haven't run a binder
> > benchmark yet.
> > 
> > On my platform the memory usage of binder_thread is reduced by about 90 KB,
> > as shown below:
> > 
> >         nr obj          obj size        total
> > 	before: 624             512             319488 bytes
> > 	after:  728             312             227136 bytes
> 
> You have more objects???
> 

Sorry, that's the total number of objects, which includes some inactive
objects... And because I tested on an Android platform there may be some
noise.

So I tried 'adb stop' and 'echo 3 > /proc/sys/vm/drop_caches' before
starting the test to reduce the noise; the results are as follows.

                    objs
kzalloc              220  (kmalloc-512 alloc by binder_get_thread)

             active_objs  total objs   objperslab  slabdata
kmem_cache           194         403           13        31

It still seems there are more objects when using a kmem_cache for
binder_thread... But as I understand it, those inactive objects can be
freed by a kmem_cache shrink?
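
(For reference, an untested sketch of the shrink I mean: the kernel
provides kmem_cache_shrink(), which releases empty slabs in a cache back
to the page allocator.)

	/* try to free unused slabs in the binder_thread cache */
	kmem_cache_shrink(binder_thread_cachep);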

Also, I tested throughput using the Android VTS binder performance test.

size (bytes)	kzalloc (bytes/ns)	kmem_cache (bytes/ns)
4		0.17			0.17
8		0.33			0.32
16		0.66			0.66
32		1.36			1.42
64		2.66			2.61
128		5.4			5.26
256		10.29			10.77
512		21.51			21.36
1k		41			40.26
2k		82.12			80.28
4k		149.24			146.95
8k		262.34			256
16k		417.96			422.2
32k		596.66			590.23
64k		600.84			601.25



