* [PATCH] mm/mmu_notifier.c: micro-optimization substitute kzalloc with kmalloc

From: mateusznosek0 @ 2020-09-06 11:43 UTC
To: linux-mm, linux-kernel; +Cc: Mateusz Nosek, akpm

From: Mateusz Nosek <mateusznosek0@gmail.com>

Most fields in the struct pointed to by 'subscriptions' are initialized
explicitly after the allocation. By changing kzalloc to kmalloc, the call
to memset is avoided. As the only new code consists of two simple memory
stores, performance is slightly increased.

Signed-off-by: Mateusz Nosek <mateusznosek0@gmail.com>
---
 mm/mmu_notifier.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 4fc918163dd3..190e198dc5be 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -625,7 +625,7 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
	 * know that mm->notifier_subscriptions can't change while we
	 * hold the write side of the mmap_lock.
	 */
-	subscriptions = kzalloc(
+	subscriptions = kmalloc(
		sizeof(struct mmu_notifier_subscriptions), GFP_KERNEL);
	if (!subscriptions)
		return -ENOMEM;
@@ -636,6 +636,8 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
		subscriptions->itree = RB_ROOT_CACHED;
		init_waitqueue_head(&subscriptions->wq);
		INIT_HLIST_HEAD(&subscriptions->deferred_list);
+		subscriptions->active_invalidate_ranges = 0;
+		subscriptions->has_itree = false;
	}

	ret = mm_take_all_locks(mm);
--
2.20.1
* Re: [PATCH] mm/mmu_notifier.c: micro-optimization substitute kzalloc with kmalloc

From: Mike Rapoport @ 2020-09-06 14:26 UTC
To: mateusznosek0; +Cc: linux-mm, linux-kernel, akpm

Hi,

On Sun, Sep 06, 2020 at 01:43:21PM +0200, mateusznosek0@gmail.com wrote:
> From: Mateusz Nosek <mateusznosek0@gmail.com>
>
> Most fields in the struct pointed to by 'subscriptions' are initialized
> explicitly after the allocation. By changing kzalloc to kmalloc, the call
> to memset is avoided. As the only new code consists of two simple memory
> stores, performance is slightly increased.

Is there a measurable performance increase?

__mmu_notifier_register() is not called frequently enough to justify
trading the robustness of kzalloc() for a slight (if visible at all)
performance gain.

> Signed-off-by: Mateusz Nosek <mateusznosek0@gmail.com>
> ---
>  mm/mmu_notifier.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
> index 4fc918163dd3..190e198dc5be 100644
> --- a/mm/mmu_notifier.c
> +++ b/mm/mmu_notifier.c
> @@ -625,7 +625,7 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
>	 * know that mm->notifier_subscriptions can't change while we
>	 * hold the write side of the mmap_lock.
>	 */
> -	subscriptions = kzalloc(
> +	subscriptions = kmalloc(
>		sizeof(struct mmu_notifier_subscriptions), GFP_KERNEL);
>	if (!subscriptions)
>		return -ENOMEM;
> @@ -636,6 +636,8 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
>		subscriptions->itree = RB_ROOT_CACHED;
>		init_waitqueue_head(&subscriptions->wq);
>		INIT_HLIST_HEAD(&subscriptions->deferred_list);
> +		subscriptions->active_invalidate_ranges = 0;
> +		subscriptions->has_itree = false;
>	}
>
>	ret = mm_take_all_locks(mm);
> --
> 2.20.1

--
Sincerely yours,
Mike.
* Re: [PATCH] mm/mmu_notifier.c: micro-optimization substitute kzalloc with kmalloc

From: Mateusz Nosek @ 2020-09-06 16:06 UTC
To: Mike Rapoport; +Cc: linux-mm, linux-kernel, akpm

Hi,

I performed simple benchmarks using a custom kernel module with the code
fragment in question copy-pasted into it, in both versions. For 1k, 10k
and 100k iterations, the average time for the kzalloc version was 5.1 and
for the kmalloc version 3.9, for each iteration count.
The time was measured using ktime_get() and the results given here are in
ktime_t units.
The machine I used has a 4-core Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz.

The performance increase happens, but as you wrote it is probably not
really noticeable.

I have found 3 other places in the kernel with similar kzalloc-related
patterns, none of which seems to be 'hot' code.
I leave it to the community and the maintainers to decide whether this
patch, and potential others I would send regarding this issue, are worth
applying.

Best regards,
Mateusz Nosek

On 9/6/2020 4:26 PM, Mike Rapoport wrote:
> Hi,
>
> On Sun, Sep 06, 2020 at 01:43:21PM +0200, mateusznosek0@gmail.com wrote:
>> From: Mateusz Nosek <mateusznosek0@gmail.com>
>>
>> Most fields in the struct pointed to by 'subscriptions' are initialized
>> explicitly after the allocation. By changing kzalloc to kmalloc, the call
>> to memset is avoided. As the only new code consists of two simple memory
>> stores, performance is slightly increased.
>
> Is there a measurable performance increase?
>
> __mmu_notifier_register() is not called frequently enough to justify
> trading the robustness of kzalloc() for a slight (if visible at all)
> performance gain.
>
>> Signed-off-by: Mateusz Nosek <mateusznosek0@gmail.com>
>> ---
>>  mm/mmu_notifier.c | 4 +++-
>>  1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
>> index 4fc918163dd3..190e198dc5be 100644
>> --- a/mm/mmu_notifier.c
>> +++ b/mm/mmu_notifier.c
>> @@ -625,7 +625,7 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
>>	 * know that mm->notifier_subscriptions can't change while we
>>	 * hold the write side of the mmap_lock.
>>	 */
>> -	subscriptions = kzalloc(
>> +	subscriptions = kmalloc(
>>		sizeof(struct mmu_notifier_subscriptions), GFP_KERNEL);
>>	if (!subscriptions)
>>		return -ENOMEM;
>> @@ -636,6 +636,8 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
>>		subscriptions->itree = RB_ROOT_CACHED;
>>		init_waitqueue_head(&subscriptions->wq);
>>		INIT_HLIST_HEAD(&subscriptions->deferred_list);
>> +		subscriptions->active_invalidate_ranges = 0;
>> +		subscriptions->has_itree = false;
>>	}
>>
>>	ret = mm_take_all_locks(mm);
>> --
>> 2.20.1
* Re: [PATCH] mm/mmu_notifier.c: micro-optimization substitute kzalloc with kmalloc

From: Mike Rapoport @ 2020-09-08 6:42 UTC
To: Mateusz Nosek; +Cc: linux-mm, linux-kernel, akpm

On Sun, Sep 06, 2020 at 06:06:39PM +0200, Mateusz Nosek wrote:
> Hi,
>
> I performed simple benchmarks using a custom kernel module with the code
> fragment in question copy-pasted into it, in both versions. For 1k, 10k
> and 100k iterations, the average time for the kzalloc version was 5.1 and
> for the kmalloc version 3.9, for each iteration count.
> The time was measured using ktime_get() and the results given here are in
> ktime_t units.
> The machine I used has a 4-core Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz.
>
> The performance increase happens, but as you wrote it is probably not
> really noticeable.

I don't think that saving a few cycles of memset() in a function that is
called only on the initialization path, and only in very particular cases,
is worth risking uninitialized fields when somebody adds a new field to
'struct mmu_notifier_subscriptions' and forgets to set it explicitly.

> I have found 3 other places in the kernel with similar kzalloc-related
> patterns, none of which seems to be 'hot' code.
> I leave it to the community and the maintainers to decide whether this
> patch, and potential others I would send regarding this issue, are worth
> applying.
>
> Best regards,
> Mateusz Nosek
>
> On 9/6/2020 4:26 PM, Mike Rapoport wrote:
> > Hi,
> >
> > On Sun, Sep 06, 2020 at 01:43:21PM +0200, mateusznosek0@gmail.com wrote:
> > > From: Mateusz Nosek <mateusznosek0@gmail.com>
> > >
> > > Most fields in the struct pointed to by 'subscriptions' are initialized
> > > explicitly after the allocation. By changing kzalloc to kmalloc, the
> > > call to memset is avoided. As the only new code consists of two simple
> > > memory stores, performance is slightly increased.
> >
> > Is there a measurable performance increase?
> >
> > __mmu_notifier_register() is not called frequently enough to justify
> > trading the robustness of kzalloc() for a slight (if visible at all)
> > performance gain.
> >
> > > Signed-off-by: Mateusz Nosek <mateusznosek0@gmail.com>
> > > ---
> > >  mm/mmu_notifier.c | 4 +++-
> > >  1 file changed, 3 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
> > > index 4fc918163dd3..190e198dc5be 100644
> > > --- a/mm/mmu_notifier.c
> > > +++ b/mm/mmu_notifier.c
> > > @@ -625,7 +625,7 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
> > >	 * know that mm->notifier_subscriptions can't change while we
> > >	 * hold the write side of the mmap_lock.
> > >	 */
> > > -	subscriptions = kzalloc(
> > > +	subscriptions = kmalloc(
> > >		sizeof(struct mmu_notifier_subscriptions), GFP_KERNEL);
> > >	if (!subscriptions)
> > >		return -ENOMEM;
> > > @@ -636,6 +636,8 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
> > >		subscriptions->itree = RB_ROOT_CACHED;
> > >		init_waitqueue_head(&subscriptions->wq);
> > >		INIT_HLIST_HEAD(&subscriptions->deferred_list);
> > > +		subscriptions->active_invalidate_ranges = 0;
> > > +		subscriptions->has_itree = false;
> > >	}
> > >
> > >	ret = mm_take_all_locks(mm);
> > > --
> > > 2.20.1

--
Sincerely yours,
Mike.
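[Editor's note: the robustness hazard raised above can be made concrete with a small userspace sketch. The struct and its field names are hypothetical placeholders, not the real mm/mmu_notifier.c definitions; calloc() plays the role of kzalloc().]

```c
#include <stdlib.h>

/* Hypothetical stand-in for struct mmu_notifier_subscriptions. */
struct subs {
	unsigned long invalidate_seq;
	unsigned long active_invalidate_ranges;
	int has_itree;
	int newly_added_field;  /* imagine a later patch adds this */
};

/* kzalloc-style: every field, including ones nobody remembered to
 * initialize, is guaranteed to be zero. */
struct subs *alloc_zeroed(void)
{
	return calloc(1, sizeof(struct subs));
}

/* kmalloc-style: only the fields the author remembered are set;
 * newly_added_field is left indeterminate, which is the risk. */
struct subs *alloc_explicit(void)
{
	struct subs *s = malloc(sizeof(struct subs));
	if (!s)
		return NULL;
	s->invalidate_seq = 0;
	s->active_invalidate_ranges = 0;
	s->has_itree = 0;
	/* note: s->newly_added_field intentionally NOT set */
	return s;
}
```

With the zeroing allocator a forgotten field reads as zero; with the plain allocator its value is whatever the heap happened to contain, and such bugs can stay latent until the allocation is reused.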
* Re: [PATCH] mm/mmu_notifier.c: micro-optimization substitute kzalloc with kmalloc

From: Jason Gunthorpe @ 2020-09-08 23:32 UTC
To: Mike Rapoport; +Cc: Mateusz Nosek, linux-mm, linux-kernel, akpm

On Tue, Sep 08, 2020 at 09:42:45AM +0300, Mike Rapoport wrote:
> On Sun, Sep 06, 2020 at 06:06:39PM +0200, Mateusz Nosek wrote:
> > Hi,
> >
> > I performed simple benchmarks using a custom kernel module with the code
> > fragment in question copy-pasted into it, in both versions. For 1k, 10k
> > and 100k iterations, the average time for the kzalloc version was 5.1
> > and for the kmalloc version 3.9, for each iteration count.
> > The time was measured using ktime_get() and the results given here are
> > in ktime_t units.
> > The machine I used has a 4-core Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz.
> >
> > The performance increase happens, but as you wrote it is probably not
> > really noticeable.
>
> I don't think that saving a few cycles of memset() in a function that is
> called only on the initialization path, and only in very particular cases,
> is worth risking uninitialized fields when somebody adds a new field to
> 'struct mmu_notifier_subscriptions' and forgets to set it explicitly.

Indeed, it is not a common path, and it is already very expensive if code
is running here (e.g. it does mm_take_all_locks()). So there is no reason
at all to optimize this and risk problems down the road.

Jason