* [bug report] kmemleak observed with blktests nvme-tcp tests
@ 2021-09-30 5:59 Yi Zhang
2021-09-30 6:55 ` Chaitanya Kulkarni
0 siblings, 1 reply; 14+ messages in thread
From: Yi Zhang @ 2021-09-30 5:59 UTC (permalink / raw)
To: linux-nvme
Hello
Below kmemleak was triggered with blktests nvme-tcp on latest
5.15.0-rc3, pls check it.
unreferenced object 0xffff8882bc8d6668 (size 8):
comm "kworker/u26:2", pid 82, jiffies 4295107562 (age 2911.554s)
hex dump (first 8 bytes):
6e 67 31 6e 31 00 7b 7c ng1n1.{|
backtrace:
[<0000000046e1c456>] __kmalloc_track_caller+0x129/0x260
[<00000000a8f7a3a1>] kvasprintf+0xa7/0x120
[<0000000076a54cc5>] kobject_set_name_vargs+0x41/0x110
[<00000000a569a16a>] dev_set_name+0x9b/0xd0
[<00000000f793cc3d>] nvme_mpath_set_live+0x322/0x430 [nvme_core]
[<000000001f948cbb>] nvme_mpath_add_disk+0x3ef/0x6a0 [nvme_core]
[<00000000d405af45>] nvme_alloc_ns+0xeb1/0x1ae0 [nvme_core]
[<000000002fd9b34d>] nvme_validate_or_alloc_ns+0x170/0x350 [nvme_core]
[<000000009762df74>] nvme_scan_work+0x2dc/0x4b0 [nvme_core]
[<000000007be5c512>] process_one_work+0x9a8/0x16b0
[<000000002ae51314>] worker_thread+0x87/0xbf0
[<0000000034c41079>] kthread+0x371/0x440
[<0000000020c3a70f>] ret_from_fork+0x22/0x30
unreferenced object 0xffff8882d1509800 (size 512):
comm "kworker/u26:2", pid 82, jiffies 4295107562 (age 2911.554s)
hex dump (first 32 bytes):
00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00 .....N..........
ff ff ff ff ff ff ff ff c0 b6 b7 97 ff ff ff ff ................
backtrace:
[<00000000d6c8d6f1>] kmem_cache_alloc_trace+0x10b/0x220
[<00000000e6493d28>] device_add+0xe08/0x1d10
[<00000000aa40e6ce>] cdev_device_add+0xf1/0x150
[<00000000142436f1>] nvme_cdev_add+0xf8/0x160 [nvme_core]
[<00000000d948ccab>] nvme_mpath_set_live+0x347/0x430 [nvme_core]
[<000000001f948cbb>] nvme_mpath_add_disk+0x3ef/0x6a0 [nvme_core]
[<00000000d405af45>] nvme_alloc_ns+0xeb1/0x1ae0 [nvme_core]
[<000000002fd9b34d>] nvme_validate_or_alloc_ns+0x170/0x350 [nvme_core]
[<000000009762df74>] nvme_scan_work+0x2dc/0x4b0 [nvme_core]
[<000000007be5c512>] process_one_work+0x9a8/0x16b0
[<000000002ae51314>] worker_thread+0x87/0xbf0
[<0000000034c41079>] kthread+0x371/0x440
[<0000000020c3a70f>] ret_from_fork+0x22/0x30
--
Best Regards,
Yi Zhang
* Re: [bug report] kmemleak observed with blktests nvme-tcp tests
2021-09-30 5:59 [bug report] kmemleak observed with blktests nvme-tcp tests Yi Zhang
@ 2021-09-30 6:55 ` Chaitanya Kulkarni
2021-09-30 7:13 ` Yi Zhang
0 siblings, 1 reply; 14+ messages in thread
From: Chaitanya Kulkarni @ 2021-09-30 6:55 UTC (permalink / raw)
To: linux-nvme
On 9/29/21 10:59 PM, Yi Zhang wrote:
> Hello
>
> Below kmemleak was triggered with blktests nvme-tcp on latest
> 5.15.0-rc3, pls check it.
>
Please share the test number and how frequently it reproduces...
> unreferenced object 0xffff8882bc8d6668 (size 8):
> comm "kworker/u26:2", pid 82, jiffies 4295107562 (age 2911.554s)
> hex dump (first 8 bytes):
> 6e 67 31 6e 31 00 7b 7c ng1n1.{|
> backtrace:
> [<0000000046e1c456>] __kmalloc_track_caller+0x129/0x260
> [<00000000a8f7a3a1>] kvasprintf+0xa7/0x120
> [<0000000076a54cc5>] kobject_set_name_vargs+0x41/0x110
> [<00000000a569a16a>] dev_set_name+0x9b/0xd0
> [<00000000f793cc3d>] nvme_mpath_set_live+0x322/0x430 [nvme_core]
> [<000000001f948cbb>] nvme_mpath_add_disk+0x3ef/0x6a0 [nvme_core]
> [<00000000d405af45>] nvme_alloc_ns+0xeb1/0x1ae0 [nvme_core]
> [<000000002fd9b34d>] nvme_validate_or_alloc_ns+0x170/0x350 [nvme_core]
> [<000000009762df74>] nvme_scan_work+0x2dc/0x4b0 [nvme_core]
> [<000000007be5c512>] process_one_work+0x9a8/0x16b0
> [<000000002ae51314>] worker_thread+0x87/0xbf0
> [<0000000034c41079>] kthread+0x371/0x440
> [<0000000020c3a70f>] ret_from_fork+0x22/0x30
> unreferenced object 0xffff8882d1509800 (size 512):
> comm "kworker/u26:2", pid 82, jiffies 4295107562 (age 2911.554s)
> hex dump (first 32 bytes):
> 00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00 .....N..........
> ff ff ff ff ff ff ff ff c0 b6 b7 97 ff ff ff ff ................
> backtrace:
> [<00000000d6c8d6f1>] kmem_cache_alloc_trace+0x10b/0x220
> [<00000000e6493d28>] device_add+0xe08/0x1d10
> [<00000000aa40e6ce>] cdev_device_add+0xf1/0x150
> [<00000000142436f1>] nvme_cdev_add+0xf8/0x160 [nvme_core]
> [<00000000d948ccab>] nvme_mpath_set_live+0x347/0x430 [nvme_core]
> [<000000001f948cbb>] nvme_mpath_add_disk+0x3ef/0x6a0 [nvme_core]
> [<00000000d405af45>] nvme_alloc_ns+0xeb1/0x1ae0 [nvme_core]
> [<000000002fd9b34d>] nvme_validate_or_alloc_ns+0x170/0x350 [nvme_core]
> [<000000009762df74>] nvme_scan_work+0x2dc/0x4b0 [nvme_core]
> [<000000007be5c512>] process_one_work+0x9a8/0x16b0
> [<000000002ae51314>] worker_thread+0x87/0xbf0
> [<0000000034c41079>] kthread+0x371/0x440
> [<0000000020c3a70f>] ret_from_fork+0x22/0x30
* Re: [bug report] kmemleak observed with blktests nvme-tcp tests
2021-09-30 6:55 ` Chaitanya Kulkarni
@ 2021-09-30 7:13 ` Yi Zhang
2021-09-30 7:55 ` Sagi Grimberg
2021-09-30 9:31 ` Chaitanya Kulkarni
0 siblings, 2 replies; 14+ messages in thread
From: Yi Zhang @ 2021-09-30 7:13 UTC (permalink / raw)
To: Chaitanya Kulkarni; +Cc: linux-nvme
On Thu, Sep 30, 2021 at 3:04 PM Chaitanya Kulkarni
<chaitanyak@nvidia.com> wrote:
>
> On 9/29/21 10:59 PM, Yi Zhang wrote:
> > Hello
> >
> > Below kmemleak was triggered with blktests nvme-tcp on latest
> > 5.15.0-rc3, pls check it.
> >
>
> Please share the test number and how frequently it reproduces...
>
Hi,
I'm running the full blktests nvme-tcp [1] and it's 100% reproducible.
[1]
# nvme_trtype=tcp ./check nvme/
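[Side note for readers: a minimal sketch of pulling such a report by hand,
assuming the kernel was built with CONFIG_DEBUG_KMEMLEAK and debugfs is
mounted:

  # force an immediate scan instead of waiting for the periodic one
  echo scan > /sys/kernel/debug/kmemleak
  # dump any unreferenced objects found so far
  cat /sys/kernel/debug/kmemleak
  # reset the list before the next test run
  echo clear > /sys/kernel/debug/kmemleak
]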
> > unreferenced object 0xffff8882bc8d6668 (size 8):
> > comm "kworker/u26:2", pid 82, jiffies 4295107562 (age 2911.554s)
> > hex dump (first 8 bytes):
> > 6e 67 31 6e 31 00 7b 7c ng1n1.{|
> > backtrace:
> > [<0000000046e1c456>] __kmalloc_track_caller+0x129/0x260
> > [<00000000a8f7a3a1>] kvasprintf+0xa7/0x120
> > [<0000000076a54cc5>] kobject_set_name_vargs+0x41/0x110
> > [<00000000a569a16a>] dev_set_name+0x9b/0xd0
> > [<00000000f793cc3d>] nvme_mpath_set_live+0x322/0x430 [nvme_core]
> > [<000000001f948cbb>] nvme_mpath_add_disk+0x3ef/0x6a0 [nvme_core]
> > [<00000000d405af45>] nvme_alloc_ns+0xeb1/0x1ae0 [nvme_core]
> > [<000000002fd9b34d>] nvme_validate_or_alloc_ns+0x170/0x350 [nvme_core]
> > [<000000009762df74>] nvme_scan_work+0x2dc/0x4b0 [nvme_core]
> > [<000000007be5c512>] process_one_work+0x9a8/0x16b0
> > [<000000002ae51314>] worker_thread+0x87/0xbf0
> > [<0000000034c41079>] kthread+0x371/0x440
> > [<0000000020c3a70f>] ret_from_fork+0x22/0x30
> > unreferenced object 0xffff8882d1509800 (size 512):
> > comm "kworker/u26:2", pid 82, jiffies 4295107562 (age 2911.554s)
> > hex dump (first 32 bytes):
> > 00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00 .....N..........
> > ff ff ff ff ff ff ff ff c0 b6 b7 97 ff ff ff ff ................
> > backtrace:
> > [<00000000d6c8d6f1>] kmem_cache_alloc_trace+0x10b/0x220
> > [<00000000e6493d28>] device_add+0xe08/0x1d10
> > [<00000000aa40e6ce>] cdev_device_add+0xf1/0x150
> > [<00000000142436f1>] nvme_cdev_add+0xf8/0x160 [nvme_core]
> > [<00000000d948ccab>] nvme_mpath_set_live+0x347/0x430 [nvme_core]
> > [<000000001f948cbb>] nvme_mpath_add_disk+0x3ef/0x6a0 [nvme_core]
> > [<00000000d405af45>] nvme_alloc_ns+0xeb1/0x1ae0 [nvme_core]
> > [<000000002fd9b34d>] nvme_validate_or_alloc_ns+0x170/0x350 [nvme_core]
> > [<000000009762df74>] nvme_scan_work+0x2dc/0x4b0 [nvme_core]
> > [<000000007be5c512>] process_one_work+0x9a8/0x16b0
> > [<000000002ae51314>] worker_thread+0x87/0xbf0
> > [<0000000034c41079>] kthread+0x371/0x440
> > [<0000000020c3a70f>] ret_from_fork+0x22/0x30
--
Best Regards,
Yi Zhang
* Re: [bug report] kmemleak observed with blktests nvme-tcp tests
2021-09-30 7:13 ` Yi Zhang
@ 2021-09-30 7:55 ` Sagi Grimberg
2021-09-30 10:36 ` Yi Zhang
2021-09-30 9:31 ` Chaitanya Kulkarni
1 sibling, 1 reply; 14+ messages in thread
From: Sagi Grimberg @ 2021-09-30 7:55 UTC (permalink / raw)
To: Yi Zhang, Chaitanya Kulkarni; +Cc: linux-nvme
>>> Hello
>>>
>>> Below kmemleak was triggered with blktests nvme-tcp on latest
>>> 5.15.0-rc3, pls check it.
>>>
>>
>> Please share the test number and how frequently it reproduces...
>>
>
> Hi,
> I'm running the full blktests nvme-tcp [1] and it's 100% reproducible.
>
> [1]
> # nvme_trtype=tcp ./check nvme/
Yi, this does not happen with nvme_trtype=rdma? It looks like
we don't get to call cdev_device_del and del_gendisk, which means
we may have a referencing problem...
I'm wondering if this is a regression we can bisect to?
>
>>> unreferenced object 0xffff8882bc8d6668 (size 8):
>>> comm "kworker/u26:2", pid 82, jiffies 4295107562 (age 2911.554s)
>>> hex dump (first 8 bytes):
>>> 6e 67 31 6e 31 00 7b 7c ng1n1.{|
>>> backtrace:
>>> [<0000000046e1c456>] __kmalloc_track_caller+0x129/0x260
>>> [<00000000a8f7a3a1>] kvasprintf+0xa7/0x120
>>> [<0000000076a54cc5>] kobject_set_name_vargs+0x41/0x110
>>> [<00000000a569a16a>] dev_set_name+0x9b/0xd0
>>> [<00000000f793cc3d>] nvme_mpath_set_live+0x322/0x430 [nvme_core]
>>> [<000000001f948cbb>] nvme_mpath_add_disk+0x3ef/0x6a0 [nvme_core]
>>> [<00000000d405af45>] nvme_alloc_ns+0xeb1/0x1ae0 [nvme_core]
>>> [<000000002fd9b34d>] nvme_validate_or_alloc_ns+0x170/0x350 [nvme_core]
>>> [<000000009762df74>] nvme_scan_work+0x2dc/0x4b0 [nvme_core]
>>> [<000000007be5c512>] process_one_work+0x9a8/0x16b0
>>> [<000000002ae51314>] worker_thread+0x87/0xbf0
>>> [<0000000034c41079>] kthread+0x371/0x440
>>> [<0000000020c3a70f>] ret_from_fork+0x22/0x30
>>> unreferenced object 0xffff8882d1509800 (size 512):
>>> comm "kworker/u26:2", pid 82, jiffies 4295107562 (age 2911.554s)
>>> hex dump (first 32 bytes):
>>> 00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00 .....N..........
>>> ff ff ff ff ff ff ff ff c0 b6 b7 97 ff ff ff ff ................
>>> backtrace:
>>> [<00000000d6c8d6f1>] kmem_cache_alloc_trace+0x10b/0x220
>>> [<00000000e6493d28>] device_add+0xe08/0x1d10
>>> [<00000000aa40e6ce>] cdev_device_add+0xf1/0x150
>>> [<00000000142436f1>] nvme_cdev_add+0xf8/0x160 [nvme_core]
>>> [<00000000d948ccab>] nvme_mpath_set_live+0x347/0x430 [nvme_core]
>>> [<000000001f948cbb>] nvme_mpath_add_disk+0x3ef/0x6a0 [nvme_core]
>>> [<00000000d405af45>] nvme_alloc_ns+0xeb1/0x1ae0 [nvme_core]
>>> [<000000002fd9b34d>] nvme_validate_or_alloc_ns+0x170/0x350 [nvme_core]
>>> [<000000009762df74>] nvme_scan_work+0x2dc/0x4b0 [nvme_core]
>>> [<000000007be5c512>] process_one_work+0x9a8/0x16b0
>>> [<000000002ae51314>] worker_thread+0x87/0xbf0
>>> [<0000000034c41079>] kthread+0x371/0x440
>>> [<0000000020c3a70f>] ret_from_fork+0x22/0x30
* Re: [bug report] kmemleak observed with blktests nvme-tcp tests
2021-09-30 7:13 ` Yi Zhang
2021-09-30 7:55 ` Sagi Grimberg
@ 2021-09-30 9:31 ` Chaitanya Kulkarni
1 sibling, 0 replies; 14+ messages in thread
From: Chaitanya Kulkarni @ 2021-09-30 9:31 UTC (permalink / raw)
To: Yi Zhang; +Cc: linux-nvme
On 9/30/2021 12:13 AM, Yi Zhang wrote:
>
> On Thu, Sep 30, 2021 at 3:04 PM Chaitanya Kulkarni
> <chaitanyak@nvidia.com> wrote:
>>
>> On 9/29/21 10:59 PM, Yi Zhang wrote:
>>> Hello
>>>
>>> Below kmemleak was triggered with blktests nvme-tcp on latest
>>> 5.15.0-rc3, pls check it.
>>>
>>
>> Please share the test number and how frequently it reproduces...
>>
>
> Hi,
> I'm running the full blktests nvme-tcp [1] and it's 100% reproducible.
>
> [1]
> # nvme_trtype=tcp ./check nvme/
>
It will run all the tests under nvme/; without knowing which test case
is causing the problem it will be hard to help you...
* Re: [bug report] kmemleak observed with blktests nvme-tcp tests
2021-09-30 7:55 ` Sagi Grimberg
@ 2021-09-30 10:36 ` Yi Zhang
2021-09-30 13:01 ` Yi Zhang
0 siblings, 1 reply; 14+ messages in thread
From: Yi Zhang @ 2021-09-30 10:36 UTC (permalink / raw)
To: Sagi Grimberg; +Cc: Chaitanya Kulkarni, linux-nvme
On Thu, Sep 30, 2021 at 3:55 PM Sagi Grimberg <sagi@grimberg.me> wrote:
>
>
> >>> Hello
> >>>
> >>> Below kmemleak was triggered with blktests nvme-tcp on latest
> >>> 5.15.0-rc3, pls check it.
> >>>
> >>
> >> Please share the test number and how frequently it reproduces...
> >>
> >
> > Hi,
> > I'm running the full blktests nvme-tcp [1] and it's 100% reproducible.
> >
> > [1]
> > # nvme_trtype=tcp ./check nvme/
>
> Yi, this does not happen with nvme_trtype=rdma? It looks like
nvme_trtype=rdma use_siw=1 can also reproduce it.
> we don't get to call cdev_device_del and del_gendisk, which means
> we may have a referencing problem...
>
> I'm wondering if this is a regression we can bisect to?
So just running [1] with nvme_core.multipath=Y will trigger it.
[1]
nvme_trtype=tcp ./check nvme/004
Will try to bisect it.
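[Side note: a sketch of how such a bisect typically runs; the endpoint
tags below are illustrative assumptions, not the exact ones used here:

  git bisect start
  git bisect bad v5.15-rc3         # kernel showing the leak
  git bisect good v5.12            # a release believed to be clean
  # at each step: build, boot, rerun the reproducer, mark the result
  nvme_trtype=tcp ./check nvme/004
  git bisect good                  # or "git bisect bad", per the kmemleak output
  # repeat until git names the first bad commit, then:
  git bisect reset
]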
* Re: [bug report] kmemleak observed with blktests nvme-tcp tests
2021-09-30 10:36 ` Yi Zhang
@ 2021-09-30 13:01 ` Yi Zhang
2021-09-30 14:07 ` Sagi Grimberg
0 siblings, 1 reply; 14+ messages in thread
From: Yi Zhang @ 2021-09-30 13:01 UTC (permalink / raw)
To: Sagi Grimberg, minwoo.im.dev; +Cc: Chaitanya Kulkarni, linux-nvme
Bisect shows it was introduced by the commit below:
commit 2637baed78010eeaae274feb5b99ce90933fadfb
Author: Minwoo Im <minwoo.im.dev@gmail.com>
Date: Wed Apr 21 16:45:04 2021 +0900
nvme: introduce generic per-namespace chardev
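[For context: this commit introduced the per-namespace generic character
devices, /dev/ngXnY, which matches the leaked "ng1n1" kobject name in the
report above.]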
On Thu, Sep 30, 2021 at 6:36 PM Yi Zhang <yi.zhang@redhat.com> wrote:
>
> On Thu, Sep 30, 2021 at 3:55 PM Sagi Grimberg <sagi@grimberg.me> wrote:
> >
> >
> > >>> Hello
> > >>>
> > >>> Below kmemleak was triggered with blktests nvme-tcp on latest
> > >>> 5.15.0-rc3, pls check it.
> > >>>
> > >>
> > >> Please share the test number and how frequently it reproduces...
> > >>
> > >
> > > Hi,
> > > I'm running the full blktests nvme-tcp [1] and it's 100% reproducible.
> > >
> > > [1]
> > > # nvme_trtype=tcp ./check nvme/
> >
> > Yi, this does not happen with nvme_trtype=rdma? It looks like
>
> nvme_trtype=rdma use_siw=1 can also reproduce it.
>
> > we don't get to call cdev_device_del and del_gendisk, which means
> > we may have a referencing problem...
> >
> > I'm wondering if this is a regression we can bisect to?
>
> So just running [1] with nvme_core.multipath=Y will trigger it.
> [1]
> nvme_trtype=tcp ./check nvme/004
>
> Will try to bisect it.
--
Best Regards,
Yi Zhang
* Re: [bug report] kmemleak observed with blktests nvme-tcp tests
2021-09-30 13:01 ` Yi Zhang
@ 2021-09-30 14:07 ` Sagi Grimberg
2021-10-01 0:27 ` Yi Zhang
0 siblings, 1 reply; 14+ messages in thread
From: Sagi Grimberg @ 2021-09-30 14:07 UTC (permalink / raw)
To: Yi Zhang, minwoo.im.dev; +Cc: Chaitanya Kulkarni, linux-nvme
> Bisect shows it was introduced by the commit below:
>
> commit 2637baed78010eeaae274feb5b99ce90933fadfb
> Author: Minwoo Im <minwoo.im.dev@gmail.com>
> Date: Wed Apr 21 16:45:04 2021 +0900
>
> nvme: introduce generic per-namespace chardev
>
Makes sense as both leaks relate to the nshead cdev...
I think another put on the cdev_device is missing?
--
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 1d103ae4afdf..328e314af199 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3668,6 +3668,7 @@ void nvme_cdev_del(struct cdev *cdev, struct device *cdev_device)
 {
         cdev_device_del(cdev, cdev_device);
         ida_simple_remove(&nvme_ns_chr_minor_ida, MINOR(cdev_device->devt));
+        put_device(cdev_device);
 }

 int nvme_cdev_add(struct cdev *cdev, struct device *cdev_device,
--
* Re: [bug report] kmemleak observed with blktests nvme-tcp tests
2021-09-30 14:07 ` Sagi Grimberg
@ 2021-10-01 0:27 ` Yi Zhang
2021-10-02 23:02 ` Sagi Grimberg
0 siblings, 1 reply; 14+ messages in thread
From: Yi Zhang @ 2021-10-01 0:27 UTC (permalink / raw)
To: Sagi Grimberg; +Cc: minwoo.im.dev, Chaitanya Kulkarni, linux-nvme
On Thu, Sep 30, 2021 at 10:07 PM Sagi Grimberg <sagi@grimberg.me> wrote:
>
>
> > Bisect shows it was introduced by the commit below:
> >
> > commit 2637baed78010eeaae274feb5b99ce90933fadfb
> > Author: Minwoo Im <minwoo.im.dev@gmail.com>
> > Date: Wed Apr 21 16:45:04 2021 +0900
> >
> > nvme: introduce generic per-namespace chardev
> >
>
> Makes sense as both leaks relate to the nshead cdev...
>
> I think another put on the cdev_device is missing?
> --
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 1d103ae4afdf..328e314af199 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -3668,6 +3668,7 @@ void nvme_cdev_del(struct cdev *cdev, struct device *cdev_device)
>  {
>          cdev_device_del(cdev, cdev_device);
>          ida_simple_remove(&nvme_ns_chr_minor_ida, MINOR(cdev_device->devt));
> +        put_device(cdev_device);
>  }
>
>  int nvme_cdev_add(struct cdev *cdev, struct device *cdev_device,
> --
>
Hi Sagi,
This introduced a new issue; here is the log:
[ 250.764659] run blktests nvme/004 at 2021-09-30 20:23:39
[ 250.938913] loop0: detected capacity change from 0 to 2097152
[ 250.963292] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
[ 250.976418] nvmet_tcp: enabling port 0 (127.0.0.1:4420)
[ 251.003499] nvmet: creating controller 1 for subsystem
blktests-subsystem-1 for NQN
nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-4b10-8044-b9c04f463333.
[ 251.020277] nvme nvme0: creating 32 I/O queues.
[ 251.050637] nvme nvme0: mapped 32/0/0 default/read/poll queues.
[ 251.091232] nvme nvme0: new ctrl: NQN "blktests-subsystem-1", addr
127.0.0.1:4420
[ 252.179608] nvme nvme0: Removing ctrl: NQN "blktests-subsystem-1"
[ 252.228383] ------------[ cut here ]------------
[ 252.234400] Device 'ng0n1' does not have a release() function, it
is broken and must be fixed. See Documentation/core-api/kobject.rst.
[ 252.246498] WARNING: CPU: 10 PID: 2086 at drivers/base/core.c:2198
device_release+0x189/0x210
[ 252.255029] Modules linked in: nvme_tcp nvme_fabrics nvme_core
nvmet_tcp nvmet loop rfkill sunrpc vfat fat dm_multipath iTCO_wdt
iTCO_vendor_support ipmi_ssif intel_rapl_msr intel_rapl_common
isst_if_common skx_edac x86_pkg_temp_thermal intel_powerclamp coretemp
kvm_intel mgag200 i2c_algo_bit kvm drm_kms_helper dell_smbios
irqbypass crct10dif_pclmul crc32_pclmul syscopyarea sysfillrect
sysimgblt dcdbas fb_sys_fops ghash_clmulni_intel cec rapl intel_cstate
drm intel_uncore mei_me dell_wmi_descriptor wmi_bmof pcspkr i2c_i801
mei acpi_ipmi i2c_smbus lpc_ich ipmi_si ipmi_devintf ipmi_msghandler
dax_pmem_compat nd_pmem device_dax nd_btt dax_pmem_core
acpi_power_meter xfs libcrc32c sd_mod t10_pi sg ahci libahci libata
tg3 megaraid_sas crc32c_intel wmi nfit libnvdimm dm_mirror
dm_region_hash dm_log dm_mod [last unloaded: nvmet]
[ 252.327704] CPU: 10 PID: 2086 Comm: nvme Tainted: G S I
5.15.0-rc3.v1.fix+ #4
[ 252.335974] Hardware name: Dell Inc. PowerEdge R640/06NR82, BIOS
2.11.2 004/21/2021
[ 252.343635] RIP: 0010:device_release+0x189/0x210
[ 252.348262] Code: 48 8d 7b 50 48 89 fa 48 c1 ea 03 80 3c 02 00 0f
85 88 00 00 00 48 8b 73 50 48 85 f6 74 13 48 c7 c7 60 cb 18 af e8 dc
fb c5 00 <0f> 0b e9 0b ff ff ff 48 b8 00 00 00 00 00 fc ff df 48 89 da
48 c1
[ 252.367015] RSP: 0018:ffffc90003d5fb00 EFLAGS: 00010282
[ 252.372249] RAX: 0000000000000000 RBX: ffff8882a5474a48 RCX: ffffffffad731d52
[ 252.379393] RDX: 0000000000000004 RSI: 0000000000000008 RDI: ffff888e259e3b2c
[ 252.386533] RBP: ffff8882e390ec00 R08: ffffed11c4b3d9b9 R09: ffffed11c4b3d9b9
[ 252.393675] R10: ffff888e259ecdc7 R11: ffffed11c4b3d9b8 R12: ffff8882e328b500
[ 252.400812] R13: ffff88852e9ee500 R14: 0000000000000000 R15: ffffc90003d5fbf8
[ 252.407946] FS: 00007f6f3cad2780(0000) GS:ffff888e25800000(0000)
knlGS:0000000000000000
[ 252.416040] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 252.421795] CR2: 000055c593c2e6b0 CR3: 00000002a1aec006 CR4: 00000000007706e0
[ 252.428937] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 252.436078] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 252.443221] PKRU: 55555554
[ 252.445941] Call Trace:
[ 252.448403] kobject_release+0x109/0x3a0
[ 252.452338] nvme_mpath_shutdown_disk+0x92/0xe0 [nvme_core]
[ 252.457929] nvme_ns_remove+0x4a3/0x7f0 [nvme_core]
[ 252.462824] ? up_write+0x14d/0x460
[ 252.466324] nvme_remove_namespaces+0x242/0x3a0 [nvme_core]
[ 252.471914] ? nvme_execute_passthru_rq+0x5a0/0x5a0 [nvme_core]
[ 252.477852] ? del_timer_sync+0xab/0xf0
[ 252.481699] nvme_do_delete_ctrl+0xaa/0x108 [nvme_core]
[ 252.486941] nvme_sysfs_delete.cold.100+0x8/0xd [nvme_core]
[ 252.492532] kernfs_fop_write_iter+0x2d0/0x490
[ 252.496984] ? trace_hardirqs_on+0x1c/0x150
[ 252.501180] new_sync_write+0x3b2/0x620
[ 252.505026] ? rcu_read_lock_held_common+0xe/0xa0
[ 252.509742] ? new_sync_read+0x610/0x610
[ 252.513677] ? rcu_tasks_trace_pregp_step+0xe1/0x170
[ 252.518651] ? rcu_read_lock_held_common+0xe/0xa0
[ 252.523368] ? rcu_read_lock_sched_held+0x5f/0xd0
[ 252.528082] ? rcu_read_unlock+0x40/0x40
[ 252.532016] ? rcu_read_lock_held+0xb0/0xb0
[ 252.536212] vfs_write+0x4b5/0x950
[ 252.539626] ksys_write+0xf1/0x1c0
[ 252.543039] ? __ia32_sys_read+0xb0/0xb0
[ 252.546975] do_syscall_64+0x37/0x80
[ 252.550563] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 252.555621] RIP: 0033:0x7f6f3c1bb648
[ 252.559209] Code: 89 02 48 c7 c0 ff ff ff ff eb b3 0f 1f 80 00 00
00 00 f3 0f 1e fa 48 8d 05 55 6f 2d 00 8b 00 85 c0 75 17 b8 01 00 00
00 0f 05 <48> 3d 00 f0 ff ff 77 58 c3 0f 1f 80 00 00 00 00 41 54 49 89
d4 55
[ 252.577965] RSP: 002b:00007fff4826bb88 EFLAGS: 00000246 ORIG_RAX:
0000000000000001
[ 252.585537] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f6f3c1bb648
[ 252.592679] RDX: 0000000000000001 RSI: 000055c593c70da5 RDI: 0000000000000004
[ 252.599821] RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000
[ 252.606962] R10: 0000000000000000 R11: 0000000000000246 R12: 000055c5945d7540
[ 252.614102] R13: 00007fff4826e0fc R14: 0000000000000008 R15: 0000000000000003
[ 252.621246] irq event stamp: 0
[ 252.624310] hardirqs last enabled at (0): [<0000000000000000>] 0x0
[ 252.630585] hardirqs last disabled at (0): [<ffffffffac9d68f3>]
copy_process+0x2023/0x6b20
[ 252.638854] softirqs last enabled at (0): [<ffffffffac9d6932>]
copy_process+0x2062/0x6b20
[ 252.647121] softirqs last disabled at (0): [<0000000000000000>] 0x0
[ 252.653396] ---[ end trace 96526c0d562adac3 ]---
--
Best Regards,
Yi Zhang
* Re: [bug report] kmemleak observed with blktests nvme-tcp tests
2021-10-01 0:27 ` Yi Zhang
@ 2021-10-02 23:02 ` Sagi Grimberg
2021-10-12 18:35 ` Adam Manzanares
0 siblings, 1 reply; 14+ messages in thread
From: Sagi Grimberg @ 2021-10-02 23:02 UTC (permalink / raw)
To: Yi Zhang; +Cc: minwoo.im.dev, Chaitanya Kulkarni, linux-nvme
>>> Bisect shows it was introduced by the commit below:
>>>
>>> commit 2637baed78010eeaae274feb5b99ce90933fadfb
>>> Author: Minwoo Im <minwoo.im.dev@gmail.com>
>>> Date: Wed Apr 21 16:45:04 2021 +0900
>>>
>>> nvme: introduce generic per-namespace chardev
>>>
>>
>> Makes sense as both leaks relate to the nshead cdev...
>>
>> I think another put on the cdev_device is missing?
>> --
>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>> index 1d103ae4afdf..328e314af199 100644
>> --- a/drivers/nvme/host/core.c
>> +++ b/drivers/nvme/host/core.c
>> @@ -3668,6 +3668,7 @@ void nvme_cdev_del(struct cdev *cdev, struct device *cdev_device)
>>  {
>>          cdev_device_del(cdev, cdev_device);
>>          ida_simple_remove(&nvme_ns_chr_minor_ida, MINOR(cdev_device->devt));
>> +        put_device(cdev_device);
>>  }
>>
>>  int nvme_cdev_add(struct cdev *cdev, struct device *cdev_device,
>> --
>>
>
> Hi Sagi,
>
> This introduced a new issue; here is the log:
Hmm, looks like a use-after-free. I thought there was a missing put on
the cdev_device paired with the device_initialize() call on it...
Minwoo?
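[For context, a rough sketch of the driver-core lifetime contract under
discussion; this is illustrative C, not the actual nvme code:

  struct foo {
          struct device dev;            /* embedded, refcounted via its kobject */
  };

  static void foo_release(struct device *dev)
  {
          /* runs only once the last reference is dropped */
          kfree(container_of(dev, struct foo, dev));
  }

  /* setup */
  device_initialize(&f->dev);           /* refcount starts at 1 */
  f->dev.release = foo_release;         /* must be set before the final put */
  ret = device_add(&f->dev);            /* register with the core */

  /* teardown */
  device_del(&f->dev);                  /* unregister; references may remain */
  put_device(&f->dev);                  /* final put invokes foo_release() */
]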
>
> [ 250.764659] run blktests nvme/004 at 2021-09-30 20:23:39
> [ 250.938913] loop0: detected capacity change from 0 to 2097152
> [ 250.963292] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
> [ 250.976418] nvmet_tcp: enabling port 0 (127.0.0.1:4420)
> [ 251.003499] nvmet: creating controller 1 for subsystem
> blktests-subsystem-1 for NQN
> nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-4b10-8044-b9c04f463333.
> [ 251.020277] nvme nvme0: creating 32 I/O queues.
> [ 251.050637] nvme nvme0: mapped 32/0/0 default/read/poll queues.
> [ 251.091232] nvme nvme0: new ctrl: NQN "blktests-subsystem-1", addr
> 127.0.0.1:4420
> [ 252.179608] nvme nvme0: Removing ctrl: NQN "blktests-subsystem-1"
> [ 252.228383] ------------[ cut here ]------------
> [ 252.234400] Device 'ng0n1' does not have a release() function, it
> is broken and must be fixed. See Documentation/core-api/kobject.rst.
> [ 252.246498] WARNING: CPU: 10 PID: 2086 at drivers/base/core.c:2198
> device_release+0x189/0x210
> [ 252.255029] Modules linked in: nvme_tcp nvme_fabrics nvme_core
> nvmet_tcp nvmet loop rfkill sunrpc vfat fat dm_multipath iTCO_wdt
> iTCO_vendor_support ipmi_ssif intel_rapl_msr intel_rapl_common
> isst_if_common skx_edac x86_pkg_temp_thermal intel_powerclamp coretemp
> kvm_intel mgag200 i2c_algo_bit kvm drm_kms_helper dell_smbios
> irqbypass crct10dif_pclmul crc32_pclmul syscopyarea sysfillrect
> sysimgblt dcdbas fb_sys_fops ghash_clmulni_intel cec rapl intel_cstate
> drm intel_uncore mei_me dell_wmi_descriptor wmi_bmof pcspkr i2c_i801
> mei acpi_ipmi i2c_smbus lpc_ich ipmi_si ipmi_devintf ipmi_msghandler
> dax_pmem_compat nd_pmem device_dax nd_btt dax_pmem_core
> acpi_power_meter xfs libcrc32c sd_mod t10_pi sg ahci libahci libata
> tg3 megaraid_sas crc32c_intel wmi nfit libnvdimm dm_mirror
> dm_region_hash dm_log dm_mod [last unloaded: nvmet]
> [ 252.327704] CPU: 10 PID: 2086 Comm: nvme Tainted: G S I
> 5.15.0-rc3.v1.fix+ #4
> [ 252.335974] Hardware name: Dell Inc. PowerEdge R640/06NR82, BIOS
> 2.11.2 004/21/2021
> [ 252.343635] RIP: 0010:device_release+0x189/0x210
> [ 252.348262] Code: 48 8d 7b 50 48 89 fa 48 c1 ea 03 80 3c 02 00 0f
> 85 88 00 00 00 48 8b 73 50 48 85 f6 74 13 48 c7 c7 60 cb 18 af e8 dc
> fb c5 00 <0f> 0b e9 0b ff ff ff 48 b8 00 00 00 00 00 fc ff df 48 89 da
> 48 c1
> [ 252.367015] RSP: 0018:ffffc90003d5fb00 EFLAGS: 00010282
> [ 252.372249] RAX: 0000000000000000 RBX: ffff8882a5474a48 RCX: ffffffffad731d52
> [ 252.379393] RDX: 0000000000000004 RSI: 0000000000000008 RDI: ffff888e259e3b2c
> [ 252.386533] RBP: ffff8882e390ec00 R08: ffffed11c4b3d9b9 R09: ffffed11c4b3d9b9
> [ 252.393675] R10: ffff888e259ecdc7 R11: ffffed11c4b3d9b8 R12: ffff8882e328b500
> [ 252.400812] R13: ffff88852e9ee500 R14: 0000000000000000 R15: ffffc90003d5fbf8
> [ 252.407946] FS: 00007f6f3cad2780(0000) GS:ffff888e25800000(0000)
> knlGS:0000000000000000
> [ 252.416040] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 252.421795] CR2: 000055c593c2e6b0 CR3: 00000002a1aec006 CR4: 00000000007706e0
> [ 252.428937] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [ 252.436078] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> [ 252.443221] PKRU: 55555554
> [ 252.445941] Call Trace:
> [ 252.448403] kobject_release+0x109/0x3a0
> [ 252.452338] nvme_mpath_shutdown_disk+0x92/0xe0 [nvme_core]
> [ 252.457929] nvme_ns_remove+0x4a3/0x7f0 [nvme_core]
> [ 252.462824] ? up_write+0x14d/0x460
> [ 252.466324] nvme_remove_namespaces+0x242/0x3a0 [nvme_core]
> [ 252.471914] ? nvme_execute_passthru_rq+0x5a0/0x5a0 [nvme_core]
> [ 252.477852] ? del_timer_sync+0xab/0xf0
> [ 252.481699] nvme_do_delete_ctrl+0xaa/0x108 [nvme_core]
> [ 252.486941] nvme_sysfs_delete.cold.100+0x8/0xd [nvme_core]
> [ 252.492532] kernfs_fop_write_iter+0x2d0/0x490
> [ 252.496984] ? trace_hardirqs_on+0x1c/0x150
> [ 252.501180] new_sync_write+0x3b2/0x620
> [ 252.505026] ? rcu_read_lock_held_common+0xe/0xa0
> [ 252.509742] ? new_sync_read+0x610/0x610
> [ 252.513677] ? rcu_tasks_trace_pregp_step+0xe1/0x170
> [ 252.518651] ? rcu_read_lock_held_common+0xe/0xa0
> [ 252.523368] ? rcu_read_lock_sched_held+0x5f/0xd0
> [ 252.528082] ? rcu_read_unlock+0x40/0x40
> [ 252.532016] ? rcu_read_lock_held+0xb0/0xb0
> [ 252.536212] vfs_write+0x4b5/0x950
> [ 252.539626] ksys_write+0xf1/0x1c0
> [ 252.543039] ? __ia32_sys_read+0xb0/0xb0
> [ 252.546975] do_syscall_64+0x37/0x80
> [ 252.550563] entry_SYSCALL_64_after_hwframe+0x44/0xae
> [ 252.555621] RIP: 0033:0x7f6f3c1bb648
> [ 252.559209] Code: 89 02 48 c7 c0 ff ff ff ff eb b3 0f 1f 80 00 00
> 00 00 f3 0f 1e fa 48 8d 05 55 6f 2d 00 8b 00 85 c0 75 17 b8 01 00 00
> 00 0f 05 <48> 3d 00 f0 ff ff 77 58 c3 0f 1f 80 00 00 00 00 41 54 49 89
> d4 55
> [ 252.577965] RSP: 002b:00007fff4826bb88 EFLAGS: 00000246 ORIG_RAX:
> 0000000000000001
> [ 252.585537] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f6f3c1bb648
> [ 252.592679] RDX: 0000000000000001 RSI: 000055c593c70da5 RDI: 0000000000000004
> [ 252.599821] RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000
> [ 252.606962] R10: 0000000000000000 R11: 0000000000000246 R12: 000055c5945d7540
> [ 252.614102] R13: 00007fff4826e0fc R14: 0000000000000008 R15: 0000000000000003
> [ 252.621246] irq event stamp: 0
> [ 252.624310] hardirqs last enabled at (0): [<0000000000000000>] 0x0
> [ 252.630585] hardirqs last disabled at (0): [<ffffffffac9d68f3>]
> copy_process+0x2023/0x6b20
> [ 252.638854] softirqs last enabled at (0): [<ffffffffac9d6932>]
> copy_process+0x2062/0x6b20
> [ 252.647121] softirqs last disabled at (0): [<0000000000000000>] 0x0
> [ 252.653396] ---[ end trace 96526c0d562adac3 ]---
* Re: [bug report] kmemleak observed with blktests nvme-tcp tests
2021-10-02 23:02 ` Sagi Grimberg
@ 2021-10-12 18:35 ` Adam Manzanares
2021-10-13 9:34 ` Yi Zhang
0 siblings, 1 reply; 14+ messages in thread
From: Adam Manzanares @ 2021-10-12 18:35 UTC (permalink / raw)
To: Sagi Grimberg; +Cc: Yi Zhang, minwoo.im.dev, Chaitanya Kulkarni, linux-nvme
On Sun, Oct 03, 2021 at 02:02:20AM +0300, Sagi Grimberg wrote:
>
> > > > Bisect shows it was introduced by the commit below:
> > > >
> > > > commit 2637baed78010eeaae274feb5b99ce90933fadfb
> > > > Author: Minwoo Im <minwoo.im.dev@gmail.com>
> > > > Date: Wed Apr 21 16:45:04 2021 +0900
> > > >
> > > > nvme: introduce generic per-namespace chardev
> > > >
> > >
> > > Makes sense as both leaks relate to the nshead cdev...
> > >
> > > I think another put on the cdev_device is missing?
> > > --
> > > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> > > index 1d103ae4afdf..328e314af199 100644
> > > --- a/drivers/nvme/host/core.c
> > > +++ b/drivers/nvme/host/core.c
> > > @@ -3668,6 +3668,7 @@ void nvme_cdev_del(struct cdev *cdev, struct device *cdev_device)
> > >  {
> > >          cdev_device_del(cdev, cdev_device);
> > >          ida_simple_remove(&nvme_ns_chr_minor_ida, MINOR(cdev_device->devt));
> > > +        put_device(cdev_device);
> > >  }
> > >
> > >  int nvme_cdev_add(struct cdev *cdev, struct device *cdev_device,
> > > --
> > >
> >
> > Hi Sagi
> >
> > This introduced one new issue, here is the log:
>
> Hmm, looks like a use-after-free. I thought there was a missing put on
> the cdev_device paired with the device_initialize() call on it...
>
> Minwoo?
Hello all,
Does the following patch fix the issue for you?
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index e486845d2c7e..587385bc82b6 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3548,10 +3548,15 @@ static int __nvme_check_ids(struct nvme_subsystem *subsys,
         return 0;
 }

+static void nvme_cdev_rel(struct device *dev)
+{
+        ida_simple_remove(&nvme_ns_chr_minor_ida, MINOR(dev->devt));
+}
+
 void nvme_cdev_del(struct cdev *cdev, struct device *cdev_device)
 {
         cdev_device_del(cdev, cdev_device);
-        ida_simple_remove(&nvme_ns_chr_minor_ida, MINOR(cdev_device->devt));
+        put_device(cdev_device);
 }

 int nvme_cdev_add(struct cdev *cdev, struct device *cdev_device,
@@ -3564,14 +3569,14 @@ int nvme_cdev_add(struct cdev *cdev, struct device *cdev_device,
                 return minor;
         cdev_device->devt = MKDEV(MAJOR(nvme_ns_chr_devt), minor);
         cdev_device->class = nvme_ns_chr_class;
+        cdev_device->release = nvme_cdev_rel;
         device_initialize(cdev_device);
         cdev_init(cdev, fops);
         cdev->owner = owner;
         ret = cdev_device_add(cdev, cdev_device);
-        if (ret) {
+        if (ret)
                 put_device(cdev_device);
-                ida_simple_remove(&nvme_ns_chr_minor_ida, minor);
-        }
+
         return ret;
 }

@@ -3603,11 +3608,9 @@ static int nvme_add_ns_cdev(struct nvme_ns *ns)
                                ns->ctrl->instance, ns->head->instance);
         if (ret)
                 return ret;
-        ret = nvme_cdev_add(&ns->cdev, &ns->cdev_device, &nvme_ns_chr_fops,
-                            ns->ctrl->ops->module);
-        if (ret)
-                kfree_const(ns->cdev_device.kobj.name);
-        return ret;
+
+        return nvme_cdev_add(&ns->cdev, &ns->cdev_device, &nvme_ns_chr_fops,
+                             ns->ctrl->ops->module);
 }

 static struct nvme_ns_head *nvme_alloc_ns_head(struct nvme_ctrl *ctrl,
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index e8ccdd398f78..fba06618c6c2 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -431,8 +431,6 @@ static int nvme_add_ns_head_cdev(struct nvme_ns_head *head)
                 return ret;
         ret = nvme_cdev_add(&head->cdev, &head->cdev_device,
                             &nvme_ns_head_chr_fops, THIS_MODULE);
-        if (ret)
-                kfree_const(head->cdev_device.kobj.name);
         return ret;
 }
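[The design point of the sketch above: moving ida_simple_remove() into a
release() callback ties the minor-number cleanup to the last reference on
the cdev_device, so nvme_cdev_del() can do the final put_device() without
the "does not have a release() function" warning that the earlier one-line
change triggered, while still fixing the leak.]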
>
> >
> > [ 250.764659] run blktests nvme/004 at 2021-09-30 20:23:39
> > [ 250.938913] loop0: detected capacity change from 0 to 2097152
> > [ 250.963292] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
> > [ 250.976418] nvmet_tcp: enabling port 0 (127.0.0.1:4420)
> > [ 251.003499] nvmet: creating controller 1 for subsystem
> > blktests-subsystem-1 for NQN
> > nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-4b10-8044-b9c04f463333.
> > [ 251.020277] nvme nvme0: creating 32 I/O queues.
> > [ 251.050637] nvme nvme0: mapped 32/0/0 default/read/poll queues.
> > [ 251.091232] nvme nvme0: new ctrl: NQN "blktests-subsystem-1", addr
> > 127.0.0.1:4420
> > [ 252.179608] nvme nvme0: Removing ctrl: NQN "blktests-subsystem-1"
> > [ 252.228383] ------------[ cut here ]------------
> > [ 252.234400] Device 'ng0n1' does not have a release() function, it
> > is broken and must be fixed. See Documentation/core-api/kobject.rst.
> > [ 252.246498] WARNING: CPU: 10 PID: 2086 at drivers/base/core.c:2198
> > device_release+0x189/0x210
> > [ 252.255029] Modules linked in: nvme_tcp nvme_fabrics nvme_core
> > nvmet_tcp nvmet loop rfkill sunrpc vfat fat dm_multipath iTCO_wdt
> > iTCO_vendor_support ipmi_ssif intel_rapl_msr intel_rapl_common
> > isst_if_common skx_edac x86_pkg_temp_thermal intel_powerclamp coretemp
> > kvm_intel mgag200 i2c_algo_bit kvm drm_kms_helper dell_smbios
> > irqbypass crct10dif_pclmul crc32_pclmul syscopyarea sysfillrect
> > sysimgblt dcdbas fb_sys_fops ghash_clmulni_intel cec rapl intel_cstate
> > drm intel_uncore mei_me dell_wmi_descriptor wmi_bmof pcspkr i2c_i801
> > mei acpi_ipmi i2c_smbus lpc_ich ipmi_si ipmi_devintf ipmi_msghandler
> > dax_pmem_compat nd_pmem device_dax nd_btt dax_pmem_core
> > acpi_power_meter xfs libcrc32c sd_mod t10_pi sg ahci libahci libata
> > tg3 megaraid_sas crc32c_intel wmi nfit libnvdimm dm_mirror
> > dm_region_hash dm_log dm_mod [last unloaded: nvmet]
> > [ 252.327704] CPU: 10 PID: 2086 Comm: nvme Tainted: G S I
> > 5.15.0-rc3.v1.fix+ #4
> > [ 252.335974] Hardware name: Dell Inc. PowerEdge R640/06NR82, BIOS
> > 2.11.2 004/21/2021
> > [ 252.343635] RIP: 0010:device_release+0x189/0x210
> > [ 252.348262] Code: 48 8d 7b 50 48 89 fa 48 c1 ea 03 80 3c 02 00 0f
> > 85 88 00 00 00 48 8b 73 50 48 85 f6 74 13 48 c7 c7 60 cb 18 af e8 dc
> > fb c5 00 <0f> 0b e9 0b ff ff ff 48 b8 00 00 00 00 00 fc ff df 48 89 da
> > 48 c1
> > [ 252.367015] RSP: 0018:ffffc90003d5fb00 EFLAGS: 00010282
> > [ 252.372249] RAX: 0000000000000000 RBX: ffff8882a5474a48 RCX: ffffffffad731d52
> > [ 252.379393] RDX: 0000000000000004 RSI: 0000000000000008 RDI: ffff888e259e3b2c
> > [ 252.386533] RBP: ffff8882e390ec00 R08: ffffed11c4b3d9b9 R09: ffffed11c4b3d9b9
> > [ 252.393675] R10: ffff888e259ecdc7 R11: ffffed11c4b3d9b8 R12: ffff8882e328b500
> > [ 252.400812] R13: ffff88852e9ee500 R14: 0000000000000000 R15: ffffc90003d5fbf8
> > [ 252.407946] FS: 00007f6f3cad2780(0000) GS:ffff888e25800000(0000)
> > knlGS:0000000000000000
> > [ 252.416040] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > [ 252.421795] CR2: 000055c593c2e6b0 CR3: 00000002a1aec006 CR4: 00000000007706e0
> > [ 252.428937] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > [ 252.436078] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> > [ 252.443221] PKRU: 55555554
> > [ 252.445941] Call Trace:
> > [ 252.448403] kobject_release+0x109/0x3a0
> > [ 252.452338] nvme_mpath_shutdown_disk+0x92/0xe0 [nvme_core]
> > [ 252.457929] nvme_ns_remove+0x4a3/0x7f0 [nvme_core]
> > [ 252.462824] ? up_write+0x14d/0x460
> > [ 252.466324] nvme_remove_namespaces+0x242/0x3a0 [nvme_core]
> > [ 252.471914] ? nvme_execute_passthru_rq+0x5a0/0x5a0 [nvme_core]
> > [ 252.477852] ? del_timer_sync+0xab/0xf0
> > [ 252.481699] nvme_do_delete_ctrl+0xaa/0x108 [nvme_core]
> > [ 252.486941] nvme_sysfs_delete.cold.100+0x8/0xd [nvme_core]
> > [ 252.492532] kernfs_fop_write_iter+0x2d0/0x490
> > [ 252.496984] ? trace_hardirqs_on+0x1c/0x150
> > [ 252.501180] new_sync_write+0x3b2/0x620
> > [ 252.505026] ? rcu_read_lock_held_common+0xe/0xa0
> > [ 252.509742] ? new_sync_read+0x610/0x610
> > [ 252.513677] ? rcu_tasks_trace_pregp_step+0xe1/0x170
> > [ 252.518651] ? rcu_read_lock_held_common+0xe/0xa0
> > [ 252.523368] ? rcu_read_lock_sched_held+0x5f/0xd0
> > [ 252.528082] ? rcu_read_unlock+0x40/0x40
> > [ 252.532016] ? rcu_read_lock_held+0xb0/0xb0
> > [ 252.536212] vfs_write+0x4b5/0x950
> > [ 252.539626] ksys_write+0xf1/0x1c0
> > [ 252.543039] ? __ia32_sys_read+0xb0/0xb0
> > [ 252.546975] do_syscall_64+0x37/0x80
> > [ 252.550563] entry_SYSCALL_64_after_hwframe+0x44/0xae
> > [ 252.555621] RIP: 0033:0x7f6f3c1bb648
> > [ 252.559209] Code: 89 02 48 c7 c0 ff ff ff ff eb b3 0f 1f 80 00 00
> > 00 00 f3 0f 1e fa 48 8d 05 55 6f 2d 00 8b 00 85 c0 75 17 b8 01 00 00
> > 00 0f 05 <48> 3d 00 f0 ff ff 77 58 c3 0f 1f 80 00 00 00 00 41 54 49 89
> > d4 55
> > [ 252.577965] RSP: 002b:00007fff4826bb88 EFLAGS: 00000246 ORIG_RAX:
> > 0000000000000001
> > [ 252.585537] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f6f3c1bb648
> > [ 252.592679] RDX: 0000000000000001 RSI: 000055c593c70da5 RDI: 0000000000000004
> > [ 252.599821] RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000
> > [ 252.606962] R10: 0000000000000000 R11: 0000000000000246 R12: 000055c5945d7540
> > [ 252.614102] R13: 00007fff4826e0fc R14: 0000000000000008 R15: 0000000000000003
> > [ 252.621246] irq event stamp: 0
> > [ 252.624310] hardirqs last enabled at (0): [<0000000000000000>] 0x0
> > [ 252.630585] hardirqs last disabled at (0): [<ffffffffac9d68f3>]
> > copy_process+0x2023/0x6b20
> > [ 252.638854] softirqs last enabled at (0): [<ffffffffac9d6932>]
> > copy_process+0x2062/0x6b20
> > [ 252.647121] softirqs last disabled at (0): [<0000000000000000>] 0x0
> > [ 252.653396] ---[ end trace 96526c0d562adac3 ]---
* Re: [bug report] kmemleak observed with blktests nvme-tcp tests
2021-10-12 18:35 ` Adam Manzanares
@ 2021-10-13 9:34 ` Yi Zhang
2021-10-13 10:47 ` Sagi Grimberg
0 siblings, 1 reply; 14+ messages in thread
From: Yi Zhang @ 2021-10-13 9:34 UTC (permalink / raw)
To: Adam Manzanares
Cc: Sagi Grimberg, minwoo.im.dev, Chaitanya Kulkarni, linux-nvme
On Wed, Oct 13, 2021 at 2:35 AM Adam Manzanares
<a.manzanares@samsung.com> wrote:
>
> On Sun, Oct 03, 2021 at 02:02:20AM +0300, Sagi Grimberg wrote:
> >
> > > > > Bisect shows it was introduced by the commit below:
> > > > >
> > > > > commit 2637baed78010eeaae274feb5b99ce90933fadfb
> > > > > Author: Minwoo Im <minwoo.im.dev@gmail.com>
> > > > > Date: Wed Apr 21 16:45:04 2021 +0900
> > > > >
> > > > > nvme: introduce generic per-namespace chardev
> > > > >
> > > >
> > > > Makes sense as both leaks relate to the nshead cdev...
> > > >
> > > > I think another put on the cdev_device is missing?
> > > > --
> > > > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> > > > index 1d103ae4afdf..328e314af199 100644
> > > > --- a/drivers/nvme/host/core.c
> > > > +++ b/drivers/nvme/host/core.c
> > > > @@ -3668,6 +3668,7 @@ void nvme_cdev_del(struct cdev *cdev, struct device *cdev_device)
> > > >  {
> > > >          cdev_device_del(cdev, cdev_device);
> > > >          ida_simple_remove(&nvme_ns_chr_minor_ida, MINOR(cdev_device->devt));
> > > > +        put_device(cdev_device);
> > > >  }
> > > >
> > > >  int nvme_cdev_add(struct cdev *cdev, struct device *cdev_device,
> > > > --
> > > >
> > >
> > > Hi Sagi,
> > >
> > > This introduced a new issue; here is the log:
> >
> > Hmm, looks like a use-after-free. I thought there was a missing put on
> > the cdev_device paired with the device_initialize() call on it...
> >
> > Minwoo?
>
> Hello all,
>
> Does the following patch fix the issue for you?
>
Yes, the kmemleak was fixed by this patch.
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index e486845d2c7e..587385bc82b6 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -3548,10 +3548,15 @@ static int __nvme_check_ids(struct nvme_subsystem *subsys,
>          return 0;
>  }
>
> +static void nvme_cdev_rel(struct device *dev)
> +{
> +        ida_simple_remove(&nvme_ns_chr_minor_ida, MINOR(dev->devt));
> +}
> +
>  void nvme_cdev_del(struct cdev *cdev, struct device *cdev_device)
>  {
>          cdev_device_del(cdev, cdev_device);
> -        ida_simple_remove(&nvme_ns_chr_minor_ida, MINOR(cdev_device->devt));
> +        put_device(cdev_device);
>  }
>
>  int nvme_cdev_add(struct cdev *cdev, struct device *cdev_device,
> @@ -3564,14 +3569,14 @@ int nvme_cdev_add(struct cdev *cdev, struct device *cdev_device,
>                  return minor;
>          cdev_device->devt = MKDEV(MAJOR(nvme_ns_chr_devt), minor);
>          cdev_device->class = nvme_ns_chr_class;
> +        cdev_device->release = nvme_cdev_rel;
>          device_initialize(cdev_device);
>          cdev_init(cdev, fops);
>          cdev->owner = owner;
>          ret = cdev_device_add(cdev, cdev_device);
> -        if (ret) {
> +        if (ret)
>                  put_device(cdev_device);
> -                ida_simple_remove(&nvme_ns_chr_minor_ida, minor);
> -        }
> +
>          return ret;
>  }
>
> @@ -3603,11 +3608,9 @@ static int nvme_add_ns_cdev(struct nvme_ns *ns)
>                                 ns->ctrl->instance, ns->head->instance);
>          if (ret)
>                  return ret;
> -        ret = nvme_cdev_add(&ns->cdev, &ns->cdev_device, &nvme_ns_chr_fops,
> -                            ns->ctrl->ops->module);
> -        if (ret)
> -                kfree_const(ns->cdev_device.kobj.name);
> -        return ret;
> +
> +        return nvme_cdev_add(&ns->cdev, &ns->cdev_device, &nvme_ns_chr_fops,
> +                             ns->ctrl->ops->module);
>  }
>
>  static struct nvme_ns_head *nvme_alloc_ns_head(struct nvme_ctrl *ctrl,
> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
> index e8ccdd398f78..fba06618c6c2 100644
> --- a/drivers/nvme/host/multipath.c
> +++ b/drivers/nvme/host/multipath.c
> @@ -431,8 +431,6 @@ static int nvme_add_ns_head_cdev(struct nvme_ns_head *head)
>                  return ret;
>          ret = nvme_cdev_add(&head->cdev, &head->cdev_device,
>                              &nvme_ns_head_chr_fops, THIS_MODULE);
> -        if (ret)
> -                kfree_const(head->cdev_device.kobj.name);
>          return ret;
>  }
>
>
>
> >
> > >
> > > [ 250.764659] run blktests nvme/004 at 2021-09-30 20:23:39
> > > [ 250.938913] loop0: detected capacity change from 0 to 2097152
> > > [ 250.963292] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
> > > [ 250.976418] nvmet_tcp: enabling port 0 (127.0.0.1:4420)
> > > [ 251.003499] nvmet: creating controller 1 for subsystem
> > > blktests-subsystem-1 for NQN
> > > nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-4b10-8044-b9c04f463333.
> > > [ 251.020277] nvme nvme0: creating 32 I/O queues.
> > > [ 251.050637] nvme nvme0: mapped 32/0/0 default/read/poll queues.
> > > [ 251.091232] nvme nvme0: new ctrl: NQN "blktests-subsystem-1", addr
> > > 127.0.0.1:4420
> > > [ 252.179608] nvme nvme0: Removing ctrl: NQN "blktests-subsystem-1"
> > > [ 252.228383] ------------[ cut here ]------------
> > > [ 252.234400] Device 'ng0n1' does not have a release() function, it
> > > is broken and must be fixed. See Documentation/core-api/kobject.rst.
> > > [ 252.246498] WARNING: CPU: 10 PID: 2086 at drivers/base/core.c:2198
> > > device_release+0x189/0x210
> > > [ 252.255029] Modules linked in: nvme_tcp nvme_fabrics nvme_core
> > > nvmet_tcp nvmet loop rfkill sunrpc vfat fat dm_multipath iTCO_wdt
> > > iTCO_vendor_support ipmi_ssif intel_rapl_msr intel_rapl_common
> > > isst_if_common skx_edac x86_pkg_temp_thermal intel_powerclamp coretemp
> > > kvm_intel mgag200 i2c_algo_bit kvm drm_kms_helper dell_smbios
> > > irqbypass crct10dif_pclmul crc32_pclmul syscopyarea sysfillrect
> > > sysimgblt dcdbas fb_sys_fops ghash_clmulni_intel cec rapl intel_cstate
> > > drm intel_uncore mei_me dell_wmi_descriptor wmi_bmof pcspkr i2c_i801
> > > mei acpi_ipmi i2c_smbus lpc_ich ipmi_si ipmi_devintf ipmi_msghandler
> > > dax_pmem_compat nd_pmem device_dax nd_btt dax_pmem_core
> > > acpi_power_meter xfs libcrc32c sd_mod t10_pi sg ahci libahci libata
> > > tg3 megaraid_sas crc32c_intel wmi nfit libnvdimm dm_mirror
> > > dm_region_hash dm_log dm_mod [last unloaded: nvmet]
> > > [ 252.327704] CPU: 10 PID: 2086 Comm: nvme Tainted: G S I
> > > 5.15.0-rc3.v1.fix+ #4
> > > [ 252.335974] Hardware name: Dell Inc. PowerEdge R640/06NR82, BIOS
> > > 2.11.2 004/21/2021
> > > [ 252.343635] RIP: 0010:device_release+0x189/0x210
> > > [ 252.348262] Code: 48 8d 7b 50 48 89 fa 48 c1 ea 03 80 3c 02 00 0f
> > > 85 88 00 00 00 48 8b 73 50 48 85 f6 74 13 48 c7 c7 60 cb 18 af e8 dc
> > > fb c5 00 <0f> 0b e9 0b ff ff ff 48 b8 00 00 00 00 00 fc ff df 48 89 da
> > > 48 c1
> > > [ 252.367015] RSP: 0018:ffffc90003d5fb00 EFLAGS: 00010282
> > > [ 252.372249] RAX: 0000000000000000 RBX: ffff8882a5474a48 RCX: ffffffffad731d52
> > > [ 252.379393] RDX: 0000000000000004 RSI: 0000000000000008 RDI: ffff888e259e3b2c
> > > [ 252.386533] RBP: ffff8882e390ec00 R08: ffffed11c4b3d9b9 R09: ffffed11c4b3d9b9
> > > [ 252.393675] R10: ffff888e259ecdc7 R11: ffffed11c4b3d9b8 R12: ffff8882e328b500
> > > [ 252.400812] R13: ffff88852e9ee500 R14: 0000000000000000 R15: ffffc90003d5fbf8
> > > [ 252.407946] FS: 00007f6f3cad2780(0000) GS:ffff888e25800000(0000)
> > > knlGS:0000000000000000
> > > [ 252.416040] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > > [ 252.421795] CR2: 000055c593c2e6b0 CR3: 00000002a1aec006 CR4: 00000000007706e0
> > > [ 252.428937] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > > [ 252.436078] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> > > [ 252.443221] PKRU: 55555554
> > > [ 252.445941] Call Trace:
> > > [ 252.448403] kobject_release+0x109/0x3a0
> > > [ 252.452338] nvme_mpath_shutdown_disk+0x92/0xe0 [nvme_core]
> > > [ 252.457929] nvme_ns_remove+0x4a3/0x7f0 [nvme_core]
> > > [ 252.462824] ? up_write+0x14d/0x460
> > > [ 252.466324] nvme_remove_namespaces+0x242/0x3a0 [nvme_core]
> > > [ 252.471914] ? nvme_execute_passthru_rq+0x5a0/0x5a0 [nvme_core]
> > > [ 252.477852] ? del_timer_sync+0xab/0xf0
> > > [ 252.481699] nvme_do_delete_ctrl+0xaa/0x108 [nvme_core]
> > > [ 252.486941] nvme_sysfs_delete.cold.100+0x8/0xd [nvme_core]
> > > [ 252.492532] kernfs_fop_write_iter+0x2d0/0x490
> > > [ 252.496984] ? trace_hardirqs_on+0x1c/0x150
> > > [ 252.501180] new_sync_write+0x3b2/0x620
> > > [ 252.505026] ? rcu_read_lock_held_common+0xe/0xa0
> > > [ 252.509742] ? new_sync_read+0x610/0x610
> > > [ 252.513677] ? rcu_tasks_trace_pregp_step+0xe1/0x170
> > > [ 252.518651] ? rcu_read_lock_held_common+0xe/0xa0
> > > [ 252.523368] ? rcu_read_lock_sched_held+0x5f/0xd0
> > > [ 252.528082] ? rcu_read_unlock+0x40/0x40
> > > [ 252.532016] ? rcu_read_lock_held+0xb0/0xb0
> > > [ 252.536212] vfs_write+0x4b5/0x950
> > > [ 252.539626] ksys_write+0xf1/0x1c0
> > > [ 252.543039] ? __ia32_sys_read+0xb0/0xb0
> > > [ 252.546975] do_syscall_64+0x37/0x80
> > > [ 252.550563] entry_SYSCALL_64_after_hwframe+0x44/0xae
> > > [ 252.555621] RIP: 0033:0x7f6f3c1bb648
> > > [ 252.559209] Code: 89 02 48 c7 c0 ff ff ff ff eb b3 0f 1f 80 00 00
> > > 00 00 f3 0f 1e fa 48 8d 05 55 6f 2d 00 8b 00 85 c0 75 17 b8 01 00 00
> > > 00 0f 05 <48> 3d 00 f0 ff ff 77 58 c3 0f 1f 80 00 00 00 00 41 54 49 89
> > > d4 55
> > > [ 252.577965] RSP: 002b:00007fff4826bb88 EFLAGS: 00000246 ORIG_RAX:
> > > 0000000000000001
> > > [ 252.585537] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f6f3c1bb648
> > > [ 252.592679] RDX: 0000000000000001 RSI: 000055c593c70da5 RDI: 0000000000000004
> > > [ 252.599821] RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000
> > > [ 252.606962] R10: 0000000000000000 R11: 0000000000000246 R12: 000055c5945d7540
> > > [ 252.614102] R13: 00007fff4826e0fc R14: 0000000000000008 R15: 0000000000000003
> > > [ 252.621246] irq event stamp: 0
> > > [ 252.624310] hardirqs last enabled at (0): [<0000000000000000>] 0x0
> > > [ 252.630585] hardirqs last disabled at (0): [<ffffffffac9d68f3>]
> > > copy_process+0x2023/0x6b20
> > > [ 252.638854] softirqs last enabled at (0): [<ffffffffac9d6932>]
> > > copy_process+0x2062/0x6b20
> > > [ 252.647121] softirqs last disabled at (0): [<0000000000000000>] 0x0
> > > [ 252.653396] ---[ end trace 96526c0d562adac3 ]---
--
Best Regards,
Yi Zhang
* Re: [bug report] kmemleak observed with blktests nvme-tcp tests
2021-10-13 9:34 ` Yi Zhang
@ 2021-10-13 10:47 ` Sagi Grimberg
2021-10-13 14:40 ` Adam Manzanares
0 siblings, 1 reply; 14+ messages in thread
From: Sagi Grimberg @ 2021-10-13 10:47 UTC (permalink / raw)
To: Yi Zhang, Adam Manzanares; +Cc: minwoo.im.dev, Chaitanya Kulkarni, linux-nvme
>> Hello all,
>>
>> Does the following patch fix the issue for you?
>>
>
> Yes, the kmemleak was fixed by this patch.
Great. Adam, care to send a proper patch?
* Re: [bug report] kmemleak observed with blktests nvme-tcp tests
2021-10-13 10:47 ` Sagi Grimberg
@ 2021-10-13 14:40 ` Adam Manzanares
0 siblings, 0 replies; 14+ messages in thread
From: Adam Manzanares @ 2021-10-13 14:40 UTC (permalink / raw)
To: Sagi Grimberg; +Cc: Yi Zhang, minwoo.im.dev, Chaitanya Kulkarni, linux-nvme
On Wed, Oct 13, 2021 at 01:47:42PM +0300, Sagi Grimberg wrote:
>
> > > Hello all,
> > >
> > > Does the following patch fix the issue for you?
> > >
> >
> > Yes, the kmemleak was fixed by this patch.
>
> Great. Adam, care to send a proper patch?
I'll send a patch out.
^ permalink raw reply [flat|nested] 14+ messages in thread
Thread overview: 14 messages
2021-09-30 5:59 [bug report] kmemleak observed with blktests nvme-tcp tests Yi Zhang
2021-09-30 6:55 ` Chaitanya Kulkarni
2021-09-30 7:13 ` Yi Zhang
2021-09-30 7:55 ` Sagi Grimberg
2021-09-30 10:36 ` Yi Zhang
2021-09-30 13:01 ` Yi Zhang
2021-09-30 14:07 ` Sagi Grimberg
2021-10-01 0:27 ` Yi Zhang
2021-10-02 23:02 ` Sagi Grimberg
2021-10-12 18:35 ` Adam Manzanares
2021-10-13 9:34 ` Yi Zhang
2021-10-13 10:47 ` Sagi Grimberg
2021-10-13 14:40 ` Adam Manzanares
2021-09-30 9:31 ` Chaitanya Kulkarni