* [bug report] kmemleak observed from blktests on latest linux-block/for-next
@ 2022-06-12  7:23 Yi Zhang
  2022-06-13  3:29 ` Ming Lei
  0 siblings, 1 reply; 6+ messages in thread
From: Yi Zhang @ 2022-06-12  7:23 UTC (permalink / raw)
  To: linux-block, open list:NVM EXPRESS DRIVER

Hello,
I found the kmemleak report below on the latest linux-block/for-next[1];
please help check it, thanks.

[1]
75d6654eb3ab (origin/for-next) Merge branch 'for-5.19/block' into for-next


unreferenced object 0xffff88831d0fe800 (size 256):
  comm "check", pid 15430, jiffies 4306578361 (age 70450.608s)
  hex dump (first 32 bytes):
    a0 08 80 ab ff ff ff ff 00 80 76 a6 83 88 ff ff  ..........v.....
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<000000004bf8f45a>] blk_iolatency_init+0x4b/0x470
    [<00000000bdbef6c6>] blkcg_init_queue+0x122/0x4c0
    [<00000000549164e5>] __alloc_disk_node+0x23c/0x5b0
    [<0000000059f8cecc>] __blk_alloc_disk+0x31/0x60
    [<00000000a875060e>] nbd_config_put+0x6c1/0x7e0 [nbd]
    [<0000000086fab6c1>] nbd_start_device_ioctl+0x454/0x4a0 [nbd]
    [<000000009305a7c9>] configfs_write_iter+0x2b0/0x480
    [<0000000047e9815b>] new_sync_write+0x2ef/0x530
    [<0000000009113f79>] vfs_write+0x626/0x910
    [<00000000ef2d7042>] ksys_write+0xf9/0x1d0
    [<00000000ca06addd>] do_syscall_64+0x5c/0x80
    [<00000000e1ffe4b5>] entry_SYSCALL_64_after_hwframe+0x46/0xb0
unreferenced object 0xffff88818f43fe00 (size 256):
  comm "kworker/u32:13", pid 53617, jiffies 4370965500 (age 6066.292s)
  hex dump (first 32 bytes):
    a0 08 80 ab ff ff ff ff c0 62 c1 0d 81 88 ff ff  .........b......
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<000000004bf8f45a>] blk_iolatency_init+0x4b/0x470
    [<00000000bdbef6c6>] blkcg_init_queue+0x122/0x4c0
    [<00000000549164e5>] __alloc_disk_node+0x23c/0x5b0
    [<0000000059f8cecc>] __blk_alloc_disk+0x31/0x60
    [<0000000031ca7691>] nvme_mpath_alloc_disk+0x28a/0x8a0 [nvme_core]
    [<000000002038acbe>] nvme_alloc_ns_head+0x40c/0x740 [nvme_core]
    [<00000000e54cea22>] nvme_init_ns_head+0x4a3/0xa40 [nvme_core]
    [<000000007694f30a>] nvme_alloc_ns+0x3c7/0x1690 [nvme_core]
    [<0000000085ede1e2>] nvme_validate_or_alloc_ns+0x240/0x400 [nvme_core]
    [<000000001de40492>] nvme_scan_ns_list+0x20b/0x550 [nvme_core]
    [<00000000e799d365>] nvme_scan_work+0x2d2/0x760 [nvme_core]
    [<000000005b788977>] process_one_work+0x8d4/0x14d0
    [<00000000c452e193>] worker_thread+0x5ac/0xec0
    [<000000005065b8e4>] kthread+0x2a7/0x350
    [<00000000fe3dc1db>] ret_from_fork+0x22/0x30
unreferenced object 0xffff888720279c00 (size 256):
  comm "kworker/u32:2", pid 62305, jiffies 4370965926 (age 6065.866s)
  hex dump (first 32 bytes):
    a0 08 80 ab ff ff ff ff 58 8c b0 88 85 88 ff ff  ........X.......
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<000000004bf8f45a>] blk_iolatency_init+0x4b/0x470
    [<00000000bdbef6c6>] blkcg_init_queue+0x122/0x4c0
    [<00000000549164e5>] __alloc_disk_node+0x23c/0x5b0
    [<0000000059f8cecc>] __blk_alloc_disk+0x31/0x60
    [<0000000031ca7691>] nvme_mpath_alloc_disk+0x28a/0x8a0 [nvme_core]
    [<000000002038acbe>] nvme_alloc_ns_head+0x40c/0x740 [nvme_core]
    [<00000000e54cea22>] nvme_init_ns_head+0x4a3/0xa40 [nvme_core]
    [<000000007694f30a>] nvme_alloc_ns+0x3c7/0x1690 [nvme_core]
    [<0000000085ede1e2>] nvme_validate_or_alloc_ns+0x240/0x400 [nvme_core]
    [<000000001de40492>] nvme_scan_ns_list+0x20b/0x550 [nvme_core]
    [<00000000e799d365>] nvme_scan_work+0x2d2/0x760 [nvme_core]
    [<000000005b788977>] process_one_work+0x8d4/0x14d0
    [<00000000c452e193>] worker_thread+0x5ac/0xec0
    [<000000005065b8e4>] kthread+0x2a7/0x350
    [<00000000fe3dc1db>] ret_from_fork+0x22/0x30
unreferenced object 0xffff888163681c00 (size 256):
  comm "kworker/u32:13", pid 53617, jiffies 4370966347 (age 6065.585s)
  hex dump (first 32 bytes):
    a0 08 80 ab ff ff ff ff 60 31 b2 9c 82 88 ff ff  ........`1......
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<000000004bf8f45a>] blk_iolatency_init+0x4b/0x470
    [<00000000bdbef6c6>] blkcg_init_queue+0x122/0x4c0
    [<00000000549164e5>] __alloc_disk_node+0x23c/0x5b0
    [<0000000059f8cecc>] __blk_alloc_disk+0x31/0x60
    [<0000000031ca7691>] nvme_mpath_alloc_disk+0x28a/0x8a0 [nvme_core]
    [<000000002038acbe>] nvme_alloc_ns_head+0x40c/0x740 [nvme_core]
    [<00000000e54cea22>] nvme_init_ns_head+0x4a3/0xa40 [nvme_core]
    [<000000007694f30a>] nvme_alloc_ns+0x3c7/0x1690 [nvme_core]
    [<0000000085ede1e2>] nvme_validate_or_alloc_ns+0x240/0x400 [nvme_core]
    [<000000001de40492>] nvme_scan_ns_list+0x20b/0x550 [nvme_core]
    [<00000000e799d365>] nvme_scan_work+0x2d2/0x760 [nvme_core]
    [<000000005b788977>] process_one_work+0x8d4/0x14d0
    [<00000000c452e193>] worker_thread+0x5ac/0xec0
    [<000000005065b8e4>] kthread+0x2a7/0x350
    [<00000000fe3dc1db>] ret_from_fork+0x22/0x30
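For anyone trying to reproduce this: the reports above come from the kmemleak
debugfs interface, which can be polled on demand rather than waiting for the
periodic scan. A sketch of the usual workflow (assumes a kernel built with
CONFIG_DEBUG_KMEMLEAK and debugfs mounted at /sys/kernel/debug):

```shell
# Force an immediate scan instead of waiting for the periodic one.
echo scan > /sys/kernel/debug/kmemleak

# Dump the currently suspected leaks (the reports quoted in this mail).
cat /sys/kernel/debug/kmemleak

# Reset the suspect list so the next test run reports only new leaks.
echo clear > /sys/kernel/debug/kmemleak
```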
-- 
Best Regards,
  Yi Zhang


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [bug report] kmemleak observed from blktests on latest linux-block/for-next
  2022-06-12  7:23 [bug report] kmemleak observed from blktests on latest linux-block/for-next Yi Zhang
@ 2022-06-13  3:29 ` Ming Lei
  2022-06-13 13:08   ` Yi Zhang
  0 siblings, 1 reply; 6+ messages in thread
From: Ming Lei @ 2022-06-13  3:29 UTC (permalink / raw)
  To: Yi Zhang; +Cc: linux-block, open list:NVM EXPRESS DRIVER

On Sun, Jun 12, 2022 at 03:23:36PM +0800, Yi Zhang wrote:
> Hello,
> I found the kmemleak report below on the latest linux-block/for-next[1];
> please help check it, thanks.
> 
> [1]
> 75d6654eb3ab (origin/for-next) Merge branch 'for-5.19/block' into for-next

Hi Yi,

for-5.19/block should be stale, and I don't see this issue when running
blktests on v5.19-rc2.


Thanks,
Ming



* Re: [bug report] kmemleak observed from blktests on latest linux-block/for-next
  2022-06-13  3:29 ` Ming Lei
@ 2022-06-13 13:08   ` Yi Zhang
  2022-06-13 14:23     ` Ming Lei
  0 siblings, 1 reply; 6+ messages in thread
From: Yi Zhang @ 2022-06-13 13:08 UTC (permalink / raw)
  To: Ming Lei; +Cc: linux-block, open list:NVM EXPRESS DRIVER

Hi Ming

The kmemleak can also be reproduced on 5.19.0-rc2; please enable
nvme_core multipath and retest.

# cat /sys/module/nvme_core/parameters/multipath
Y
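For reference, one way to get that setting (a sketch; it assumes nvme_core is
built as a module and no NVMe devices are in use — if nvme_core is built in,
pass nvme_core.multipath=Y on the kernel command line instead):

```shell
# Reload the nvme stack with native multipath enabled.
modprobe -r nvme nvme_core
modprobe nvme_core multipath=Y
modprobe nvme

# Verify the parameter took effect; this should print "Y".
cat /sys/module/nvme_core/parameters/multipath
```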

[ 7924.010585] run blktests nvme/013 at 2022-06-13 08:22:46
[ 8184.561412] kmemleak: 6 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
[ 8325.411833] run blktests nvme/014 at 2022-06-13 08:29:27
[ 8346.540549] run blktests nvme/015 at 2022-06-13 08:29:48
[ 8370.369628] run blktests nvme/016 at 2022-06-13 08:30:12
[ 8415.649177] run blktests nvme/017 at 2022-06-13 08:30:57
[ 8752.325270] run blktests nvme/018 at 2022-06-13 08:36:34
[ 8755.591067] run blktests nvme/019 at 2022-06-13 08:36:37
[ 8758.549365] run blktests nvme/020 at 2022-06-13 08:36:40
[ 8761.768996] run blktests nvme/021 at 2022-06-13 08:36:43
[ 8765.001197] run blktests nvme/022 at 2022-06-13 08:36:47
[ 8768.416640] run blktests nvme/023 at 2022-06-13 08:36:50
[ 8771.390804] run blktests nvme/024 at 2022-06-13 08:36:53
[ 8774.600581] run blktests nvme/025 at 2022-06-13 08:36:56
[ 8777.796734] run blktests nvme/026 at 2022-06-13 08:36:59
[ 8780.996435] run blktests nvme/027 at 2022-06-13 08:37:03
[ 8784.198718] run blktests nvme/028 at 2022-06-13 08:37:06
[ 8787.433885] run blktests nvme/029 at 2022-06-13 08:37:09
[ 8791.489253] run blktests nvme/030 at 2022-06-13 08:37:13
[ 8794.647937] run blktests nvme/031 at 2022-06-13 08:37:16
[ 8830.344625] run blktests nvme/032 at 2022-06-13 08:37:52
[ 8836.335828] run blktests nvme/032 at 2022-06-13 08:37:58
[ 8838.544409] kmemleak: 1 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
[ 8843.460926] run blktests nvme/032 at 2022-06-13 08:38:05
[ 8854.927287] run blktests nvme/038 at 2022-06-13 08:38:17
[ 8856.009771] run blktests nvme/039 at 2022-06-13 08:38:18
[ 8857.073699] run blktests nvme/039 at 2022-06-13 08:38:19
[ 8858.186741] run blktests nvme/039 at 2022-06-13 08:38:20
[ 8859.671329] run blktests scsi/004 at 2022-06-13 08:38:21
[ 8861.887756] run blktests scsi/005 at 2022-06-13 08:38:23
[ 8868.325999] run blktests scsi/007 at 2022-06-13 08:38:30
[ 8879.486812] run blktests zbd/002 at 2022-06-13 08:38:41
[ 8880.266536] run blktests zbd/003 at 2022-06-13 08:38:42
[ 8881.237849] run blktests zbd/004 at 2022-06-13 08:38:43
[ 8883.809750] run blktests zbd/005 at 2022-06-13 08:38:45
[ 8885.738364] run blktests zbd/006 at 2022-06-13 08:38:47
[ 8887.247449] run blktests zbd/008 at 2022-06-13 08:38:49
[ 9480.025760] kmemleak: 22 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
[10337.012629] run blktests nvme/039 at 2022-06-13 09:02:59
[10337.808560] run blktests nvme/039 at 2022-06-13 09:02:59
[10338.729114] run blktests nvme/039 at 2022-06-13 09:03:00

On Mon, Jun 13, 2022 at 11:29 AM Ming Lei <ming.lei@redhat.com> wrote:
>
> On Sun, Jun 12, 2022 at 03:23:36PM +0800, Yi Zhang wrote:
> > Hello,
> > I found the kmemleak report below on the latest linux-block/for-next[1];
> > please help check it, thanks.
> >
> > [1]
> > 75d6654eb3ab (origin/for-next) Merge branch 'for-5.19/block' into for-next
>
> Hi Yi,
>
> for-5.19/block should be stale, and I don't see this issue when running
> blktests on v5.19-rc2.
>
>
> Thanks,
> Ming
>


-- 
Best Regards,
  Yi Zhang



* Re: [bug report] kmemleak observed from blktests on latest linux-block/for-next
  2022-06-13 13:08   ` Yi Zhang
@ 2022-06-13 14:23     ` Ming Lei
  2022-06-14  5:35       ` Yi Zhang
  0 siblings, 1 reply; 6+ messages in thread
From: Ming Lei @ 2022-06-13 14:23 UTC (permalink / raw)
  To: Yi Zhang; +Cc: linux-block, open list:NVM EXPRESS DRIVER

On Mon, Jun 13, 2022 at 09:08:11PM +0800, Yi Zhang wrote:
> Hi Ming
> 
> The kmemleak can also be reproduced on 5.19.0-rc2; please enable
> nvme_core multipath and retest.
> 
> # cat /sys/module/nvme_core/parameters/multipath
> Y
>

OK, I understand the reason now: rqos is only removed for blk-mq queues,
so the rqos allocated for a bio-based queue is leaked; see disk_release_mq().

The following patch should fix it:

diff --git a/block/genhd.c b/block/genhd.c
index 556d6e4b38d9..6e7ca8c302aa 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1120,9 +1120,10 @@ static const struct attribute_group *disk_attr_groups[] = {
 	NULL
 };
 
-static void disk_release_mq(struct request_queue *q)
+static void disk_release_queue(struct request_queue *q)
 {
-	blk_mq_cancel_work_sync(q);
+	if (queue_is_mq(q))
+		blk_mq_cancel_work_sync(q);
 
 	/*
 	 * There can't be any non non-passthrough bios in flight here, but
@@ -1166,8 +1167,7 @@ static void disk_release(struct device *dev)
 	might_sleep();
 	WARN_ON_ONCE(disk_live(disk));
 
-	if (queue_is_mq(disk->queue))
-		disk_release_mq(disk->queue);
+	disk_release_queue(disk->queue);
 
 	blkcg_exit_queue(disk->queue);
 

Thanks,
Ming



* Re: [bug report] kmemleak observed from blktests on latest linux-block/for-next
  2022-06-13 14:23     ` Ming Lei
@ 2022-06-14  5:35       ` Yi Zhang
  2022-06-26 14:16         ` Sagi Grimberg
  0 siblings, 1 reply; 6+ messages in thread
From: Yi Zhang @ 2022-06-14  5:35 UTC (permalink / raw)
  To: Ming Lei; +Cc: linux-block, open list:NVM EXPRESS DRIVER

On Mon, Jun 13, 2022 at 10:23 PM Ming Lei <ming.lei@redhat.com> wrote:
>
> On Mon, Jun 13, 2022 at 09:08:11PM +0800, Yi Zhang wrote:
> > Hi Ming
> >
> > The kmemleak can also be reproduced on 5.19.0-rc2; please enable
> > nvme_core multipath and retest.
> >
> > # cat /sys/module/nvme_core/parameters/multipath
> > Y
> >
>
> OK, I understand the reason now: rqos is only removed for blk-mq queues,
> so the rqos allocated for a bio-based queue is leaked; see disk_release_mq().
>
> The following patch should fix it:

Hi Ming
The kmemleak was fixed by this change; feel free to add

Tested-by: Yi Zhang <yi.zhang@redhat.com>

>
> diff --git a/block/genhd.c b/block/genhd.c
> index 556d6e4b38d9..6e7ca8c302aa 100644
> --- a/block/genhd.c
> +++ b/block/genhd.c
> @@ -1120,9 +1120,10 @@ static const struct attribute_group *disk_attr_groups[] = {
>         NULL
>  };
>
> -static void disk_release_mq(struct request_queue *q)
> +static void disk_release_queue(struct request_queue *q)
>  {
> -       blk_mq_cancel_work_sync(q);
> +       if (queue_is_mq(q))
> +               blk_mq_cancel_work_sync(q);
>
>         /*
>          * There can't be any non non-passthrough bios in flight here, but
> @@ -1166,8 +1167,7 @@ static void disk_release(struct device *dev)
>         might_sleep();
>         WARN_ON_ONCE(disk_live(disk));
>
> -       if (queue_is_mq(disk->queue))
> -               disk_release_mq(disk->queue);
> +       disk_release_queue(disk->queue);
>
>         blkcg_exit_queue(disk->queue);
>
>
> Thanks,
> Ming
>


-- 
Best Regards,
  Yi Zhang



* Re: [bug report] kmemleak observed from blktests on latest linux-block/for-next
  2022-06-14  5:35       ` Yi Zhang
@ 2022-06-26 14:16         ` Sagi Grimberg
  0 siblings, 0 replies; 6+ messages in thread
From: Sagi Grimberg @ 2022-06-26 14:16 UTC (permalink / raw)
  To: Yi Zhang, Ming Lei; +Cc: linux-block, open list:NVM EXPRESS DRIVER


>>> Hi Ming
>>>
>>> The kmemleak can also be reproduced on 5.19.0-rc2; please enable
>>> nvme_core multipath and retest.
>>>
>>> # cat /sys/module/nvme_core/parameters/multipath
>>> Y
>>>
>>
>> OK, I understand the reason now: rqos is only removed for blk-mq queues,
>> so the rqos allocated for a bio-based queue is leaked; see disk_release_mq().
>>
>> The following patch should fix it:
> 
> Hi Ming
> The kmemleak was fixed by this change; feel free to add
> 
> Tested-by: Yi Zhang <yi.zhang@redhat.com>

I just tripped on this myself...
You can add,

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


end of thread, other threads:[~2022-06-26 14:16 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-06-12  7:23 [bug report] kmemleak observed from blktests on latest linux-block/for-next Yi Zhang
2022-06-13  3:29 ` Ming Lei
2022-06-13 13:08   ` Yi Zhang
2022-06-13 14:23     ` Ming Lei
2022-06-14  5:35       ` Yi Zhang
2022-06-26 14:16         ` Sagi Grimberg
