linux-kernel.vger.kernel.org archive mirror
* Possible regression in drm/i915 driver: memleak
@ 2022-12-20 14:33 Mirsad Todorovac
  2022-12-20 15:22 ` srinivas pandruvada
  0 siblings, 1 reply; 14+ messages in thread
From: Mirsad Todorovac @ 2022-12-20 14:33 UTC (permalink / raw)
  To: LKML; +Cc: Thorsten Leemhuis, srinivas pandruvada

[-- Attachment #1: Type: text/plain, Size: 2050 bytes --]

Hi all,

I have been unable to find the email addresses of any particular Intel
i915 maintainers, so my best bet is to post here, as you most assuredly
already know them.

The problem is a kernel memory leak that is repeatedly triggered while
running the Chrome browser under the latest 6.1.0+ kernel, built this
morning, on AlmaLinux 8.6, on a Lenovo desktop box with an
Intel(R) Core(TM) i5-8400 CPU @ 2.80GHz.

The kernel is a vanilla mainline build from Mr. Torvalds' tree, with
KMEMLEAK, KASAN and MGLRU enabled.
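
For reference, the reports below were collected through the standard
kmemleak debugfs interface (nothing i915-specific about the procedure):

   # echo scan > /sys/kernel/debug/kmemleak
   # cat /sys/kernel/debug/kmemleak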

The leaks look like this one:

unreferenced object 0xffff888131754880 (size 64):
   comm "chrome", pid 13058, jiffies 4298568878 (age 3708.084s)
   hex dump (first 32 bytes):
     01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
     00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
   backtrace:
     [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
     [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
     [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
     [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
     [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
     [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
     [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
     [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
     [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
     [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
     [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc

The complete list of leaks is in the attachment; they all look similar, if not identical.

Please find the lshw output and the kernel build config file attached.

I will probably check the same parameters on my laptop at home, which is
also a Lenovo, but with a different hardware configuration and Ubuntu 22.10.

Thanks,
Mirsad

-- 
Mirsad Goran Todorovac
Sistem inženjer
Grafički fakultet | Akademija likovnih umjetnosti
Sveučilište u Zagrebu
-- 
System engineer
Faculty of Graphic Arts | Academy of Fine Arts
University of Zagreb, Republic of Croatia

[-- Attachment #2: 6.1.0+-kmemleak-chrome-i915-drm.log --]
[-- Type: text/x-log, Size: 30198 bytes --]

unreferenced object 0xffff888106801200 (size 16):
  comm "kworker/u12:6", pid 353, jiffies 4294902874 (age 15169.560s)
  hex dump (first 16 bytes):
    6d 65 6d 73 74 69 63 6b 30 00 00 00 00 00 00 00  memstick0.......
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f8175>] __kmalloc_node_track_caller+0x55/0x160
    [<ffffffff9e8e34a6>] kstrdup+0x36/0x60
    [<ffffffff9e8e3508>] kstrdup_const+0x28/0x30
    [<ffffffff9eed0757>] kvasprintf_const+0x97/0xd0
    [<ffffffff9fa9cdf4>] kobject_set_name_vargs+0x34/0xc0
    [<ffffffff9f30289b>] dev_set_name+0x9b/0xd0
    [<ffffffffc11a1201>] memstick_check+0x181/0x639 [memstick]
    [<ffffffff9e56e1d6>] process_one_work+0x4e6/0x7e0
    [<ffffffff9e56e556>] worker_thread+0x76/0x770
    [<ffffffff9e57b468>] kthread+0x168/0x1a0
    [<ffffffff9e404c99>] ret_from_fork+0x29/0x50
unreferenced object 0xffff888106801480 (size 16):
  comm "kworker/u12:6", pid 353, jiffies 4294902879 (age 15169.540s)
  hex dump (first 16 bytes):
    6d 65 6d 73 74 69 63 6b 30 00 00 00 00 00 00 00  memstick0.......
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f8175>] __kmalloc_node_track_caller+0x55/0x160
    [<ffffffff9e8e34a6>] kstrdup+0x36/0x60
    [<ffffffff9e8e3508>] kstrdup_const+0x28/0x30
    [<ffffffff9eed0757>] kvasprintf_const+0x97/0xd0
    [<ffffffff9fa9cdf4>] kobject_set_name_vargs+0x34/0xc0
    [<ffffffff9f30289b>] dev_set_name+0x9b/0xd0
    [<ffffffffc11a1201>] memstick_check+0x181/0x639 [memstick]
    [<ffffffff9e56e1d6>] process_one_work+0x4e6/0x7e0
    [<ffffffff9e56e556>] worker_thread+0x76/0x770
    [<ffffffff9e57b468>] kthread+0x168/0x1a0
    [<ffffffff9e404c99>] ret_from_fork+0x29/0x50
unreferenced object 0xffff888125a4b080 (size 64):
  comm "chrome", pid 13058, jiffies 4296358343 (age 9347.816s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff888110e67b80 (size 64):
  comm "chrome", pid 13058, jiffies 4296360724 (age 9338.296s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff88812c4aa880 (size 64):
  comm "chrome", pid 13058, jiffies 4296363186 (age 9328.504s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff8882cf8e6980 (size 64):
  comm "chrome", pid 13058, jiffies 4296364091 (age 9324.888s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff888348679780 (size 64):
  comm "chrome", pid 13058, jiffies 4296364716 (age 9322.388s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff8881296f0980 (size 64):
  comm "chrome", pid 13058, jiffies 4296364730 (age 9322.336s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff888184070e00 (size 64):
  comm "chrome", pid 13058, jiffies 4296369494 (age 9303.340s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff888118adde80 (size 64):
  comm "chrome", pid 13058, jiffies 4296369636 (age 9302.772s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff888112a4f100 (size 64):
  comm "chrome", pid 13058, jiffies 4296369649 (age 9302.720s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff88826d5af280 (size 64):
  comm "chrome", pid 13058, jiffies 4296403460 (age 9167.480s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff88819c897580 (size 64):
  comm "chrome", pid 13058, jiffies 4296405918 (age 9157.708s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff888158992e00 (size 64):
  comm "chrome", pid 13058, jiffies 4296413631 (age 9126.860s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff8881aa860c00 (size 64):
  comm "chrome", pid 13058, jiffies 4296489224 (age 8824.488s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff88815e52c380 (size 64):
  comm "chrome", pid 13058, jiffies 4296496255 (age 8796.368s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff88801024fb80 (size 64):
  comm "chrome", pid 13058, jiffies 4296497014 (age 8793.392s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff8881309b4a80 (size 64):
  comm "chrome", pid 13058, jiffies 4296506150 (age 8756.848s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff888003918600 (size 64):
  comm "chrome", pid 13058, jiffies 4296508245 (age 8748.472s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff88833a40f680 (size 64):
  comm "chrome", pid 13058, jiffies 4296535101 (age 8641.048s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff888339826f00 (size 64):
  comm "chrome", pid 13058, jiffies 4297560865 (age 4538.084s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff888339826080 (size 64):
  comm "chrome", pid 13058, jiffies 4297560865 (age 4538.084s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff888129064400 (size 64):
  comm "chrome", pid 13058, jiffies 4297582297 (age 4452.360s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff8881120e0a00 (size 64):
  comm "chrome", pid 13058, jiffies 4297818388 (age 3508.020s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff88828d467080 (size 64):
  comm "chrome", pid 13058, jiffies 4297864465 (age 3323.796s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff8880024bcd80 (size 64):
  comm "chrome", pid 13058, jiffies 4297869778 (age 3302.544s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff88825f3dd500 (size 64):
  comm "chrome", pid 13058, jiffies 4297887950 (age 3229.860s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff888134770980 (size 64):
  comm "chrome", pid 13058, jiffies 4297899648 (age 3183.068s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff88807d624d00 (size 64):
  comm "chrome", pid 13058, jiffies 4297989777 (age 2822.632s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff888103ce3b00 (size 64):
  comm "chrome", pid 13058, jiffies 4298005178 (age 2761.032s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff888129650e00 (size 64):
  comm "chrome", pid 13058, jiffies 4298008798 (age 2746.552s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff88828124a780 (size 64):
  comm "chrome", pid 13058, jiffies 4298022534 (age 2691.608s)
  hex dump (first 32 bytes):
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
  backtrace:
    [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
    [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
    [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
    [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
    [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
    [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
    [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
    [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
    [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
    [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
    [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc

[-- Attachment #3: lshw.txt.xz --]
[-- Type: application/x-xz, Size: 4628 bytes --]

[-- Attachment #4: config-6.1.0+.xz --]
[-- Type: application/x-xz, Size: 57168 bytes --]


* Re: Possible regression in drm/i915 driver: memleak
  2022-12-20 14:33 Possible regression in drm/i915 driver: memleak Mirsad Todorovac
@ 2022-12-20 15:22 ` srinivas pandruvada
  2022-12-20 15:52   ` Tvrtko Ursulin
  0 siblings, 1 reply; 14+ messages in thread
From: srinivas pandruvada @ 2022-12-20 15:22 UTC (permalink / raw)
  To: Mirsad Todorovac, LKML, jani.nikula, joonas.lahtinen,
	Rodrigo Vivi, tvrtko.ursulin
  Cc: Thorsten Leemhuis, intel-gfx

+Added DRM mailing list and maintainers

On Tue, 2022-12-20 at 15:33 +0100, Mirsad Todorovac wrote:
> Hi all,
> 
> I have been unable to find the email addresses of any particular Intel
> i915 maintainers, so my best bet is to post here, as you most assuredly
> already know them.
> 
> The problem is a kernel memory leak that is repeatedly triggered while
> running the Chrome browser under the latest 6.1.0+ kernel, built this
> morning, on AlmaLinux 8.6, on a Lenovo desktop box with an
> Intel(R) Core(TM) i5-8400 CPU @ 2.80GHz.
> 
> The kernel is a vanilla mainline build from Mr. Torvalds' tree, with
> KMEMLEAK, KASAN and MGLRU enabled.
> 
> The leaks look like this one:
> 
> unreferenced object 0xffff888131754880 (size 64):
>    comm "chrome", pid 13058, jiffies 4298568878 (age 3708.084s)
>    hex dump (first 32 bytes):
>      01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
> ................
>      00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff 
> ...........>....
>    backtrace:
>      [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
>      [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
>      [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
>      [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
>      [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820
> [i915]
>      [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110
> [i915]
>      [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
>      [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
>      [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
>      [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
>      [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
> 
> The complete list of leaks is in the attachment; they all look similar,
> if not identical.
> 
> Please find the lshw output and the kernel build config file attached.
> 
> I will probably check the same parameters on my laptop at home, which is
> also a Lenovo, but with a different hardware configuration and Ubuntu 22.10.
> 
> Thanks,
> Mirsad
> 
> -- 
> Mirsad Goran Todorovac
> Sistem inženjer
> Grafički fakultet | Akademija likovnih umjetnosti
> Sveučilište u Zagrebu



* Re: Possible regression in drm/i915 driver: memleak
  2022-12-20 15:22 ` srinivas pandruvada
@ 2022-12-20 15:52   ` Tvrtko Ursulin
  2022-12-20 17:20     ` Mirsad Goran Todorovac
  2022-12-20 19:34     ` LOOKS GOOD: " Mirsad Todorovac
  0 siblings, 2 replies; 14+ messages in thread
From: Tvrtko Ursulin @ 2022-12-20 15:52 UTC (permalink / raw)
  To: srinivas pandruvada, Mirsad Todorovac, LKML, jani.nikula,
	joonas.lahtinen, Rodrigo Vivi
  Cc: Thorsten Leemhuis, intel-gfx


Hi,

On 20/12/2022 15:22, srinivas pandruvada wrote:
> +Added DRM mailing list and maintainers
> 
> On Tue, 2022-12-20 at 15:33 +0100, Mirsad Todorovac wrote:
>> Hi all,
>>
>> I have been unable to find the email addresses of any particular Intel
>> i915 maintainers, so my best bet is to post here, as you most assuredly
>> already know them.

For future reference you can use ${kernel_dir}/scripts/get_maintainer.pl -f ...
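
For example, running it against the file touched by the patch below prints
the maintainers and mailing lists to Cc:

   $ ./scripts/get_maintainer.pl -f drivers/gpu/drm/i915/gem/i915_gem_mman.c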

>> The problem is a kernel memory leak that is repeatedly triggered while
>> running the Chrome browser under the latest 6.1.0+ kernel, built this
>> morning, on AlmaLinux 8.6, on a Lenovo desktop box with an
>> Intel(R) Core(TM) i5-8400 CPU @ 2.80GHz.
>>
>> The kernel is a vanilla mainline build from Mr. Torvalds' tree, with
>> KMEMLEAK, KASAN and MGLRU enabled.
>>
>> The leaks look like this one:
>>
>> unreferenced object 0xffff888131754880 (size 64):
>>     comm "chrome", pid 13058, jiffies 4298568878 (age 3708.084s)
>>     hex dump (first 32 bytes):
>>       01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>> ................
>>       00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff
>> ...........>....
>>     backtrace:
>>       [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
>>       [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
>>       [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
>>       [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
>>       [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820
>> [i915]
>>       [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110
>> [i915]
>>       [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
>>       [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
>>       [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
>>       [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
>>       [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
>>
>> The complete list of leaks is in the attachment; they all look similar,
>> if not identical.
>>
>> Please find the lshw output and the kernel build config file attached.
>>
>> I will probably check the same parameters on my laptop at home, which is
>> also a Lenovo, but with a different hardware configuration and Ubuntu 22.10.

Could you try the below patch?

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
index c3ea243d414d..0b07534c203a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
@@ -679,9 +679,10 @@ mmap_offset_attach(struct drm_i915_gem_object *obj,
  insert:
         mmo = insert_mmo(obj, mmo);
         GEM_BUG_ON(lookup_mmo(obj, mmap_type) != mmo);
-out:
+
         if (file)
                 drm_vma_node_allow(&mmo->vma_node, file);
+out:
         return mmo;

  err:

Maybe it is not the best fix but curious to know if it will make the leak go away.
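
For background, my working theory (not verified yet) is that the kmalloc in
the backtrace is the per-file entry created by drm_vma_node_allow(). Since
mmap_offset_attach() currently reaches that call even when lookup_mmo() finds
an existing offset, every repeated ioctl on the same file adds a reference
that the single revoke on close never balances, so the entry is never freed.
Roughly (paraphrased, not the exact source):

	mmo = lookup_mmo(obj, mmap_type);
	if (mmo)
		goto out;	/* existing offset: jumps straight to the allow call */
	...
insert:
	mmo = insert_mmo(obj, mmo);
out:
	if (file)
		drm_vma_node_allow(&mmo->vma_node, file);	/* allocation reported by kmemleak */
	return mmo;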

Regards,

Tvrtko


* Re: Possible regression in drm/i915 driver: memleak
  2022-12-20 15:52   ` Tvrtko Ursulin
@ 2022-12-20 17:20     ` Mirsad Goran Todorovac
  2022-12-20 19:34     ` LOOKS GOOD: " Mirsad Todorovac
  1 sibling, 0 replies; 14+ messages in thread
From: Mirsad Goran Todorovac @ 2022-12-20 17:20 UTC (permalink / raw)
  To: Tvrtko Ursulin, srinivas pandruvada, LKML, jani.nikula,
	joonas.lahtinen, Rodrigo Vivi
  Cc: Thorsten Leemhuis, intel-gfx

On 20. 12. 2022. 16:52, Tvrtko Ursulin wrote:

> On 20/12/2022 15:22, srinivas pandruvada wrote:
>> +Added DRM mailing list and maintainers
>>
>> On Tue, 2022-12-20 at 15:33 +0100, Mirsad Todorovac wrote:
>>> Hi all,
>>>
>>> I have been unable to find the email addresses of any particular Intel
>>> i915 maintainers, so my best bet is to post here, as you most assuredly
>>> already know them.
> 
> For future reference you can use ${kernel_dir}/scripts/get_maintainer.pl -f ...

Thank you, this will help a great deal provided that I find any
more bugs ...

>>> The problem is a kernel memory leak that is repeatedly triggered while
>>> running the Chrome browser under the latest 6.1.0+ kernel, built this
>>> morning, on AlmaLinux 8.6, on a Lenovo desktop box with an
>>> Intel(R) Core(TM) i5-8400 CPU @ 2.80GHz.
>>>
>>> The kernel is a vanilla mainline build from Mr. Torvalds' tree, with
>>> KMEMLEAK, KASAN and MGLRU enabled.
>>>
>>> The leaks look like this one:
>>>
>>> unreferenced object 0xffff888131754880 (size 64):
>>>     comm "chrome", pid 13058, jiffies 4298568878 (age 3708.084s)
>>>     hex dump (first 32 bytes):
>>>       01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>> ................
>>>       00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff
>>> ...........>....
>>>     backtrace:
>>>       [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
>>>       [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
>>>       [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
>>>       [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
>>>       [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820
>>> [i915]
>>>       [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110
>>> [i915]
>>>       [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
>>>       [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
>>>       [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
>>>       [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
>>>       [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
>>>
>>> The complete list of leaks is in the attachment; they all look similar,
>>> if not identical.
>>>
>>> Please find the lshw output and the kernel build config file attached.
>>>
>>> I will probably check the same parameters on my laptop at home, which is
>>> also a Lenovo, but with a different hardware configuration and Ubuntu 22.10.
> 
> Could you try the below patch?
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
> index c3ea243d414d..0b07534c203a 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
> @@ -679,9 +679,10 @@ mmap_offset_attach(struct drm_i915_gem_object *obj,
>   insert:
>          mmo = insert_mmo(obj, mmo);
>          GEM_BUG_ON(lookup_mmo(obj, mmap_type) != mmo);
> -out:
> +
>          if (file)
>                  drm_vma_node_allow(&mmo->vma_node, file);
> +out:
>          return mmo;
> 
>   err:
> 
> Maybe it is not the best fix but curious to know if it will make the leak go away.

The patch applied successfully to Mr. Torvalds' latest tree (commit b6bb9676f216).

It is currently building, which can take up to 90 minutes on our system.

Now the test depends on whether I will be able to set up the machine at work remotely
(some firewalls were put on port 22 recently).

I will keep you updated.

Thanks,
Mirsad

--
Mirsad Goran Todorovac
Sistem inženjer
Grafički fakultet | Akademija likovnih umjetnosti
Sveučilište u Zagrebu
-- 
System engineer
Faculty of Graphic Arts | Academy of Fine Arts
University of Zagreb, Republic of Croatia
The European Union



* Re: LOOKS GOOD: Possible regression in drm/i915 driver: memleak
  2022-12-20 15:52   ` Tvrtko Ursulin
  2022-12-20 17:20     ` Mirsad Goran Todorovac
@ 2022-12-20 19:34     ` Mirsad Todorovac
  2022-12-21  7:15       ` Mirsad Goran Todorovac
  2022-12-22  0:12       ` LOOKS GOOD: " Mirsad Goran Todorovac
  1 sibling, 2 replies; 14+ messages in thread
From: Mirsad Todorovac @ 2022-12-20 19:34 UTC (permalink / raw)
  To: Tvrtko Ursulin, srinivas pandruvada, LKML, jani.nikula,
	joonas.lahtinen, Rodrigo Vivi
  Cc: Thorsten Leemhuis, intel-gfx

On 12/20/22 16:52, Tvrtko Ursulin wrote:

> On 20/12/2022 15:22, srinivas pandruvada wrote:
>> +Added DRM mailing list and maintainers
>>
>> On Tue, 2022-12-20 at 15:33 +0100, Mirsad Todorovac wrote:
>>> Hi all,
>>>
>>> I have been unable to find the email addresses of any particular Intel
>>> i915 maintainers, so my best bet is to post here, as you most assuredly
>>> already know them.
> 
> For future reference you can use ${kernel_dir}/scripts/get_maintainer.pl 
> -f ...
> 
>>> The problem is a kernel memory leak that is repeatedly triggered while
>>> running the Chrome browser under the latest 6.1.0+ kernel, built this
>>> morning, on AlmaLinux 8.6, on a Lenovo desktop box with an
>>> Intel(R) Core(TM) i5-8400 CPU @ 2.80GHz.
>>>
>>> The kernel is a vanilla mainline build from Mr. Torvalds' tree, with
>>> KMEMLEAK, KASAN and MGLRU enabled.
>>>
>>> The leaks look like this one:
>>>
>>> unreferenced object 0xffff888131754880 (size 64):
>>>     comm "chrome", pid 13058, jiffies 4298568878 (age 3708.084s)
>>>     hex dump (first 32 bytes):
>>>       01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>> ................
>>>       00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff
>>> ...........>....
>>>     backtrace:
>>>       [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
>>>       [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
>>>       [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
>>>       [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
>>>       [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820
>>> [i915]
>>>       [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110
>>> [i915]
>>>       [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
>>>       [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
>>>       [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
>>>       [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
>>>       [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
>>>
>>> The complete list of leaks is in the attachment; they all look similar,
>>> if not identical.
>>>
>>> Please find the lshw output and the kernel build config file attached.
>>>
>>> I will probably check the same parameters on my laptop at home, which is
>>> also a Lenovo, but with a different hardware configuration and Ubuntu 22.10.
> 
> Could you try the below patch?
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c 
> b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
> index c3ea243d414d..0b07534c203a 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
> @@ -679,9 +679,10 @@ mmap_offset_attach(struct drm_i915_gem_object *obj,
>   insert:
>          mmo = insert_mmo(obj, mmo);
>          GEM_BUG_ON(lookup_mmo(obj, mmap_type) != mmo);
> -out:
> +
>          if (file)
>                  drm_vma_node_allow(&mmo->vma_node, file);
> +out:
>          return mmo;
> 
>   err:
> 
> Maybe it is not the best fix but curious to know if it will make the 
> leak go away.

Hi,

After 27 minutes of uptime with the patched kernel, things look promising.
That is much longer than it took the buggy kernel to start leaking slabs.

Here is the output:

[root@pc-mtodorov marvin]# echo scan > /sys/kernel/debug/kmemleak
[root@pc-mtodorov marvin]# cat !$
cat /sys/kernel/debug/kmemleak
unreferenced object 0xffff888105028d80 (size 16):
   comm "kworker/u12:5", pid 359, jiffies 4294902898 (age 1620.144s)
   hex dump (first 16 bytes):
     6d 65 6d 73 74 69 63 6b 30 00 00 00 00 00 00 00  memstick0.......
   backtrace:
     [<ffffffffb6bb5542>] slab_post_alloc_hook+0xb2/0x340
     [<ffffffffb6bbbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
     [<ffffffffb6af8175>] __kmalloc_node_track_caller+0x55/0x160
     [<ffffffffb6ae34a6>] kstrdup+0x36/0x60
     [<ffffffffb6ae3508>] kstrdup_const+0x28/0x30
     [<ffffffffb70d0757>] kvasprintf_const+0x97/0xd0
     [<ffffffffb7c9cdf4>] kobject_set_name_vargs+0x34/0xc0
     [<ffffffffb750289b>] dev_set_name+0x9b/0xd0
     [<ffffffffc12d9201>] memstick_check+0x181/0x639 [memstick]
     [<ffffffffb676e1d6>] process_one_work+0x4e6/0x7e0
     [<ffffffffb676e556>] worker_thread+0x76/0x770
     [<ffffffffb677b468>] kthread+0x168/0x1a0
     [<ffffffffb6604c99>] ret_from_fork+0x29/0x50
[root@pc-mtodorov marvin]# w
  20:27:35 up 27 min,  2 users,  load average: 0.83, 1.15, 1.19
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
marvin   tty2     tty2             20:01   27:10  10:12   2.09s 
/opt/google/chrome/chrome --type=utility --utility-sub-type=audio.m
marvin   pts/1    -                20:01    0.00s  2:00   0.38s sudo bash
[root@pc-mtodorov marvin]# uname -rms
Linux 6.1.0-b6bb9676f216-mglru-kmemlk-kasan+ x86_64
[root@pc-mtodorov marvin]#

2. On Ubuntu 22.10 with the Debian build I have not reproduced the error
thus far.

This looks fixed to me, but if nothing has leaked by Thursday morning,
when I will next see this desktop box, we'll know with more certainty.

Hope this helps. (My $0.02.)

Kudos for the quick fix :)

Kind regards,
Mirsad

-- 
Mirsad Goran Todorovac
Sistem inženjer
Grafički fakultet | Akademija likovnih umjetnosti
Sveučilište u Zagrebu
-- 
System engineer
Faculty of Graphic Arts | Academy of Fine Arts
University of Zagreb, Republic of Croatia


* Re: Possible regression in drm/i915 driver: memleak
  2022-12-20 19:34     ` LOOKS GOOD: " Mirsad Todorovac
@ 2022-12-21  7:15       ` Mirsad Goran Todorovac
  2022-12-22  0:12       ` LOOKS GOOD: " Mirsad Goran Todorovac
  1 sibling, 0 replies; 14+ messages in thread
From: Mirsad Goran Todorovac @ 2022-12-21  7:15 UTC (permalink / raw)
  To: Tvrtko Ursulin, srinivas pandruvada, LKML, jani.nikula,
	joonas.lahtinen, Rodrigo Vivi
  Cc: Thorsten Leemhuis, intel-gfx

On 20.12.2022. 20:34, Mirsad Todorovac wrote:
> On 12/20/22 16:52, Tvrtko Ursulin wrote:
>
>> On 20/12/2022 15:22, srinivas pandruvada wrote:
>>> +Added DRM mailing list and maintainers
>>>
>>> On Tue, 2022-12-20 at 15:33 +0100, Mirsad Todorovac wrote:
>>>> Hi all,
>>>>
>>>> I have been unable to find the email addresses of any particular Intel
>>>> i915 maintainers, so my best bet is to post here, as you most assuredly
>>>> already know them.
>>
>> For future reference you can use 
>> ${kernel_dir}/scripts/get_maintainer.pl -f ...
>>
>>>> The problem is a kernel memory leak that is repeatedly triggered while
>>>> running the Chrome browser under the latest 6.1.0+ kernel, built this
>>>> morning, on AlmaLinux 8.6, on a Lenovo desktop box with an
>>>> Intel(R) Core(TM) i5-8400 CPU @ 2.80GHz.
>>>>
>>>> The kernel is a vanilla mainline build from Mr. Torvalds' tree, with
>>>> KMEMLEAK, KASAN and MGLRU enabled.
>>>>
>>>> The leaks look like this one:
>>>>
>>>> unreferenced object 0xffff888131754880 (size 64):
>>>>     comm "chrome", pid 13058, jiffies 4298568878 (age 3708.084s)
>>>>     hex dump (first 32 bytes):
>>>>       01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>>> ................
>>>>       00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff
>>>> ...........>....
>>>>     backtrace:
>>>>       [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
>>>>       [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
>>>>       [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
>>>>       [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
>>>>       [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820
>>>> [i915]
>>>>       [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110
>>>> [i915]
>>>>       [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
>>>>       [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
>>>>       [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
>>>>       [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
>>>>       [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
>>>>
>>>> The complete list of leaks is in the attachment; they all look similar,
>>>> if not identical.
>>>>
>>>> Please find the lshw output and the kernel build config file attached.
>>>>
>>>> I will probably check the same parameters on my laptop at home, which is
>>>> also a Lenovo, but with a different hardware configuration and Ubuntu 22.10.
>>
>> Could you try the below patch?
>>
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c 
>> b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
>> index c3ea243d414d..0b07534c203a 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
>> @@ -679,9 +679,10 @@ mmap_offset_attach(struct drm_i915_gem_object *obj,
>>   insert:
>>          mmo = insert_mmo(obj, mmo);
>>          GEM_BUG_ON(lookup_mmo(obj, mmap_type) != mmo);
>> -out:
>> +
>>          if (file)
>>                  drm_vma_node_allow(&mmo->vma_node, file);
>> +out:
>>          return mmo;
>>
>>   err:
>>
>> Maybe it is not the best fix but curious to know if it will make the 
>> leak go away.
>
> Hi,
>
> After 27 minutes of uptime with the patched kernel, things look promising.
> That is much longer than it took the buggy kernel to start leaking slabs.
>
> Here is the output:
>
> [root@pc-mtodorov marvin]# echo scan > /sys/kernel/debug/kmemleak
> [root@pc-mtodorov marvin]# cat !$
> cat /sys/kernel/debug/kmemleak
> unreferenced object 0xffff888105028d80 (size 16):
>   comm "kworker/u12:5", pid 359, jiffies 4294902898 (age 1620.144s)
>   hex dump (first 16 bytes):
>     6d 65 6d 73 74 69 63 6b 30 00 00 00 00 00 00 00 memstick0.......
>   backtrace:
>     [<ffffffffb6bb5542>] slab_post_alloc_hook+0xb2/0x340
>     [<ffffffffb6bbbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
>     [<ffffffffb6af8175>] __kmalloc_node_track_caller+0x55/0x160
>     [<ffffffffb6ae34a6>] kstrdup+0x36/0x60
>     [<ffffffffb6ae3508>] kstrdup_const+0x28/0x30
>     [<ffffffffb70d0757>] kvasprintf_const+0x97/0xd0
>     [<ffffffffb7c9cdf4>] kobject_set_name_vargs+0x34/0xc0
>     [<ffffffffb750289b>] dev_set_name+0x9b/0xd0
>     [<ffffffffc12d9201>] memstick_check+0x181/0x639 [memstick]
>     [<ffffffffb676e1d6>] process_one_work+0x4e6/0x7e0
>     [<ffffffffb676e556>] worker_thread+0x76/0x770
>     [<ffffffffb677b468>] kthread+0x168/0x1a0
>     [<ffffffffb6604c99>] ret_from_fork+0x29/0x50
> [root@pc-mtodorov marvin]# w
>  20:27:35 up 27 min,  2 users,  load average: 0.83, 1.15, 1.19
> USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
> marvin   tty2     tty2             20:01   27:10  10:12   2.09s 
> /opt/google/chrome/chrome --type=utility --utility-sub-type=audio.m
> marvin   pts/1    -                20:01    0.00s  2:00   0.38s sudo bash
> [root@pc-mtodorov marvin]# uname -rms
> Linux 6.1.0-b6bb9676f216-mglru-kmemlk-kasan+ x86_64
> [root@pc-mtodorov marvin]#
>
> 2. On Ubuntu 22.10 with the Debian build I have not reproduced the error
> thus far.
>
> This looks fixed to me, but if nothing has leaked by Thursday morning,
> when I will next see this desktop box, we'll know with more certainty.

After an inspection this morning (local time), at 12:10 h of uptime, it
appears that the problem is fixed: no Chrome-triggered
i915_gem_mmap_offset_ioctl leaks.

By the same uptime, the unpatched kernel had produced about 30 instances
of the leak.

Congratulations!

Kind regards,
Mirsad

-- 
Mirsad Todorovac
System engineer
Faculty of Graphic Arts | Academy of Fine Arts
University of Zagreb
Republic of Croatia, the European Union
--
Sistem inženjer
Grafički fakultet | Akademija likovnih umjetnosti
Sveučilište u Zagrebu



* Re: LOOKS GOOD: Possible regression in drm/i915 driver: memleak
  2022-12-20 19:34     ` LOOKS GOOD: " Mirsad Todorovac
  2022-12-21  7:15       ` Mirsad Goran Todorovac
@ 2022-12-22  0:12       ` Mirsad Goran Todorovac
  2022-12-22  8:04         ` Tvrtko Ursulin
  1 sibling, 1 reply; 14+ messages in thread
From: Mirsad Goran Todorovac @ 2022-12-22  0:12 UTC (permalink / raw)
  To: Tvrtko Ursulin, srinivas pandruvada, LKML, jani.nikula,
	joonas.lahtinen, Rodrigo Vivi
  Cc: Thorsten Leemhuis, intel-gfx

On 20. 12. 2022. 20:34, Mirsad Todorovac wrote:
> On 12/20/22 16:52, Tvrtko Ursulin wrote:
> 
>> On 20/12/2022 15:22, srinivas pandruvada wrote:
>>> +Added DRM mailing list and maintainers
>>>
>>> On Tue, 2022-12-20 at 15:33 +0100, Mirsad Todorovac wrote:
>>>> Hi all,
>>>>
>>>> I have been unable to find the email addresses of any particular Intel
>>>> i915 maintainers, so my best bet is to post here, as you most assuredly
>>>> already know them.
>>
>> For future reference you can use ${kernel_dir}/scripts/get_maintainer.pl -f ...
>>
>>>> The problem is a kernel memory leak that is repeatedly triggered while
>>>> running the Chrome browser under the latest 6.1.0+ kernel, built this
>>>> morning, on AlmaLinux 8.6, on a Lenovo desktop box with an
>>>> Intel(R) Core(TM) i5-8400 CPU @ 2.80GHz.
>>>>
>>>> The kernel is a vanilla mainline build from Mr. Torvalds' tree, with
>>>> KMEMLEAK, KASAN and MGLRU enabled.
>>>>
>>>> The leaks look like this one:
>>>>
>>>> unreferenced object 0xffff888131754880 (size 64):
>>>>     comm "chrome", pid 13058, jiffies 4298568878 (age 3708.084s)
>>>>     hex dump (first 32 bytes):
>>>>       01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>>> ................
>>>>       00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff
>>>> ...........>....
>>>>     backtrace:
>>>>       [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
>>>>       [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
>>>>       [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
>>>>       [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
>>>>       [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820
>>>> [i915]
>>>>       [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110
>>>> [i915]
>>>>       [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
>>>>       [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
>>>>       [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
>>>>       [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
>>>>       [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
>>>>
>>>> The complete list of leaks in attachment, but they seem similar or
>>>> the same.
>>>>
>>>> Please find attached lshw and kernel build config file.
>>>>
>>>> I will probably check the same parms on my laptop at home, which is
>>>> also
>>>> Lenovo, but a different hw config and Ubuntu 22.10.
>>
>> Could you try the below patch?
>>
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
>> index c3ea243d414d..0b07534c203a 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
>> @@ -679,9 +679,10 @@ mmap_offset_attach(struct drm_i915_gem_object *obj,
>>   insert:
>>          mmo = insert_mmo(obj, mmo);
>>          GEM_BUG_ON(lookup_mmo(obj, mmap_type) != mmo);
>> -out:
>> +
>>          if (file)
>>                  drm_vma_node_allow(&mmo->vma_node, file);
>> +out:
>>          return mmo;
>>
>>   err:
>>
>> Maybe it is not the best fix but curious to know if it will make the leak go away.
> 
> Hi,
> 
> After 27 minutes uptime with the patched kernel it looks promising.
> It is much longer than it took for the buggy kernel to leak slabs.
> 
> Here is the output:
> 
> [root@pc-mtodorov marvin]# echo scan > /sys/kernel/debug/kmemleak
> [root@pc-mtodorov marvin]# cat !$
> cat /sys/kernel/debug/kmemleak
> unreferenced object 0xffff888105028d80 (size 16):
>    comm "kworker/u12:5", pid 359, jiffies 4294902898 (age 1620.144s)
>    hex dump (first 16 bytes):
>      6d 65 6d 73 74 69 63 6b 30 00 00 00 00 00 00 00  memstick0.......
>    backtrace:
>      [<ffffffffb6bb5542>] slab_post_alloc_hook+0xb2/0x340
>      [<ffffffffb6bbbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
>      [<ffffffffb6af8175>] __kmalloc_node_track_caller+0x55/0x160
>      [<ffffffffb6ae34a6>] kstrdup+0x36/0x60
>      [<ffffffffb6ae3508>] kstrdup_const+0x28/0x30
>      [<ffffffffb70d0757>] kvasprintf_const+0x97/0xd0
>      [<ffffffffb7c9cdf4>] kobject_set_name_vargs+0x34/0xc0
>      [<ffffffffb750289b>] dev_set_name+0x9b/0xd0
>      [<ffffffffc12d9201>] memstick_check+0x181/0x639 [memstick]
>      [<ffffffffb676e1d6>] process_one_work+0x4e6/0x7e0
>      [<ffffffffb676e556>] worker_thread+0x76/0x770
>      [<ffffffffb677b468>] kthread+0x168/0x1a0
>      [<ffffffffb6604c99>] ret_from_fork+0x29/0x50
> [root@pc-mtodorov marvin]# w
>   20:27:35 up 27 min,  2 users,  load average: 0.83, 1.15, 1.19
> USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
> marvin   tty2     tty2             20:01   27:10  10:12   2.09s /opt/google/chrome/chrome --type=utility --utility-sub-type=audio.m
> marvin   pts/1    -                20:01    0.00s  2:00   0.38s sudo bash
> [root@pc-mtodorov marvin]# uname -rms
> Linux 6.1.0-b6bb9676f216-mglru-kmemlk-kasan+ x86_64
> [root@pc-mtodorov marvin]#

I have heard no reply from Tvrtko, and there is already 1d5h of uptime with no leaks (except
for the memstick_check kworker nag, which I could not bisect on the only box that reproduces
it, because something in its hardware is not supported by pre-4.16 kernels on the Lenovo
V530S-07ICB, or else I am doing something wrong).

However, now I can find the memstick maintainers thanks to Tvrtko's hint.
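
(For the record, something along these lines, run from the top of the kernel tree:

    ./scripts/get_maintainer.pl -f drivers/memstick/core/memstick.c

which prints the maintainers and mailing lists to CC for that file.)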

If you no longer require my services, I would consider this closed on my side.

I hope I did not cause too much trouble. The knowledgeable knew that this was not a security
risk, only a bug (30 leaks of 64 bytes each would hardly exhaust memory in any realistic
time).

However, having some experience with software development, I have always preferred bugs
reported and fixed rather than concealed and lying in wait (or worse, found first by a
motivated adversary). Forgive me the rant; I do not write kernel drivers for a living, this
is just a pet project for the time being ...

Thanks,
Mirsad

--
Mirsad Goran Todorovac
Sistem inženjer
Grafički fakultet | Akademija likovnih umjetnosti
Sveučilište u Zagrebu
-- 
System engineer
Faculty of Graphic Arts | Academy of Fine Arts
University of Zagreb, Republic of Croatia
The European Union


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: LOOKS GOOD: Possible regression in drm/i915 driver: memleak
  2022-12-22  0:12       ` LOOKS GOOD: " Mirsad Goran Todorovac
@ 2022-12-22  8:04         ` Tvrtko Ursulin
  2022-12-22 15:21           ` Mirsad Goran Todorovac
  2022-12-25 22:48           ` Mirsad Goran Todorovac
  0 siblings, 2 replies; 14+ messages in thread
From: Tvrtko Ursulin @ 2022-12-22  8:04 UTC (permalink / raw)
  To: Mirsad Goran Todorovac, srinivas pandruvada, LKML, jani.nikula,
	joonas.lahtinen, Rodrigo Vivi
  Cc: Thorsten Leemhuis, intel-gfx


On 22/12/2022 00:12, Mirsad Goran Todorovac wrote:
> On 20. 12. 2022. 20:34, Mirsad Todorovac wrote:
>> On 12/20/22 16:52, Tvrtko Ursulin wrote:
>>
>>> On 20/12/2022 15:22, srinivas pandruvada wrote:
>>>> +Added DRM mailing list and maintainers
>>>>
>>>> On Tue, 2022-12-20 at 15:33 +0100, Mirsad Todorovac wrote:
>>>>> Hi all,
>>>>>
>>>>> I have been unsuccessful to find any particular Intel i915 maintainer
>>>>> emails, so my best bet is to post here, as you will must assuredly
>>>>> already know them.
>>>
>>> For future reference you can use 
>>> ${kernel_dir}/scripts/get_maintainer.pl -f ...
>>>
>>>>> The problem is a kernel memory leak that is repeatedly occurring
>>>>> triggered during the execution of Chrome browser under the latest
>>>>> 6.1.0+
>>>>> kernel of this morning and Almalinux 8.6 on a Lenovo desktop box
>>>>> with Intel(R) Core(TM) i5-8400 CPU @ 2.80GHz CPU.
>>>>>
>>>>> The build is with KMEMLEAK, KASAN and MGLRU turned on during the
>>>>> build,
>>>>> on a vanilla mainline kernel from Mr. Torvalds' tree.
>>>>>
>>>>> The leaks look like this one:
>>>>>
>>>>> unreferenced object 0xffff888131754880 (size 64):
>>>>>     comm "chrome", pid 13058, jiffies 4298568878 (age 3708.084s)
>>>>>     hex dump (first 32 bytes):
>>>>>       01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>>>> ................
>>>>>       00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff
>>>>> ...........>....
>>>>>     backtrace:
>>>>>       [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
>>>>>       [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
>>>>>       [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
>>>>>       [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
>>>>>       [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820
>>>>> [i915]
>>>>>       [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110
>>>>> [i915]
>>>>>       [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
>>>>>       [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
>>>>>       [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
>>>>>       [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
>>>>>       [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
>>>>>
>>>>> The complete list of leaks in attachment, but they seem similar or
>>>>> the same.
>>>>>
>>>>> Please find attached lshw and kernel build config file.
>>>>>
>>>>> I will probably check the same parms on my laptop at home, which is
>>>>> also
>>>>> Lenovo, but a different hw config and Ubuntu 22.10.
>>>
>>> Could you try the below patch?
>>>
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
>>> index c3ea243d414d..0b07534c203a 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
>>> @@ -679,9 +679,10 @@ mmap_offset_attach(struct drm_i915_gem_object *obj,
>>>   insert:
>>>          mmo = insert_mmo(obj, mmo);
>>>          GEM_BUG_ON(lookup_mmo(obj, mmap_type) != mmo);
>>> -out:
>>> +
>>>          if (file)
>>>                  drm_vma_node_allow(&mmo->vma_node, file);
>>> +out:
>>>          return mmo;
>>>
>>>   err:
>>>
>>> Maybe it is not the best fix but curious to know if it will make the 
>>> leak go away.
>>
>> Hi,
>>
>> After 27 minutes uptime with the patched kernel it looks promising.
>> It is much longer than it took for the buggy kernel to leak slabs.
>>
>> Here is the output:
>>
>> [root@pc-mtodorov marvin]# echo scan > /sys/kernel/debug/kmemleak
>> [root@pc-mtodorov marvin]# cat !$
>> cat /sys/kernel/debug/kmemleak
>> unreferenced object 0xffff888105028d80 (size 16):
>>    comm "kworker/u12:5", pid 359, jiffies 4294902898 (age 1620.144s)
>>    hex dump (first 16 bytes):
>>      6d 65 6d 73 74 69 63 6b 30 00 00 00 00 00 00 00  memstick0.......
>>    backtrace:
>>      [<ffffffffb6bb5542>] slab_post_alloc_hook+0xb2/0x340
>>      [<ffffffffb6bbbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
>>      [<ffffffffb6af8175>] __kmalloc_node_track_caller+0x55/0x160
>>      [<ffffffffb6ae34a6>] kstrdup+0x36/0x60
>>      [<ffffffffb6ae3508>] kstrdup_const+0x28/0x30
>>      [<ffffffffb70d0757>] kvasprintf_const+0x97/0xd0
>>      [<ffffffffb7c9cdf4>] kobject_set_name_vargs+0x34/0xc0
>>      [<ffffffffb750289b>] dev_set_name+0x9b/0xd0
>>      [<ffffffffc12d9201>] memstick_check+0x181/0x639 [memstick]
>>      [<ffffffffb676e1d6>] process_one_work+0x4e6/0x7e0
>>      [<ffffffffb676e556>] worker_thread+0x76/0x770
>>      [<ffffffffb677b468>] kthread+0x168/0x1a0
>>      [<ffffffffb6604c99>] ret_from_fork+0x29/0x50
>> [root@pc-mtodorov marvin]# w
>>   20:27:35 up 27 min,  2 users,  load average: 0.83, 1.15, 1.19
>> USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
>> marvin   tty2     tty2             20:01   27:10  10:12   2.09s /opt/google/chrome/chrome --type=utility --utility-sub-type=audio.m
>> marvin   pts/1    -                20:01    0.00s  2:00   0.38s sudo bash
>> [root@pc-mtodorov marvin]# uname -rms
>> Linux 6.1.0-b6bb9676f216-mglru-kmemlk-kasan+ x86_64
>> [root@pc-mtodorov marvin]#
> 
> I have heard no reply from Tvrtko, and there is already 1d5h of uptime
> with no leaks (except for the memstick_check kworker nag, which I could
> not bisect on the only box that reproduces it, because something in its
> hardware is not supported by pre-4.16 kernels on the Lenovo V530S-07ICB,
> or else I am doing something wrong).
> 
> However, now I can find the memstick maintainers thanks to Tvrtko's hint.
> 
> If you no longer require my services, I would consider this closed on my
> side.
> 
> I hope I did not cause too much trouble. The knowledgeable knew that this
> was not a security risk, only a bug (30 leaks of 64 bytes each would
> hardly exhaust memory in any realistic time).
> 
> However, having some experience with software development, I have always
> preferred bugs reported and fixed rather than concealed and lying in wait
> (or worse, found first by a motivated adversary). Forgive me the rant; I
> do not write kernel drivers for a living, this is just a pet project for
> the time being ...

It is not forgotten - I was trying to reach out to the original author 
of the fixlet which worked for you. If that fails I will take it upon 
myself, but I need to set aside some time to get into the exact problem 
space before I can vouch for the fix and send it on my own.

In the meantime definitely thanks a lot for testing this quickly and 
reporting back!

What will happen next is that when either the original author or I am 
ready to send out the fix as a proper patch, you will be copied on it 
via the "Reported-by" and possibly "Tested-by" tags. The latter applies 
if the patch remains identical. If it changes, we might kindly ask you 
to re-test if possible.

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: LOOKS GOOD: Possible regression in drm/i915 driver: memleak
  2022-12-22  8:04         ` Tvrtko Ursulin
@ 2022-12-22 15:21           ` Mirsad Goran Todorovac
  2022-12-23 12:18             ` Tvrtko Ursulin
  2022-12-25 22:48           ` Mirsad Goran Todorovac
  1 sibling, 1 reply; 14+ messages in thread
From: Mirsad Goran Todorovac @ 2022-12-22 15:21 UTC (permalink / raw)
  To: Tvrtko Ursulin, srinivas pandruvada, LKML, jani.nikula,
	joonas.lahtinen, Rodrigo Vivi
  Cc: Thorsten Leemhuis, intel-gfx

On 12/22/2022 09:04, Tvrtko Ursulin wrote:

>
> On 22/12/2022 00:12, Mirsad Goran Todorovac wrote:
>> On 20. 12. 2022. 20:34, Mirsad Todorovac wrote:
>>
>> I have heard no reply from Tvrtko, and there is already 1d5h of uptime
>> with no leaks (except for the memstick_check kworker nag, which I could
>> not bisect on the only box that reproduces it, because something in its
>> hardware is not supported by pre-4.16 kernels on the Lenovo V530S-07ICB,
>> or else I am doing something wrong).
>>
>> However, now I can find the memstick maintainers thanks to Tvrtko's hint.
>>
>> If you no longer require my services, I would consider this closed on my
>> side.
>>
>> I hope I did not cause too much trouble. The knowledgeable knew that this
>> was not a security risk, only a bug (30 leaks of 64 bytes each would
>> hardly exhaust memory in any realistic time).
>>
>> However, having some experience with software development, I have always
>> preferred bugs reported and fixed rather than concealed and lying in wait
>> (or worse, found first by a motivated adversary). Forgive me the rant; I
>> do not write kernel drivers for a living, this is just a pet project for
>> the time being ...
Hi,
> It is not forgotten - I was trying to reach out to the original author 
> of the fixlet which worked for you. If that fails I will take it upon 
> myself, but I need to set aside some time to get into the exact problem 
> space before I can vouch for the fix and send it on my own.
That's good news. Possibly, with some assistance, I could bisect on 
pre-4.16 kernels with the additional drivers.
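The usual flow would be roughly the following (a sketch; the good endpoint 
is hypothetical, and finding one that still boots on this box is exactly 
the pre-4.16 problem):

    git bisect start
    git bisect bad HEAD                          # the leak reproduces here
    git bisect good <oldest-bootable-good-tag>   # hypothetical known-good point
    # at each step: build, boot, run the workload, then
    #   echo scan > /sys/kernel/debug/kmemleak
    # and inspect the output before marking the step:
    git bisect good    # or: git bisect bad
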
> In the meantime definitely thanks a lot for testing this quickly and 
> reporting back!
Not at all, I considered it a privilege to assist your team.
> What will happen next is that when either the original author or I am 
> ready to send out the fix as a proper patch, you will be copied on it 
> via the "Reported-by" and possibly "Tested-by" tags. The latter applies 
> if the patch remains identical. If it changes, we might kindly ask you 
> to re-test if possible.

I've seen the published patch, and it seems to be the same two-line 
change (-1/+1).
Should it change, I will attempt to test again with the same config, 
setup and running programs.

I may need to correct myself with regard to the security aspect of this 
patch, as addressed in commit 786555987207.

QUOTE:

     Currently we create a new mmap_offset for every call to
     mmap_offset_ioctl. This exposes ourselves to an abusive client that may
     simply create new mmap_offsets ad infinitum, which will exhaust physical
     memory and the virtual address space. In addition to the exhaustion, a
     very long linear list of mmap_offsets causes other clients using the
     object to incur long list walks -- these long lists can also be
     generated by simply having many clients generate their own mmap_offset.

It is not obvious whether the bug that caused Chrome to trigger 30 
memleaks could be exploited by an abusive script to exhaust a larger 
part of kernel memory and possibly crash the kernel.
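
One crude way to put a number on it would be to watch the leak count grow 
while such a client runs (a sketch, assuming kmemleak is enabled and 
debugfs is mounted; the pattern is just the allocation site seen in the 
reports):

    while true; do
            echo scan > /sys/kernel/debug/kmemleak
            printf '%s ' "$(date +%T)"
            grep -c drm_vma_node_allow /sys/kernel/debug/kmemleak
            sleep 300
    done

If the count only grows by tens of 64-byte objects per hour of normal use, 
exhaustion stays theoretical; if a deliberately abusive client can push it 
up by orders of magnitude, it becomes a practical denial of service.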

Thanks,
Mirsad

-- 
Mirsad Goran Todorovac
Sistem inženjer
Grafički fakultet | Akademija likovnih umjetnosti
Sveučilište u Zagrebu
--
System engineer
Faculty of Graphic Arts | Academy of Fine Arts
University of Zagreb, Republic of Croatia
tel. +385 (0)1 3711 451
mob. +385 91 57 88 355


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: LOOKS GOOD: Possible regression in drm/i915 driver: memleak
  2022-12-22 15:21           ` Mirsad Goran Todorovac
@ 2022-12-23 12:18             ` Tvrtko Ursulin
  2022-12-25 21:11               ` Mirsad Goran Todorovac
  0 siblings, 1 reply; 14+ messages in thread
From: Tvrtko Ursulin @ 2022-12-23 12:18 UTC (permalink / raw)
  To: Mirsad Goran Todorovac, srinivas pandruvada, LKML, jani.nikula,
	joonas.lahtinen, Rodrigo Vivi
  Cc: Thorsten Leemhuis, intel-gfx


On 22/12/2022 15:21, Mirsad Goran Todorovac wrote:
> On 12/22/2022 09:04, Tvrtko Ursulin wrote:
>> On 22/12/2022 00:12, Mirsad Goran Todorovac wrote:
>>> On 20. 12. 2022. 20:34, Mirsad Todorovac wrote:
>>>
>>> I have heard no reply from Tvrtko, and there is already 1d5h of uptime
>>> with no leaks (except for the memstick_check kworker nag, which I could
>>> not bisect on the only box that reproduces it, because something in its
>>> hardware is not supported by pre-4.16 kernels on the Lenovo V530S-07ICB,
>>> or else I am doing something wrong).
>>>
>>> However, now I can find the memstick maintainers thanks to Tvrtko's hint.
>>>
>>> If you no longer require my services, I would consider this closed on my
>>> side.
>>>
>>> I hope I did not cause too much trouble. The knowledgeable knew that this
>>> was not a security risk, only a bug (30 leaks of 64 bytes each would
>>> hardly exhaust memory in any realistic time).
>>>
>>> However, having some experience with software development, I have always
>>> preferred bugs reported and fixed rather than concealed and lying in wait
>>> (or worse, found first by a motivated adversary). Forgive me the rant; I
>>> do not write kernel drivers for a living, this is just a pet project for
>>> the time being ...
> Hi,
>> It is not forgotten - I was trying to reach out to the original author 
>> of the fixlet which worked for you. If that fails I will take it upon 
>> myself, but I need to set aside some time to get into the exact problem 
>> space before I can vouch for the fix and send it on my own.
> That's good news. Possibly, with some assistance, I could bisect on 
> pre-4.16 kernels with the additional drivers.

Sorry, maybe I am confused, but from where does 4.16 come?

>> In the meantime definitely thanks a lot for testing this quickly and 
>> reporting back!
> Not at all, I considered it a privilege to assist your team.
>> What will happen next is that when either the original author or I am 
>> ready to send out the fix as a proper patch, you will be copied on it 
>> via the "Reported-by" and possibly "Tested-by" tags. The latter applies 
>> if the patch remains identical. If it changes, we might kindly ask you 
>> to re-test if possible.
> 
> I've seen the published patch, and it seems to be the same two-line 
> change (-1/+1).
> Should it change, I will attempt to test again with the same config, 
> setup and running programs.

Yes it is the same diff so no need to re-test really.

> I may need to correct myself with regard to the security aspect of this 
> patch, as addressed in commit 786555987207.
> 
> QUOTE:
> 
>      Currently we create a new mmap_offset for every call to
>      mmap_offset_ioctl. This exposes ourselves to an abusive client that may
>      simply create new mmap_offsets ad infinitum, which will exhaust physical
>      memory and the virtual address space. In addition to the exhaustion, a
>      very long linear list of mmap_offsets causes other clients using the
>      object to incur long list walks -- these long lists can also be
>      generated by simply having many clients generate their own mmap_offset.
> 
> It is not obvious whether the bug that caused Chrome to trigger 30 
> memleaks could be exploited by an abusive script to exhaust a larger 
> part of kernel memory and possibly crash the kernel.

Indeed. Attackers' imagination can be pretty impressive, so I'd rather 
assume it is exploitable than that it isn't. Luckily it is "just" a 
memory leak rather than an information leak or worse. Hopefully we can 
merge the fix as soon as a willing reviewer is found.

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: LOOKS GOOD: Possible regression in drm/i915 driver: memleak
  2022-12-23 12:18             ` Tvrtko Ursulin
@ 2022-12-25 21:11               ` Mirsad Goran Todorovac
  0 siblings, 0 replies; 14+ messages in thread
From: Mirsad Goran Todorovac @ 2022-12-25 21:11 UTC (permalink / raw)
  To: Tvrtko Ursulin, srinivas pandruvada, LKML, jani.nikula,
	joonas.lahtinen, Rodrigo Vivi
  Cc: Thorsten Leemhuis, intel-gfx

On 23. 12. 2022. 13:18, Tvrtko Ursulin wrote:
> 

>>> It is not forgotten - I was trying to reach out to the original author of the fixlet which worked for you. If that fails I will 
>>> take it upon myself, but I need to set aside some time to get into the exact problem space before I can vouch for the fix and send 
>>> it on my own.

>> That's good news. Possibly, with some assistance, I could bisect on pre-4.16 kernels with the additional drivers.
> 
> Sorry, maybe I am confused, but from where does 4.16 come?

Sorry, I forgot to mention that I was referring to the memstick_check() memleak in drivers/memstick/core/memstick.c,
also discovered with the CONFIG_KMEMLEAK=y option enabled.

4.16 is the oldest kernel I managed to boot on my Lenovo desktop box, which is the only
machine that reproduces the memstick_check() leak.

Needless to say, this is not an i915-related bug.

Sorry for the imprecision in my paragraph.

Regards,
Mirsad

--
Mirsad Goran Todorovac
Sistem inženjer
Grafički fakultet | Akademija likovnih umjetnosti
Sveučilište u Zagrebu
-- 
System engineer
Faculty of Graphic Arts | Academy of Fine Arts
University of Zagreb, Republic of Croatia
The European Union


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: LOOKS GOOD: Possible regression in drm/i915 driver: memleak
  2022-12-22  8:04         ` Tvrtko Ursulin
  2022-12-22 15:21           ` Mirsad Goran Todorovac
@ 2022-12-25 22:48           ` Mirsad Goran Todorovac
  2023-01-09 15:00             ` Tvrtko Ursulin
  1 sibling, 1 reply; 14+ messages in thread
From: Mirsad Goran Todorovac @ 2022-12-25 22:48 UTC (permalink / raw)
  To: Tvrtko Ursulin, srinivas pandruvada, LKML, jani.nikula,
	joonas.lahtinen, Rodrigo Vivi
  Cc: Thorsten Leemhuis, intel-gfx

On 22. 12. 2022. 09:04, Tvrtko Ursulin wrote:
> 
> In the meantime definitely thanks a lot for testing this quickly and reporting back!

Don't think much of it - anyone with CONFIG_KMEMLEAK enabled could have caught this bug.

I was surprised that you found the fix in less than an hour without me having to bisect :)

Kind regards,
Mirsad

--
Mirsad Goran Todorovac
Sistem inženjer
Grafički fakultet | Akademija likovnih umjetnosti
Sveučilište u Zagrebu
-- 
System engineer
Faculty of Graphic Arts | Academy of Fine Arts
University of Zagreb, Republic of Croatia
The European Union


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: LOOKS GOOD: Possible regression in drm/i915 driver: memleak
  2022-12-25 22:48           ` Mirsad Goran Todorovac
@ 2023-01-09 15:00             ` Tvrtko Ursulin
  2023-01-16  6:25               ` Mirsad Todorovac
  0 siblings, 1 reply; 14+ messages in thread
From: Tvrtko Ursulin @ 2023-01-09 15:00 UTC (permalink / raw)
  To: Mirsad Goran Todorovac, srinivas pandruvada, LKML, jani.nikula,
	joonas.lahtinen, Rodrigo Vivi
  Cc: Thorsten Leemhuis, intel-gfx


On 25/12/2022 22:48, Mirsad Goran Todorovac wrote:
> On 22. 12. 2022. 09:04, Tvrtko Ursulin wrote:
>>
>> In the meantime definitely thanks a lot for testing this quickly and 
>> reporting back!
> 
> Don't think much of it - anyone with CONFIG_KMEMLEAK enabled could have 
> caught this bug.
> 
> I was surprised that you found the fix in less than an hour without me 
> having to bisect :)

The fix sadly has a problem handling shared buffers, so a different 
version will hopefully appear soon.

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: LOOKS GOOD: Possible regression in drm/i915 driver: memleak
  2023-01-09 15:00             ` Tvrtko Ursulin
@ 2023-01-16  6:25               ` Mirsad Todorovac
  0 siblings, 0 replies; 14+ messages in thread
From: Mirsad Todorovac @ 2023-01-16  6:25 UTC (permalink / raw)
  To: Tvrtko Ursulin, srinivas pandruvada, LKML, jani.nikula,
	joonas.lahtinen, Rodrigo Vivi
  Cc: Thorsten Leemhuis, intel-gfx



On 1/9/23 16:00, Tvrtko Ursulin wrote:
> 
> On 25/12/2022 22:48, Mirsad Goran Todorovac wrote:
>> On 22. 12. 2022. 09:04, Tvrtko Ursulin wrote:
>>>
>>> In the meantime definitely thanks a lot for testing this quickly and reporting back!
>>
>> Don't think much of it - anyone with CONFIG_KMEMLEAK enabled could have caught this bug.
>>
>> I was surprised that you found the fix in less than an hour without me having to bisect :)
> 
> The fix sadly has a problem handling shared buffers, so a different version will hopefully appear soon.

Bummer.

Ready to test the new version in the same environment.

-- 
Mirsad Goran Todorovac
Sistem inženjer
Grafički fakultet | Akademija likovnih umjetnosti
Sveučilište u Zagrebu

System engineer
Faculty of Graphic Arts | Academy of Fine Arts
University of Zagreb, Republic of Croatia

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2023-01-16  6:25 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-12-20 14:33 Possible regression in drm/i915 driver: memleak Mirsad Todorovac
2022-12-20 15:22 ` srinivas pandruvada
2022-12-20 15:52   ` Tvrtko Ursulin
2022-12-20 17:20     ` Mirsad Goran Todorovac
2022-12-20 19:34     ` LOOKS GOOD: " Mirsad Todorovac
2022-12-21  7:15       ` Mirsad Goran Todorovac
2022-12-22  0:12       ` LOOKS GOOD: " Mirsad Goran Todorovac
2022-12-22  8:04         ` Tvrtko Ursulin
2022-12-22 15:21           ` Mirsad Goran Todorovac
2022-12-23 12:18             ` Tvrtko Ursulin
2022-12-25 21:11               ` Mirsad Goran Todorovac
2022-12-25 22:48           ` Mirsad Goran Todorovac
2023-01-09 15:00             ` Tvrtko Ursulin
2023-01-16  6:25               ` Mirsad Todorovac

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).