* [Intel-gfx] [PATCH] drm/i915: Avoid dereferencing a dead context
@ 2020-04-28  9:02 Chris Wilson
  2020-04-28 14:46 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
                   ` (4 more replies)
  0 siblings, 5 replies; 7+ messages in thread
From: Chris Wilson @ 2020-04-28  9:02 UTC (permalink / raw)
  To: intel-gfx; +Cc: Chris Wilson

Once the intel_context is closed, the GEM context may be freed and so
the link from intel_context.gem_context is invalid.
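
The guarded pattern the patch introduces can be modelled in plain userspace C. This is a hypothetical, simplified sketch: the struct layouts and the `record_pid()` helper are invented for illustration, while the real code in drivers/gpu/drm/i915/i915_gpu_error.c uses `rcu_dereference()` and `intel_context_is_closed()`.

```c
/*
 * Userspace model of the fix (hypothetical simplified types).
 * Once a context is closed, its gem_context link may point at freed
 * memory, so the error-capture path must test the closed flag before
 * chasing the pointer.
 */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct i915_gem_context {
	int pid;
};

struct intel_context {
	bool closed;                          /* models intel_context_is_closed() */
	struct i915_gem_context *gem_context; /* may dangle once closed */
};

/* Stand-in for the pid lookup in record_request(): report 0 rather
 * than dereference a possibly-freed gem_context. */
int record_pid(const struct intel_context *ce)
{
	if (ce->closed)
		return 0;
	return ce->gem_context ? ce->gem_context->pid : 0;
}
```

A live context reports its owner's pid; a closed one reports 0 without ever touching the stale pointer, which is exactly the shape of the hunk below.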

<3>[  219.782944] BUG: KASAN: use-after-free in intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
<3>[  219.782996] Read of size 8 at addr ffff8881d7dff0b8 by task kworker/0:1/12

<4>[  219.783052] CPU: 0 PID: 12 Comm: kworker/0:1 Tainted: G     U            5.7.0-rc2-g1f3ffd7683d54-kasan_118+ #1
<4>[  219.783055] Hardware name: System manufacturer System Product Name/Z170 PRO GAMING, BIOS 3402 04/26/2017
<4>[  219.783105] Workqueue: events heartbeat [i915]
<4>[  219.783109] Call Trace:
<4>[  219.783113]  <IRQ>
<4>[  219.783119]  dump_stack+0x96/0xdb
<4>[  219.783177]  ? intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
<4>[  219.783182]  print_address_description.constprop.6+0x16/0x310
<4>[  219.783239]  ? intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
<4>[  219.783295]  ? intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
<4>[  219.783300]  __kasan_report+0x137/0x190
<4>[  219.783359]  ? intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
<4>[  219.783366]  kasan_report+0x32/0x50
<4>[  219.783426]  intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
<4>[  219.783481]  execlists_reset+0x39c/0x13d0 [i915]
<4>[  219.783494]  ? mark_held_locks+0x9e/0xe0
<4>[  219.783546]  ? execlists_hold+0xfc0/0xfc0 [i915]
<4>[  219.783551]  ? lockdep_hardirqs_on+0x348/0x5f0
<4>[  219.783557]  ? _raw_spin_unlock_irqrestore+0x34/0x60
<4>[  219.783606]  ? execlists_submission_tasklet+0x118/0x3a0 [i915]
<4>[  219.783615]  tasklet_action_common.isra.14+0x13b/0x410
<4>[  219.783623]  ? __do_softirq+0x1e4/0x9a7
<4>[  219.783630]  __do_softirq+0x226/0x9a7
<4>[  219.783643]  do_softirq_own_stack+0x2a/0x40
<4>[  219.783647]  </IRQ>
<4>[  219.783692]  ? heartbeat+0x3e2/0x10f0 [i915]
<4>[  219.783696]  do_softirq.part.13+0x49/0x50
<4>[  219.783700]  __local_bh_enable_ip+0x1a2/0x1e0
<4>[  219.783748]  heartbeat+0x409/0x10f0 [i915]
<4>[  219.783801]  ? __live_idle_pulse+0x9f0/0x9f0 [i915]
<4>[  219.783806]  ? lock_acquire+0x1ac/0x8a0
<4>[  219.783811]  ? process_one_work+0x811/0x1870
<4>[  219.783827]  ? rcu_read_lock_sched_held+0x9c/0xd0
<4>[  219.783832]  ? rcu_read_lock_bh_held+0xb0/0xb0
<4>[  219.783836]  ? _raw_spin_unlock_irq+0x1f/0x40
<4>[  219.783845]  process_one_work+0x8ca/0x1870
<4>[  219.783848]  ? lock_acquire+0x1ac/0x8a0
<4>[  219.783852]  ? worker_thread+0x1d0/0xb80
<4>[  219.783864]  ? pwq_dec_nr_in_flight+0x2c0/0x2c0
<4>[  219.783870]  ? do_raw_spin_lock+0x129/0x290
<4>[  219.783886]  worker_thread+0x82/0xb80
<4>[  219.783895]  ? __kthread_parkme+0xaf/0x1b0
<4>[  219.783902]  ? process_one_work+0x1870/0x1870
<4>[  219.783906]  kthread+0x34e/0x420
<4>[  219.783911]  ? kthread_create_on_node+0xc0/0xc0
<4>[  219.783918]  ret_from_fork+0x3a/0x50

<3>[  219.783950] Allocated by task 1264:
<4>[  219.783975]  save_stack+0x19/0x40
<4>[  219.783978]  __kasan_kmalloc.constprop.3+0xa0/0xd0
<4>[  219.784029]  i915_gem_create_context+0xa2/0xab8 [i915]
<4>[  219.784081]  i915_gem_context_create_ioctl+0x1fa/0x450 [i915]
<4>[  219.784085]  drm_ioctl_kernel+0x1d8/0x270
<4>[  219.784088]  drm_ioctl+0x676/0x930
<4>[  219.784092]  ksys_ioctl+0xb7/0xe0
<4>[  219.784096]  __x64_sys_ioctl+0x6a/0xb0
<4>[  219.784100]  do_syscall_64+0x94/0x530
<4>[  219.784103]  entry_SYSCALL_64_after_hwframe+0x49/0xb3

<3>[  219.784120] Freed by task 12:
<4>[  219.784141]  save_stack+0x19/0x40
<4>[  219.784145]  __kasan_slab_free+0x130/0x180
<4>[  219.784148]  kmem_cache_free_bulk+0x1bd/0x500
<4>[  219.784152]  kfree_rcu_work+0x1d8/0x890
<4>[  219.784155]  process_one_work+0x8ca/0x1870
<4>[  219.784158]  worker_thread+0x82/0xb80
<4>[  219.784162]  kthread+0x34e/0x420
<4>[  219.784165]  ret_from_fork+0x3a/0x50

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gpu_error.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 4d54dba35302..a976cd67b3b3 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1207,8 +1207,6 @@ static void engine_record_registers(struct intel_engine_coredump *ee)
 static void record_request(const struct i915_request *request,
 			   struct i915_request_coredump *erq)
 {
-	const struct i915_gem_context *ctx;
-
 	erq->flags = request->fence.flags;
 	erq->context = request->fence.context;
 	erq->seqno = request->fence.seqno;
@@ -1218,9 +1216,13 @@ static void record_request(const struct i915_request *request,
 
 	erq->pid = 0;
 	rcu_read_lock();
-	ctx = rcu_dereference(request->context->gem_context);
-	if (ctx)
-		erq->pid = pid_nr(ctx->pid);
+	if (!intel_context_is_closed(request->context)) {
+		const struct i915_gem_context *ctx;
+
+		ctx = rcu_dereference(request->context->gem_context);
+		if (ctx)
+			erq->pid = pid_nr(ctx->pid);
+	}
 	rcu_read_unlock();
 }
 
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Avoid dereferencing a dead context
  2020-04-28  9:02 [Intel-gfx] [PATCH] drm/i915: Avoid dereferencing a dead context Chris Wilson
@ 2020-04-28 14:46 ` Patchwork
  2020-04-28 15:10 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: Patchwork @ 2020-04-28 14:46 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: drm/i915: Avoid dereferencing a dead context
URL   : https://patchwork.freedesktop.org/series/76584/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
214ef915d9c5 drm/i915: Avoid dereferencing a dead context
-:9: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#9: 
<3>[  219.782944] BUG: KASAN: use-after-free in intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]

total: 0 errors, 1 warnings, 0 checks, 24 lines checked



* [Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915: Avoid dereferencing a dead context
  2020-04-28  9:02 [Intel-gfx] [PATCH] drm/i915: Avoid dereferencing a dead context Chris Wilson
  2020-04-28 14:46 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
@ 2020-04-28 15:10 ` Patchwork
  2020-04-28 17:44 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: Patchwork @ 2020-04-28 15:10 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: drm/i915: Avoid dereferencing a dead context
URL   : https://patchwork.freedesktop.org/series/76584/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8382 -> Patchwork_17491
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/index.html

Known issues
------------

  Here are the changes found in Patchwork_17491 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@i915_selftest@live@gem:
    - fi-bwr-2160:        [PASS][1] -> [INCOMPLETE][2] ([i915#489])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8382/fi-bwr-2160/igt@i915_selftest@live@gem.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/fi-bwr-2160/igt@i915_selftest@live@gem.html

  
  [i915#489]: https://gitlab.freedesktop.org/drm/intel/issues/489


Participating hosts (48 -> 43)
------------------------------

  Missing    (5): fi-hsw-4200u fi-byt-squawks fi-bsw-cyan fi-byt-clapper fi-bdw-samus 


Build changes
-------------

  * CI: CI-20190529 -> None
  * Linux: CI_DRM_8382 -> Patchwork_17491

  CI-20190529: 20190529
  CI_DRM_8382: 0613efb5f36366a2a1e7d66e893b7a817860e83b @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5614: d095827add11d4e8158b87683971ee659749d9a4 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17491: 214ef915d9c55d873578c57a390b851b560bc969 @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

214ef915d9c5 drm/i915: Avoid dereferencing a dead context

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/index.html


* [Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915: Avoid dereferencing a dead context
  2020-04-28  9:02 [Intel-gfx] [PATCH] drm/i915: Avoid dereferencing a dead context Chris Wilson
  2020-04-28 14:46 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
  2020-04-28 15:10 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
@ 2020-04-28 17:44 ` Patchwork
  2020-04-28 18:06 ` [Intel-gfx] [PATCH] " Abodunrin, Akeem G
  2020-04-29 13:42 ` Tvrtko Ursulin
  4 siblings, 0 replies; 7+ messages in thread
From: Patchwork @ 2020-04-28 17:44 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: drm/i915: Avoid dereferencing a dead context
URL   : https://patchwork.freedesktop.org/series/76584/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8382_full -> Patchwork_17491_full
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Known issues
------------

  Here are the changes found in Patchwork_17491_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@i915_suspend@fence-restore-tiled2untiled:
    - shard-kbl:          [PASS][1] -> [DMESG-WARN][2] ([i915#180])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8382/shard-kbl2/igt@i915_suspend@fence-restore-tiled2untiled.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/shard-kbl1/igt@i915_suspend@fence-restore-tiled2untiled.html

  * igt@kms_cursor_crc@pipe-c-cursor-suspend:
    - shard-apl:          [PASS][3] -> [DMESG-WARN][4] ([i915#180]) +3 similar issues
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8382/shard-apl4/igt@kms_cursor_crc@pipe-c-cursor-suspend.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/shard-apl1/igt@kms_cursor_crc@pipe-c-cursor-suspend.html

  * igt@kms_cursor_legacy@2x-long-cursor-vs-flip-legacy:
    - shard-hsw:          [PASS][5] -> [FAIL][6] ([i915#96])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8382/shard-hsw6/igt@kms_cursor_legacy@2x-long-cursor-vs-flip-legacy.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/shard-hsw8/igt@kms_cursor_legacy@2x-long-cursor-vs-flip-legacy.html

  * igt@kms_draw_crc@draw-method-rgb565-mmap-gtt-ytiled:
    - shard-glk:          [PASS][7] -> [FAIL][8] ([i915#52] / [i915#54]) +1 similar issue
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8382/shard-glk9/igt@kms_draw_crc@draw-method-rgb565-mmap-gtt-ytiled.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/shard-glk5/igt@kms_draw_crc@draw-method-rgb565-mmap-gtt-ytiled.html

  * igt@kms_draw_crc@draw-method-rgb565-mmap-wc-xtiled:
    - shard-skl:          [PASS][9] -> [FAIL][10] ([i915#52] / [i915#54])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8382/shard-skl4/igt@kms_draw_crc@draw-method-rgb565-mmap-wc-xtiled.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/shard-skl9/igt@kms_draw_crc@draw-method-rgb565-mmap-wc-xtiled.html

  * igt@kms_psr@psr2_primary_mmap_cpu:
    - shard-iclb:         [PASS][11] -> [SKIP][12] ([fdo#109441]) +1 similar issue
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8382/shard-iclb2/igt@kms_psr@psr2_primary_mmap_cpu.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/shard-iclb4/igt@kms_psr@psr2_primary_mmap_cpu.html

  * igt@kms_setmode@basic:
    - shard-glk:          [PASS][13] -> [FAIL][14] ([i915#31])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8382/shard-glk8/igt@kms_setmode@basic.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/shard-glk1/igt@kms_setmode@basic.html

  
#### Possible fixes ####

  * igt@gem_ctx_persistence@engines-mixed-process@rcs0:
    - shard-skl:          [FAIL][15] ([i915#1528]) -> [PASS][16]
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8382/shard-skl2/igt@gem_ctx_persistence@engines-mixed-process@rcs0.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/shard-skl1/igt@gem_ctx_persistence@engines-mixed-process@rcs0.html

  * igt@gem_workarounds@suspend-resume-context:
    - shard-apl:          [DMESG-WARN][17] ([i915#180]) -> [PASS][18] +4 similar issues
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8382/shard-apl2/igt@gem_workarounds@suspend-resume-context.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/shard-apl3/igt@gem_workarounds@suspend-resume-context.html

  * igt@kms_cursor_crc@pipe-a-cursor-256x256-sliding:
    - shard-kbl:          [FAIL][19] ([i915#54] / [i915#93] / [i915#95]) -> [PASS][20]
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8382/shard-kbl2/igt@kms_cursor_crc@pipe-a-cursor-256x256-sliding.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/shard-kbl2/igt@kms_cursor_crc@pipe-a-cursor-256x256-sliding.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic:
    - shard-skl:          [FAIL][21] ([IGT#5]) -> [PASS][22]
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8382/shard-skl6/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/shard-skl9/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html

  * igt@kms_hdr@bpc-switch-suspend:
    - shard-skl:          [FAIL][23] ([i915#1188]) -> [PASS][24]
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8382/shard-skl2/igt@kms_hdr@bpc-switch-suspend.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/shard-skl1/igt@kms_hdr@bpc-switch-suspend.html

  * igt@kms_pipe_crc_basic@suspend-read-crc-pipe-c:
    - shard-skl:          [INCOMPLETE][25] ([i915#69]) -> [PASS][26]
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8382/shard-skl8/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-c.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/shard-skl4/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-c.html

  * igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min:
    - shard-skl:          [FAIL][27] ([fdo#108145] / [i915#265]) -> [PASS][28]
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8382/shard-skl2/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min.html
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/shard-skl1/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min.html

  * igt@kms_psr2_su@frontbuffer:
    - shard-iclb:         [SKIP][29] ([fdo#109642] / [fdo#111068]) -> [PASS][30]
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8382/shard-iclb4/igt@kms_psr2_su@frontbuffer.html
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/shard-iclb2/igt@kms_psr2_su@frontbuffer.html

  * igt@kms_psr@psr2_no_drrs:
    - shard-iclb:         [SKIP][31] ([fdo#109441]) -> [PASS][32] +2 similar issues
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8382/shard-iclb4/igt@kms_psr@psr2_no_drrs.html
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/shard-iclb2/igt@kms_psr@psr2_no_drrs.html

  
#### Warnings ####

  * igt@i915_pm_dc@dc6-dpms:
    - shard-skl:          [INCOMPLETE][33] ([i915#198]) -> [FAIL][34] ([i915#454])
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8382/shard-skl3/igt@i915_pm_dc@dc6-dpms.html
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/shard-skl2/igt@i915_pm_dc@dc6-dpms.html

  * igt@i915_pm_dc@dc6-psr:
    - shard-snb:          [INCOMPLETE][35] ([i915#82]) -> [SKIP][36] ([fdo#109271])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8382/shard-snb1/igt@i915_pm_dc@dc6-psr.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/shard-snb5/igt@i915_pm_dc@dc6-psr.html

  * igt@i915_pm_rc6_residency@rc6-idle:
    - shard-iclb:         [WARN][37] ([i915#1515]) -> [FAIL][38] ([i915#1515])
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8382/shard-iclb4/igt@i915_pm_rc6_residency@rc6-idle.html
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/shard-iclb3/igt@i915_pm_rc6_residency@rc6-idle.html

  * igt@kms_psr2_su@page_flip:
    - shard-iclb:         [SKIP][39] ([fdo#109642] / [fdo#111068]) -> [FAIL][40] ([i915#608])
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8382/shard-iclb4/igt@kms_psr2_su@page_flip.html
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/shard-iclb2/igt@kms_psr2_su@page_flip.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [IGT#5]: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/5
  [fdo#108145]: https://bugs.freedesktop.org/show_bug.cgi?id=108145
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441
  [fdo#109642]: https://bugs.freedesktop.org/show_bug.cgi?id=109642
  [fdo#111068]: https://bugs.freedesktop.org/show_bug.cgi?id=111068
  [i915#1188]: https://gitlab.freedesktop.org/drm/intel/issues/1188
  [i915#1515]: https://gitlab.freedesktop.org/drm/intel/issues/1515
  [i915#1528]: https://gitlab.freedesktop.org/drm/intel/issues/1528
  [i915#1542]: https://gitlab.freedesktop.org/drm/intel/issues/1542
  [i915#180]: https://gitlab.freedesktop.org/drm/intel/issues/180
  [i915#198]: https://gitlab.freedesktop.org/drm/intel/issues/198
  [i915#265]: https://gitlab.freedesktop.org/drm/intel/issues/265
  [i915#31]: https://gitlab.freedesktop.org/drm/intel/issues/31
  [i915#454]: https://gitlab.freedesktop.org/drm/intel/issues/454
  [i915#52]: https://gitlab.freedesktop.org/drm/intel/issues/52
  [i915#54]: https://gitlab.freedesktop.org/drm/intel/issues/54
  [i915#608]: https://gitlab.freedesktop.org/drm/intel/issues/608
  [i915#69]: https://gitlab.freedesktop.org/drm/intel/issues/69
  [i915#82]: https://gitlab.freedesktop.org/drm/intel/issues/82
  [i915#93]: https://gitlab.freedesktop.org/drm/intel/issues/93
  [i915#95]: https://gitlab.freedesktop.org/drm/intel/issues/95
  [i915#96]: https://gitlab.freedesktop.org/drm/intel/issues/96


Participating hosts (10 -> 10)
------------------------------

  No changes in participating hosts


Build changes
-------------

  * CI: CI-20190529 -> None
  * Linux: CI_DRM_8382 -> Patchwork_17491

  CI-20190529: 20190529
  CI_DRM_8382: 0613efb5f36366a2a1e7d66e893b7a817860e83b @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5614: d095827add11d4e8158b87683971ee659749d9a4 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17491: 214ef915d9c55d873578c57a390b851b560bc969 @ git://anongit.freedesktop.org/gfx-ci/linux
  piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17491/index.html


* Re: [Intel-gfx] [PATCH] drm/i915: Avoid dereferencing a dead context
  2020-04-28  9:02 [Intel-gfx] [PATCH] drm/i915: Avoid dereferencing a dead context Chris Wilson
                   ` (2 preceding siblings ...)
  2020-04-28 17:44 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
@ 2020-04-28 18:06 ` Abodunrin, Akeem G
  2020-04-29 13:42 ` Tvrtko Ursulin
  4 siblings, 0 replies; 7+ messages in thread
From: Abodunrin, Akeem G @ 2020-04-28 18:06 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx



> -----Original Message-----
> From: Intel-gfx <intel-gfx-bounces@lists.freedesktop.org> On Behalf Of Chris
> Wilson
> Sent: Tuesday, April 28, 2020 2:03 AM
> To: intel-gfx@lists.freedesktop.org
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Subject: [Intel-gfx] [PATCH] drm/i915: Avoid dereferencing a dead context
> 
> Once the intel_context is closed, the GEM context may be freed and so the
> link from intel_context.gem_context is invalid.
> 
> [KASAN use-after-free report and patch snipped]

Fix the checkpatch warnings - the patch itself looks okay...
Acked-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>



* Re: [Intel-gfx] [PATCH] drm/i915: Avoid dereferencing a dead context
  2020-04-28  9:02 [Intel-gfx] [PATCH] drm/i915: Avoid dereferencing a dead context Chris Wilson
                   ` (3 preceding siblings ...)
  2020-04-28 18:06 ` [Intel-gfx] [PATCH] " Abodunrin, Akeem G
@ 2020-04-29 13:42 ` Tvrtko Ursulin
  2020-04-29 14:15   ` Chris Wilson
  4 siblings, 1 reply; 7+ messages in thread
From: Tvrtko Ursulin @ 2020-04-29 13:42 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 28/04/2020 10:02, Chris Wilson wrote:
> Once the intel_context is closed, the GEM context may be freed and so
> the link from intel_context.gem_context is invalid.
> 
> <3>[  219.782944] BUG: KASAN: use-after-free in intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
> <3>[  219.782996] Read of size 8 at addr ffff8881d7dff0b8 by task kworker/0:1/12
> 
> <4>[  219.783052] CPU: 0 PID: 12 Comm: kworker/0:1 Tainted: G     U            5.7.0-rc2-g1f3ffd7683d54-kasan_118+ #1
> <4>[  219.783055] Hardware name: System manufacturer System Product Name/Z170 PRO GAMING, BIOS 3402 04/26/2017
> <4>[  219.783105] Workqueue: events heartbeat [i915]
> <4>[  219.783109] Call Trace:
> <4>[  219.783113]  <IRQ>
> <4>[  219.783119]  dump_stack+0x96/0xdb
> <4>[  219.783177]  ? intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
> <4>[  219.783182]  print_address_description.constprop.6+0x16/0x310
> <4>[  219.783239]  ? intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
> <4>[  219.783295]  ? intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
> <4>[  219.783300]  __kasan_report+0x137/0x190
> <4>[  219.783359]  ? intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
> <4>[  219.783366]  kasan_report+0x32/0x50
> <4>[  219.783426]  intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
> <4>[  219.783481]  execlists_reset+0x39c/0x13d0 [i915]
> <4>[  219.783494]  ? mark_held_locks+0x9e/0xe0
> <4>[  219.783546]  ? execlists_hold+0xfc0/0xfc0 [i915]
> <4>[  219.783551]  ? lockdep_hardirqs_on+0x348/0x5f0
> <4>[  219.783557]  ? _raw_spin_unlock_irqrestore+0x34/0x60
> <4>[  219.783606]  ? execlists_submission_tasklet+0x118/0x3a0 [i915]
> <4>[  219.783615]  tasklet_action_common.isra.14+0x13b/0x410
> <4>[  219.783623]  ? __do_softirq+0x1e4/0x9a7
> <4>[  219.783630]  __do_softirq+0x226/0x9a7
> <4>[  219.783643]  do_softirq_own_stack+0x2a/0x40
> <4>[  219.783647]  </IRQ>
> <4>[  219.783692]  ? heartbeat+0x3e2/0x10f0 [i915]
> <4>[  219.783696]  do_softirq.part.13+0x49/0x50
> <4>[  219.783700]  __local_bh_enable_ip+0x1a2/0x1e0
> <4>[  219.783748]  heartbeat+0x409/0x10f0 [i915]
> <4>[  219.783801]  ? __live_idle_pulse+0x9f0/0x9f0 [i915]
> <4>[  219.783806]  ? lock_acquire+0x1ac/0x8a0
> <4>[  219.783811]  ? process_one_work+0x811/0x1870
> <4>[  219.783827]  ? rcu_read_lock_sched_held+0x9c/0xd0
> <4>[  219.783832]  ? rcu_read_lock_bh_held+0xb0/0xb0
> <4>[  219.783836]  ? _raw_spin_unlock_irq+0x1f/0x40
> <4>[  219.783845]  process_one_work+0x8ca/0x1870
> <4>[  219.783848]  ? lock_acquire+0x1ac/0x8a0
> <4>[  219.783852]  ? worker_thread+0x1d0/0xb80
> <4>[  219.783864]  ? pwq_dec_nr_in_flight+0x2c0/0x2c0
> <4>[  219.783870]  ? do_raw_spin_lock+0x129/0x290
> <4>[  219.783886]  worker_thread+0x82/0xb80
> <4>[  219.783895]  ? __kthread_parkme+0xaf/0x1b0
> <4>[  219.783902]  ? process_one_work+0x1870/0x1870
> <4>[  219.783906]  kthread+0x34e/0x420
> <4>[  219.783911]  ? kthread_create_on_node+0xc0/0xc0
> <4>[  219.783918]  ret_from_fork+0x3a/0x50
> 
> <3>[  219.783950] Allocated by task 1264:
> <4>[  219.783975]  save_stack+0x19/0x40
> <4>[  219.783978]  __kasan_kmalloc.constprop.3+0xa0/0xd0
> <4>[  219.784029]  i915_gem_create_context+0xa2/0xab8 [i915]
> <4>[  219.784081]  i915_gem_context_create_ioctl+0x1fa/0x450 [i915]
> <4>[  219.784085]  drm_ioctl_kernel+0x1d8/0x270
> <4>[  219.784088]  drm_ioctl+0x676/0x930
> <4>[  219.784092]  ksys_ioctl+0xb7/0xe0
> <4>[  219.784096]  __x64_sys_ioctl+0x6a/0xb0
> <4>[  219.784100]  do_syscall_64+0x94/0x530
> <4>[  219.784103]  entry_SYSCALL_64_after_hwframe+0x49/0xb3
> 
> <3>[  219.784120] Freed by task 12:
> <4>[  219.784141]  save_stack+0x19/0x40
> <4>[  219.784145]  __kasan_slab_free+0x130/0x180
> <4>[  219.784148]  kmem_cache_free_bulk+0x1bd/0x500
> <4>[  219.784152]  kfree_rcu_work+0x1d8/0x890
> <4>[  219.784155]  process_one_work+0x8ca/0x1870
> <4>[  219.784158]  worker_thread+0x82/0xb80
> <4>[  219.784162]  kthread+0x34e/0x420
> <4>[  219.784165]  ret_from_fork+0x3a/0x50
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>   drivers/gpu/drm/i915/i915_gpu_error.c | 12 +++++++-----
>   1 file changed, 7 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
> index 4d54dba35302..a976cd67b3b3 100644
> --- a/drivers/gpu/drm/i915/i915_gpu_error.c
> +++ b/drivers/gpu/drm/i915/i915_gpu_error.c
> @@ -1207,8 +1207,6 @@ static void engine_record_registers(struct intel_engine_coredump *ee)
>   static void record_request(const struct i915_request *request,
>   			   struct i915_request_coredump *erq)
>   {
> -	const struct i915_gem_context *ctx;
> -
>   	erq->flags = request->fence.flags;
>   	erq->context = request->fence.context;
>   	erq->seqno = request->fence.seqno;
> @@ -1218,9 +1216,13 @@ static void record_request(const struct i915_request *request,
>   
>   	erq->pid = 0;
>   	rcu_read_lock();
> -	ctx = rcu_dereference(request->context->gem_context);
> -	if (ctx)
> -		erq->pid = pid_nr(ctx->pid);
> +	if (!intel_context_is_closed(request->context)) {
> +		const struct i915_gem_context *ctx;
> +
> +		ctx = rcu_dereference(request->context->gem_context);
> +		if (ctx)
> +			erq->pid = pid_nr(ctx->pid);
> +	}
>   	rcu_read_unlock();
>   }
>   
> 

In the client busyness series I move the GEM ctx put to free_engines_rcu, 
at which point the closed check here is no longer needed. Should we delay 
that put right now to simplify? Maybe not... I'll remember to tweak it in 
my series.

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko

P.S. Fixes: 2e46a2a0b0149f951b63be1b5df6514676fed213 ?
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


* Re: [Intel-gfx] [PATCH] drm/i915: Avoid dereferencing a dead context
  2020-04-29 13:42 ` Tvrtko Ursulin
@ 2020-04-29 14:15   ` Chris Wilson
  0 siblings, 0 replies; 7+ messages in thread
From: Chris Wilson @ 2020-04-29 14:15 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Tvrtko Ursulin (2020-04-29 14:42:44)
> 
> On 28/04/2020 10:02, Chris Wilson wrote:
> > Once the intel_context is closed, the GEM context may be freed and so
> > the link from intel_context.gem_context is invalid.
> > 
> > <3>[  219.782944] BUG: KASAN: use-after-free in intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
> > <3>[  219.782996] Read of size 8 at addr ffff8881d7dff0b8 by task kworker/0:1/12
> > 
> > <4>[  219.783052] CPU: 0 PID: 12 Comm: kworker/0:1 Tainted: G     U            5.7.0-rc2-g1f3ffd7683d54-kasan_118+ #1
> > <4>[  219.783055] Hardware name: System manufacturer System Product Name/Z170 PRO GAMING, BIOS 3402 04/26/2017
> > <4>[  219.783105] Workqueue: events heartbeat [i915]
> > <4>[  219.783109] Call Trace:
> > <4>[  219.783113]  <IRQ>
> > <4>[  219.783119]  dump_stack+0x96/0xdb
> > <4>[  219.783177]  ? intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
> > <4>[  219.783182]  print_address_description.constprop.6+0x16/0x310
> > <4>[  219.783239]  ? intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
> > <4>[  219.783295]  ? intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
> > <4>[  219.783300]  __kasan_report+0x137/0x190
> > <4>[  219.783359]  ? intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
> > <4>[  219.783366]  kasan_report+0x32/0x50
> > <4>[  219.783426]  intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
> > <4>[  219.783481]  execlists_reset+0x39c/0x13d0 [i915]
> > <4>[  219.783494]  ? mark_held_locks+0x9e/0xe0
> > <4>[  219.783546]  ? execlists_hold+0xfc0/0xfc0 [i915]
> > <4>[  219.783551]  ? lockdep_hardirqs_on+0x348/0x5f0
> > <4>[  219.783557]  ? _raw_spin_unlock_irqrestore+0x34/0x60
> > <4>[  219.783606]  ? execlists_submission_tasklet+0x118/0x3a0 [i915]
> > <4>[  219.783615]  tasklet_action_common.isra.14+0x13b/0x410
> > <4>[  219.783623]  ? __do_softirq+0x1e4/0x9a7
> > <4>[  219.783630]  __do_softirq+0x226/0x9a7
> > <4>[  219.783643]  do_softirq_own_stack+0x2a/0x40
> > <4>[  219.783647]  </IRQ>
> > <4>[  219.783692]  ? heartbeat+0x3e2/0x10f0 [i915]
> > <4>[  219.783696]  do_softirq.part.13+0x49/0x50
> > <4>[  219.783700]  __local_bh_enable_ip+0x1a2/0x1e0
> > <4>[  219.783748]  heartbeat+0x409/0x10f0 [i915]
> > <4>[  219.783801]  ? __live_idle_pulse+0x9f0/0x9f0 [i915]
> > <4>[  219.783806]  ? lock_acquire+0x1ac/0x8a0
> > <4>[  219.783811]  ? process_one_work+0x811/0x1870
> > <4>[  219.783827]  ? rcu_read_lock_sched_held+0x9c/0xd0
> > <4>[  219.783832]  ? rcu_read_lock_bh_held+0xb0/0xb0
> > <4>[  219.783836]  ? _raw_spin_unlock_irq+0x1f/0x40
> > <4>[  219.783845]  process_one_work+0x8ca/0x1870
> > <4>[  219.783848]  ? lock_acquire+0x1ac/0x8a0
> > <4>[  219.783852]  ? worker_thread+0x1d0/0xb80
> > <4>[  219.783864]  ? pwq_dec_nr_in_flight+0x2c0/0x2c0
> > <4>[  219.783870]  ? do_raw_spin_lock+0x129/0x290
> > <4>[  219.783886]  worker_thread+0x82/0xb80
> > <4>[  219.783895]  ? __kthread_parkme+0xaf/0x1b0
> > <4>[  219.783902]  ? process_one_work+0x1870/0x1870
> > <4>[  219.783906]  kthread+0x34e/0x420
> > <4>[  219.783911]  ? kthread_create_on_node+0xc0/0xc0
> > <4>[  219.783918]  ret_from_fork+0x3a/0x50
> > 
> > <3>[  219.783950] Allocated by task 1264:
> > <4>[  219.783975]  save_stack+0x19/0x40
> > <4>[  219.783978]  __kasan_kmalloc.constprop.3+0xa0/0xd0
> > <4>[  219.784029]  i915_gem_create_context+0xa2/0xab8 [i915]
> > <4>[  219.784081]  i915_gem_context_create_ioctl+0x1fa/0x450 [i915]
> > <4>[  219.784085]  drm_ioctl_kernel+0x1d8/0x270
> > <4>[  219.784088]  drm_ioctl+0x676/0x930
> > <4>[  219.784092]  ksys_ioctl+0xb7/0xe0
> > <4>[  219.784096]  __x64_sys_ioctl+0x6a/0xb0
> > <4>[  219.784100]  do_syscall_64+0x94/0x530
> > <4>[  219.784103]  entry_SYSCALL_64_after_hwframe+0x49/0xb3
> > 
> > <3>[  219.784120] Freed by task 12:
> > <4>[  219.784141]  save_stack+0x19/0x40
> > <4>[  219.784145]  __kasan_slab_free+0x130/0x180
> > <4>[  219.784148]  kmem_cache_free_bulk+0x1bd/0x500
> > <4>[  219.784152]  kfree_rcu_work+0x1d8/0x890
> > <4>[  219.784155]  process_one_work+0x8ca/0x1870
> > <4>[  219.784158]  worker_thread+0x82/0xb80
> > <4>[  219.784162]  kthread+0x34e/0x420
> > <4>[  219.784165]  ret_from_fork+0x3a/0x50
> > 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > ---
> >   drivers/gpu/drm/i915/i915_gpu_error.c | 12 +++++++-----
> >   1 file changed, 7 insertions(+), 5 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
> > index 4d54dba35302..a976cd67b3b3 100644
> > --- a/drivers/gpu/drm/i915/i915_gpu_error.c
> > +++ b/drivers/gpu/drm/i915/i915_gpu_error.c
> > @@ -1207,8 +1207,6 @@ static void engine_record_registers(struct intel_engine_coredump *ee)
> >   static void record_request(const struct i915_request *request,
> >                          struct i915_request_coredump *erq)
> >   {
> > -     const struct i915_gem_context *ctx;
> > -
> >       erq->flags = request->fence.flags;
> >       erq->context = request->fence.context;
> >       erq->seqno = request->fence.seqno;
> > @@ -1218,9 +1216,13 @@ static void record_request(const struct i915_request *request,
> >   
> >       erq->pid = 0;
> >       rcu_read_lock();
> > -     ctx = rcu_dereference(request->context->gem_context);
> > -     if (ctx)
> > -             erq->pid = pid_nr(ctx->pid);
> > +     if (!intel_context_is_closed(request->context)) {
> > +             const struct i915_gem_context *ctx;
> > +
> > +             ctx = rcu_dereference(request->context->gem_context);
> > +             if (ctx)
> > +                     erq->pid = pid_nr(ctx->pid);
> > +     }
> >       rcu_read_unlock();
> >   }
> >   
> > 
> 
> In the client busyness series I move the GEM ctx put to free_engines_rcu, 
> at which point the closed check here is no longer needed. Should we delay 
> that put right now to simplify? Maybe not... I'll remember to tweak it in 
> my series.

Yeah, it's not the right answer, and I was hoping that you would have a
better plan :)

But it fixes a use-after-free we see at the moment, so it's a temporary bandaid.

> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> Regards,
> 
> Tvrtko
> 
> P.S. Fixes: 2e46a2a0b0149f951b63be1b5df6514676fed213 ?

Looks to be the culprit, yes.
-Chris



Thread overview: 7+ messages
2020-04-28  9:02 [Intel-gfx] [PATCH] drm/i915: Avoid dereferencing a dead context Chris Wilson
2020-04-28 14:46 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
2020-04-28 15:10 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2020-04-28 17:44 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
2020-04-28 18:06 ` [Intel-gfx] [PATCH] " Abodunrin, Akeem G
2020-04-29 13:42 ` Tvrtko Ursulin
2020-04-29 14:15   ` Chris Wilson
