From: Michał Winiarski <michal@hardline.pl>
To: intel-gfx@lists.freedesktop.org
Subject: Re: [Intel-gfx] [RFC PATCH 1/4] drm/i915: Drop user contexts on driver remove
Date: Wed, 27 May 2020 16:23:10 +0200	[thread overview]
Message-ID: <159058939064.251664.1906826864732740129@macragge.hardline.pl> (raw)
In-Reply-To: <20200518181720.14625-2-janusz.krzysztofik@linux.intel.com>

Quoting Janusz Krzysztofik (2020-05-18 20:17:17)
> Contexts associated with open device file descriptors together with
> their assigned address spaces are now closed on device file close.  On
> address space closure its associated DMA mappings are revoked.  If the
> device is removed while being open, subsequent attempts to revoke
> those mappings while closing the device file descriptor may be
> judged by intel-iommu code as a bug and result in kernel panic.
> 
> Since user contexts become useless after the device is no longer
> available, drop them on device removal.
> 
> <4> [36.900985] ------------[ cut here ]------------
> <2> [36.901005] kernel BUG at drivers/iommu/intel-iommu.c:3717!
> <4> [36.901105] invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
> <4> [36.901117] CPU: 0 PID: 39 Comm: kworker/u8:1 Tainted: G     U  W         5.7.0-rc5-CI-CI_DRM_8485+ #1
> <4> [36.901133] Hardware name: Intel Corporation Elkhart Lake Embedded Platform/ElkhartLake LPDDR4x T3 CRB, BIOS EHLSFWI1.R00.1484.A00.1911290833 11/29/2019
> <4> [36.901250] Workqueue: i915 __i915_vm_release [i915]
> <4> [36.901264] RIP: 0010:intel_unmap+0x1f5/0x230
> <4> [36.901274] Code: 01 e8 9f bc a9 ff 85 c0 74 09 80 3d df 60 09 01 00 74 19 65 ff 0d 13 12 97 7e 0f 85 fc fe ff ff e8 82 b0 95 ff e9 f2 fe ff ff <0f> 0b e8 d4 bd a9 ff 85 c0 75 de 48 c7 c2 10 84 2c 82 be 54 00 00
> <4> [36.901302] RSP: 0018:ffffc900001ebdc0 EFLAGS: 00010246
> <4> [36.901313] RAX: 0000000000000000 RBX: ffff8882561dd000 RCX: 0000000000000000
> <4> [36.901324] RDX: 0000000000001000 RSI: 00000000ffd9c000 RDI: ffff888274c94000
> <4> [36.901336] RBP: ffff888274c940b0 R08: 0000000000000000 R09: 0000000000000001
> <4> [36.901348] R10: 000000000a25d812 R11: 00000000112af2d4 R12: ffff888252c70200
> <4> [36.901360] R13: 00000000ffd9c000 R14: 0000000000001000 R15: ffff8882561dd010
> <4> [36.901372] FS:  0000000000000000(0000) GS:ffff888278000000(0000) knlGS:0000000000000000
> <4> [36.901386] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> <4> [36.901396] CR2: 00007f06def54950 CR3: 0000000255844000 CR4: 0000000000340ef0
> <4> [36.901408] Call Trace:
> <4> [36.901418]  ? process_one_work+0x1de/0x600
> <4> [36.901494]  cleanup_page_dma+0x37/0x70 [i915]
> <4> [36.901573]  free_pd+0x9/0x20 [i915]
> <4> [36.901644]  gen8_ppgtt_cleanup+0x59/0xc0 [i915]
> <4> [36.901721]  __i915_vm_release+0x14/0x30 [i915]
> <4> [36.901733]  process_one_work+0x268/0x600
> <4> [36.901744]  ? __schedule+0x307/0x8d0
> <4> [36.901756]  worker_thread+0x37/0x380
> <4> [36.901766]  ? process_one_work+0x600/0x600
> <4> [36.901775]  kthread+0x140/0x160
> <4> [36.901783]  ? kthread_park+0x80/0x80
> <4> [36.901792]  ret_from_fork+0x24/0x50
> <4> [36.901804] Modules linked in: mei_hdcp i915 x86_pkg_temp_thermal coretemp crct10dif_pclmul crc32_pclmul ghash_clmulni_intel ax88179_178a usbnet mii mei_me mei prime_numbers intel_lpss_pci
> <4> [36.901857] ---[ end trace 52d1b4d81f8d1ea7 ]---
> 
> Signed-off-by: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
> ---
>  drivers/gpu/drm/i915/gem/i915_gem_context.c | 38 +++++++++++++++++++++
>  drivers/gpu/drm/i915/gem/i915_gem_context.h |  1 +
>  drivers/gpu/drm/i915/i915_gem.c             |  2 ++
>  3 files changed, 41 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> index 900ea8b7fc8f..0096a69fbfd3 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> @@ -927,6 +927,44 @@ void i915_gem_driver_release__contexts(struct drm_i915_private *i915)
>         rcu_barrier(); /* and flush the left over RCU frees */
>  }
>  
> +void i915_gem_driver_remove__contexts(struct drm_i915_private *i915)
> +{
> +       struct i915_gem_context *ctx, *cn;
> +
> +       list_for_each_entry_safe(ctx, cn, &i915->gem.contexts.list, link) {

You're not removing ctx from gem.contexts.list inside this loop.

> +               struct drm_i915_file_private *file_priv = ctx->file_priv;
> +               struct i915_gem_context *entry;
> +               unsigned long int id;
> +
> +               if (i915_gem_context_is_closed(ctx) || IS_ERR(file_priv))
> +                       continue;
> +
> +               xa_for_each(&file_priv->context_xa, id, entry) {

We're iterating over contexts?
I thought we were already doing that by going over i915->gem.contexts.list.

> +                       struct i915_address_space *vm;
> +                       unsigned long int idx;
> +
> +                       if (entry != ctx)
> +                               continue;
> +
> +                       xa_erase(&file_priv->context_xa, id);
> +
> +                       if (id)
> +                               break;

Ok... So we're exiting early for !default contexts?

> +
> +                       xa_for_each(&file_priv->vm_xa, idx, vm) {
> +                               xa_erase(&file_priv->vm_xa, idx);
> +                               i915_vm_put(vm);
> +                       }

And for each vm as yet another inner loop?
Why? After the first round of going over contexts this is going to be empty.

> +
> +                       break;

Ah... But there won't be another round...

Yeah, the intent behind those loops is not clear to me.

Can this all be expressed as (note: pseudocode):

/* for all open fds */
list_for_each_entry(file, &dev->filelist, lhead) {
	struct drm_i915_file_private *file_priv = file->driver_priv;

	/* for each context registered to this file */
	xa_for_each(&file_priv->context_xa, id, ctx) {
		/* drop ctx */
	}
	/* for each VM registered to this file */
	xa_for_each(&file_priv->vm_xa, idx, vm) {
		/* drop VM */
	}
}

Or am I missing something important here?

-Michał

> +               }
> +
> +               context_close(ctx);
> +       }
> +
> +       i915_gem_driver_release__contexts(i915);
> +}
> +
>  static int gem_context_register(struct i915_gem_context *ctx,
>                                 struct drm_i915_file_private *fpriv,
>                                 u32 *id)
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.h b/drivers/gpu/drm/i915/gem/i915_gem_context.h
> index 3702b2fb27ab..62808bea9239 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.h
> @@ -110,6 +110,7 @@ i915_gem_context_clear_user_engines(struct i915_gem_context *ctx)
>  
>  /* i915_gem_context.c */
>  void i915_gem_init__contexts(struct drm_i915_private *i915);
> +void i915_gem_driver_remove__contexts(struct drm_i915_private *i915);
>  void i915_gem_driver_release__contexts(struct drm_i915_private *i915);
>  
>  int i915_gem_context_open(struct drm_i915_private *i915,
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 0cbcb9f54e7d..87d3c4f5b6c6 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -1189,6 +1189,8 @@ void i915_gem_driver_remove(struct drm_i915_private *dev_priv)
>         /* Flush any outstanding unpin_work. */
>         i915_gem_drain_workqueue(dev_priv);
>  
> +       i915_gem_driver_remove__contexts(dev_priv);
> +
>         i915_gem_drain_freed_objects(dev_priv);
>  }
>  
> -- 
> 2.21.1
> 
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx