From: Matthew Auld <matthew.william.auld@gmail.com>
To: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Intel Graphics Development <intel-gfx@lists.freedesktop.org>
Subject: Re: [Intel-gfx] [PATCH 1/6] drm/i915/gem: Amalgamate clflushes on suspend
Date: Tue, 19 Jan 2021 15:30:41 +0000 [thread overview]
Message-ID: <CAM0jSHPcQVc7SEVBhkAd2aVa=g-EAeKZ-5LeMK=tSGriBB8vkw@mail.gmail.com> (raw)
In-Reply-To: <20210119144912.12653-1-chris@chris-wilson.co.uk>
On Tue, 19 Jan 2021 at 14:49, Chris Wilson <chris@chris-wilson.co.uk> wrote:
>
> When flushing objects larger than the CPU cache it is preferable to use
> a single wbinvd() rather than overlapping clflush(). At runtime, we
> avoid wbinvd() due to its system-wide latencies, but during
> single-threaded suspend, no one will observe the imposed latency and we
> can opt for the faster wbinvd to clear all objects in a single hit.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
> drivers/gpu/drm/i915/gem/i915_gem_pm.c | 40 +++++++++-----------------
> 1 file changed, 13 insertions(+), 27 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pm.c b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
> index 40d3e40500fa..38c1298cb14b 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_pm.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
> @@ -11,6 +11,12 @@
>
> #include "i915_drv.h"
>
> +#if defined(CONFIG_X86)
> +#include <asm/smp.h>
> +#else
> +#define wbinvd_on_all_cpus()
> +#endif
> +
> void i915_gem_suspend(struct drm_i915_private *i915)
> {
> GEM_TRACE("%s\n", dev_name(i915->drm.dev));
> @@ -32,13 +38,6 @@ void i915_gem_suspend(struct drm_i915_private *i915)
> i915_gem_drain_freed_objects(i915);
> }
>
> -static struct drm_i915_gem_object *first_mm_object(struct list_head *list)
> -{
> - return list_first_entry_or_null(list,
> - struct drm_i915_gem_object,
> - mm.link);
> -}
> -
> void i915_gem_suspend_late(struct drm_i915_private *i915)
> {
> struct drm_i915_gem_object *obj;
> @@ -48,6 +47,7 @@ void i915_gem_suspend_late(struct drm_i915_private *i915)
> NULL
> }, **phase;
> unsigned long flags;
> + bool flush = false;
>
> /*
> * Neither the BIOS, ourselves or any other kernel
> @@ -73,29 +73,15 @@ void i915_gem_suspend_late(struct drm_i915_private *i915)
>
> spin_lock_irqsave(&i915->mm.obj_lock, flags);
> for (phase = phases; *phase; phase++) {
> - LIST_HEAD(keep);
> -
> - while ((obj = first_mm_object(*phase))) {
> - list_move_tail(&obj->mm.link, &keep);
> -
> - /* Beware the background _i915_gem_free_objects */
> - if (!kref_get_unless_zero(&obj->base.refcount))
> - continue;
> -
> - spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
> -
> - i915_gem_object_lock(obj, NULL);
> - drm_WARN_ON(&i915->drm,
> - i915_gem_object_set_to_gtt_domain(obj, false));
> - i915_gem_object_unlock(obj);
> - i915_gem_object_put(obj);
> -
> - spin_lock_irqsave(&i915->mm.obj_lock, flags);
> + list_for_each_entry(obj, *phase, mm.link) {
> + if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ))
> + flush |= (obj->read_domains & I915_GEM_DOMAIN_CPU) == 0;
> + __start_cpu_write(obj); /* presume auto-hibernate */
> }
> -
> - list_splice_tail(&keep, *phase);
> }
> spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
> + if (flush)
> + wbinvd_on_all_cpus();
Hmmm, this builds on !CONFIG_X86?
> }
>
> void i915_gem_resume(struct drm_i915_private *i915)
> --
> 2.20.1
>
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
Thread overview (19+ messages):
2021-01-19 14:49 [Intel-gfx] [PATCH 1/6] drm/i915/gem: Amalgamate clflushes on suspend Chris Wilson
2021-01-19 14:49 ` [Intel-gfx] [PATCH 2/6] drm/i915/gem: Amalgamate clflushes on freeze Chris Wilson
2021-01-19 15:34 ` Matthew Auld
2021-01-23 14:46 ` Guenter Roeck
2021-01-23 14:53 ` Chris Wilson
2021-01-19 14:49 ` [Intel-gfx] [PATCH 3/6] drm/i915/gem: Move stolen node into GEM object union Chris Wilson
2021-01-19 15:40 ` Matthew Auld
2021-01-19 14:49 ` [Intel-gfx] [PATCH 4/6] drm/i915/gem: Use shrinkable status for unknown swizzle quirks Chris Wilson
2021-01-19 16:13 ` Matthew Auld
2021-01-19 14:49 ` [Intel-gfx] [PATCH 5/6] drm/i915/gem: Make i915_gem_object_flush_write_domain() static Chris Wilson
2021-01-19 16:16 ` Matthew Auld
2021-01-19 14:49 ` [Intel-gfx] [PATCH 6/6] drm/i915/gem: Drop lru bumping on display unpinning Chris Wilson
2021-01-19 16:38 ` Matthew Auld
2021-01-19 17:02 ` Chris Wilson
2021-01-19 17:14 ` Matthew Auld
2021-01-19 15:30 ` Matthew Auld [this message]
2021-01-19 15:37 ` [Intel-gfx] [PATCH 1/6] drm/i915/gem: Amalgamate clflushes on suspend Chris Wilson
2021-01-19 17:26 ` Matthew Auld
2021-01-19 21:50 ` [Intel-gfx] ✗ Fi.CI.BUILD: failure for series starting with [1/6] drm/i915/gem: Amalgamate clflushes on suspend (rev2) Patchwork