From: akash.goel@intel.com
To: intel-gfx@lists.freedesktop.org
Cc: Akash Goel <akash.goel@intel.com>
Subject: [PATCH v2] drm/i915 : Avoid superfluous invalidation of CPU cache lines
Date: Wed, 25 Nov 2015 10:59:26 +0530
Message-ID: <1448429366-15316-1-git-send-email-akash.goel@intel.com>
In-Reply-To: <20151124223938.GC16277@nuc-i3427.alporthouse.com>

From: Akash Goel <akash.goel@intel.com>

When an object is moved out of the CPU read domain, its cachelines
are not invalidated immediately; the invalidation is deferred until
the next time the object is brought back into the CPU read domain.
But the invalidation is currently done unconditionally, i.e. even when
the cachelines were already flushed earlier, as the object moved out
of the CPU write domain. Skipping the clflush in that case is a small
but real optimization. The case is not hypothetical, though it is
unlikely to occur often. The aim is to detect changes to the backing
storage whilst the data is potentially in the CPU cache, and to
clflush only in those cases.
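
As a sketch of the intended flag semantics, consider this simplified
standalone model (illustrative only, not the kernel code; clflush_obj()
and move_to_cpu_read_domain() are hypothetical stand-ins for
i915_gem_clflush_object() and i915_gem_object_set_to_cpu_domain()):

	#include <stdbool.h>

	/* Simplified model of the two tracking bits. */
	struct obj {
		bool cache_dirty;	/* CPU writes not yet flushed out */
		bool cache_clean;	/* every cacheline flushed, none stale */
	};

	static void clflush_obj(struct obj *o)
	{
		/* ... flush all cachelines backing the object ... */
		o->cache_dirty = false;
		o->cache_clean = true;
	}

	static void move_to_cpu_read_domain(struct obj *o)
	{
		/* Invalidate only if a cacheline may hold stale data. */
		if (!o->cache_clean)
			clflush_obj(o);
		/* CPU reads refill cachelines, so the cache can no
		 * longer be assumed clean afterwards. */
		o->cache_clean = false;
	}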

v2: Made the comment more verbose (Ville/Chris)
    Added doc for 'cache_clean' field (Daniel)

Testcase: igt/gem_concurrent_blit
Testcase: igt/benchmarks/gem_set_domain
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Akash Goel <akash.goel@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h |  9 +++++++++
 drivers/gpu/drm/i915/i915_gem.c | 14 +++++++++++++-
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index df9316f..b63d3ab 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2099,6 +2099,15 @@ struct drm_i915_gem_object {
 	unsigned int cache_level:3;
 	unsigned int cache_dirty:1;
 
+	/*
+	 * Tracks whether the CPU cache has been completely clflushed.
+	 * !cache_clean does not imply cache_dirty (the cachelines may hold
+	 * data, just none that has been dirtied), but cache_clean does
+	 * imply !cache_dirty (no data in the cachelines, so nothing can be
+	 * dirty). In effect, cache_dirty tracks omitted clflushes.
+	 */
+	unsigned int cache_clean:1;
+
 	unsigned int frontbuffer_bits:INTEL_FRONTBUFFER_BITS;
 
 	unsigned int pin_display;
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 19c282b..2b6a38d 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -3552,6 +3552,7 @@ i915_gem_clflush_object(struct drm_i915_gem_object *obj,
 	trace_i915_gem_object_clflush(obj);
 	drm_clflush_sg(obj->pages);
 	obj->cache_dirty = false;
+	obj->cache_clean = true;
 
 	return true;
 }
@@ -3982,7 +3983,18 @@ i915_gem_object_set_to_cpu_domain(struct drm_i915_gem_object *obj, bool write)
 
 	/* Flush the CPU cache if it's still invalid. */
 	if ((obj->base.read_domains & I915_GEM_DOMAIN_CPU) == 0) {
-		i915_gem_clflush_object(obj, false);
+		/* If an object is moved out of the CPU domain following a
+		 * CPU write and before a GPU or GTT write, we will clflush
+		 * it out of the CPU cache and mark the cache as clean.
+		 * As the object has not since been accessed by the CPU
+		 * (i.e. the CPU cache is still clean and the object is out
+		 * of the CPU domain), we know that no CPU cacheline holds
+		 * stale data, and so we can skip invalidating the CPU
+		 * cache in preparation for reading from the object.
+		 */
+		if (!obj->cache_clean)
+			i915_gem_clflush_object(obj, false);
+		obj->cache_clean = false;
 
 		obj->base.read_domains |= I915_GEM_DOMAIN_CPU;
 	}
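
For illustration, here is a hypothetical sequence of domain transitions
that now hits the optimized path (it assumes the usual clflush performed
as the object leaves the CPU write domain):

	/*
	 * CPU write to object    -> cache_dirty = true
	 * leave CPU write domain -> clflush: cache_dirty = false,
	 *                                    cache_clean = true
	 * GPU/GTT write          -> backing storage changes, but the CPU
	 *                           cache holds no data and stays clean
	 * enter CPU read domain  -> cache_clean still set, so the
	 *                           invalidating clflush is skipped
	 */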
-- 
1.9.2

Thread overview: 25+ messages
2015-11-24 10:05 [PATCH] drm/i915 : Avoid superfluous invalidation of CPU cache lines akash.goel
2015-11-24 10:04 ` Ville Syrjälä
2015-11-24 18:14   ` Daniel Vetter
2015-11-24 22:39     ` Chris Wilson
2015-11-25  5:29       ` akash.goel [this message]
2015-11-25  9:21       ` Daniel Vetter
2015-11-25  9:27         ` Goel, Akash
2015-11-25 10:00           ` Daniel Vetter
2015-11-30  6:24             ` Goel, Akash
2015-11-30  8:15               ` Daniel Vetter
2015-12-01 12:07                 ` Goel, Akash
2015-11-25 11:02       ` Ville Syrjälä
2015-11-25 17:28         ` Chris Wilson
2015-11-26  3:39           ` Goel, Akash
2015-11-26 10:57             ` Chris Wilson
2015-11-30  7:11               ` [PATCH v3] " akash.goel
2015-12-01 12:34                 ` Ville Syrjälä
2015-12-01 13:09                   ` Chris Wilson
2015-12-01 13:28                     ` Ville Syrjälä
2015-12-01 13:49                       ` Chris Wilson
2015-12-01 14:00                         ` Ville Syrjälä
2015-12-01 15:00                           ` Goel, Akash
2015-12-02  8:07                             ` [PATCH v4] " akash.goel
2015-12-06 17:03                               ` Chris Wilson
2015-11-24 10:10 ` [PATCH] " Chris Wilson
