* [CI v2 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled.
@ 2017-12-18 10:24 Dhinakaran Pandiyan
  2017-12-18 10:24 ` [CI v2 2/5] drm/vblank: Restoring vblank counts after device runtime PM events Dhinakaran Pandiyan
                   ` (4 more replies)
  0 siblings, 5 replies; 6+ messages in thread
From: Dhinakaran Pandiyan @ 2017-12-18 10:24 UTC (permalink / raw)
  To: intel-gfx; +Cc: Dhinakaran Pandiyan

Updating the vblank counts requires register reads, and these reads may not
return meaningful values after vblank interrupts are disabled, as the
device may enter a low power state. A further improvement would be to let
the driver save the vblank counts before entering a low power state, but
that is left for the future.

Also, in the case where drm_crtc_vblank_off() is disabling vblanks,
disable them only after reading the HW counter.

Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
---
 drivers/gpu/drm/drm_vblank.c | 23 +++++++++--------------
 1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
index 32d9bcf5be7f..7eee82c06ed8 100644
--- a/drivers/gpu/drm/drm_vblank.c
+++ b/drivers/gpu/drm/drm_vblank.c
@@ -347,23 +347,14 @@ void drm_vblank_disable_and_save(struct drm_device *dev, unsigned int pipe)
 	spin_lock_irqsave(&dev->vblank_time_lock, irqflags);
 
 	/*
-	 * Only disable vblank interrupts if they're enabled. This avoids
-	 * calling the ->disable_vblank() operation in atomic context with the
-	 * hardware potentially runtime suspended.
-	 */
-	if (vblank->enabled) {
-		__disable_vblank(dev, pipe);
-		vblank->enabled = false;
-	}
-
-	/*
-	 * Always update the count and timestamp to maintain the
+	 * Update the count and timestamp to maintain the
 	 * appearance that the counter has been ticking all along until
 	 * this time. This makes the count account for the entire time
 	 * between drm_crtc_vblank_on() and drm_crtc_vblank_off().
 	 */
 	drm_update_vblank_count(dev, pipe, false);
-
+	__disable_vblank(dev, pipe);
+	vblank->enabled = false;
 	spin_unlock_irqrestore(&dev->vblank_time_lock, irqflags);
 }
 
@@ -1122,8 +1113,12 @@ void drm_crtc_vblank_off(struct drm_crtc *crtc)
 		      pipe, vblank->enabled, vblank->inmodeset);
 
 	/* Avoid redundant vblank disables without previous
-	 * drm_crtc_vblank_on(). */
-	if (drm_core_check_feature(dev, DRIVER_ATOMIC) || !vblank->inmodeset)
+	 * drm_crtc_vblank_on() and only disable them if they're enabled. This
+	 * avoids calling the ->disable_vblank() operation in atomic context
+	 * with the hardware potentially runtime suspended.
+	 */
+	if ((drm_core_check_feature(dev, DRIVER_ATOMIC) || !vblank->inmodeset) &&
+	    vblank->enabled)
 		drm_vblank_disable_and_save(dev, pipe);
 
 	wake_up(&vblank->queue);
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


* [CI v2 2/5] drm/vblank: Restoring vblank counts after device runtime PM events.
  2017-12-18 10:24 [CI v2 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan
@ 2017-12-18 10:24 ` Dhinakaran Pandiyan
  2017-12-18 10:24 ` [CI v2 3/5] drm/i915: Use an atomic_t array to track power domain use count Dhinakaran Pandiyan
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Dhinakaran Pandiyan @ 2017-12-18 10:24 UTC (permalink / raw)
  To: intel-gfx; +Cc: Dhinakaran Pandiyan

The HW frame counter can get reset when the device enters low power
states, and this messes up any subsequent vblank count updates. So, compute
the vblank interrupts missed during the low power state from the
timestamps. This is similar to drm_crtc_vblank_on(), except that it does
not enable vblank interrupts, because this function is expected to be
called from the driver's enable_vblank() vfunc.

Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
---
 drivers/gpu/drm/drm_vblank.c | 33 +++++++++++++++++++++++++++++++++
 include/drm/drm_vblank.h     |  1 +
 2 files changed, 34 insertions(+)

diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
index 7eee82c06ed8..494e2cff6e55 100644
--- a/drivers/gpu/drm/drm_vblank.c
+++ b/drivers/gpu/drm/drm_vblank.c
@@ -1230,6 +1230,39 @@ void drm_crtc_vblank_on(struct drm_crtc *crtc)
 }
 EXPORT_SYMBOL(drm_crtc_vblank_on);
 
+void drm_crtc_vblank_restore(struct drm_device *dev, unsigned int pipe)
+{
+	ktime_t t_vblank;
+	struct drm_vblank_crtc *vblank;
+	int framedur_ns;
+	u64 diff_ns;
+	u32 cur_vblank, diff = 1;
+	int count = DRM_TIMESTAMP_MAXRETRIES;
+
+	if (WARN_ON(pipe >= dev->num_crtcs))
+		return;
+
+	vblank = &dev->vblank[pipe];
+	WARN_ONCE((drm_debug & DRM_UT_VBL) && !vblank->framedur_ns,
+		  "Cannot compute missed vblanks without frame duration\n");
+	framedur_ns = vblank->framedur_ns;
+
+	do {
+		cur_vblank = __get_vblank_counter(dev, pipe);
+		drm_get_last_vbltimestamp(dev, pipe, &t_vblank, false);
+	} while (cur_vblank != __get_vblank_counter(dev, pipe) && --count > 0);
+
+	diff_ns = ktime_to_ns(ktime_sub(t_vblank, vblank->time));
+	if (framedur_ns)
+		diff = DIV_ROUND_CLOSEST_ULL(diff_ns, framedur_ns);
+
+
+	DRM_DEBUG_VBL("missed %d vblanks in %lld ns, frame duration=%d ns, hw_diff=%d\n",
+		      diff, diff_ns, framedur_ns, cur_vblank - vblank->last);
+	store_vblank(dev, pipe, diff, t_vblank, cur_vblank);
+}
+EXPORT_SYMBOL(drm_crtc_vblank_restore);
+
 static void drm_legacy_vblank_pre_modeset(struct drm_device *dev,
 					  unsigned int pipe)
 {
diff --git a/include/drm/drm_vblank.h b/include/drm/drm_vblank.h
index 848b463a0af5..aafcbef91bd7 100644
--- a/include/drm/drm_vblank.h
+++ b/include/drm/drm_vblank.h
@@ -180,6 +180,7 @@ void drm_crtc_vblank_off(struct drm_crtc *crtc);
 void drm_crtc_vblank_reset(struct drm_crtc *crtc);
 void drm_crtc_vblank_on(struct drm_crtc *crtc);
 u32 drm_crtc_accurate_vblank_count(struct drm_crtc *crtc);
+void drm_crtc_vblank_restore(struct drm_device *dev, unsigned int pipe);
 
 bool drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev,
 					   unsigned int pipe, int *max_error,
-- 
2.11.0


* [CI v2 3/5] drm/i915: Use an atomic_t array to track power domain use count.
  2017-12-18 10:24 [CI v2 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan
  2017-12-18 10:24 ` [CI v2 2/5] drm/vblank: Restoring vblank counts after device runtime PM events Dhinakaran Pandiyan
@ 2017-12-18 10:24 ` Dhinakaran Pandiyan
  2017-12-18 10:24 ` [CI v2 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts Dhinakaran Pandiyan
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Dhinakaran Pandiyan @ 2017-12-18 10:24 UTC (permalink / raw)
  To: intel-gfx; +Cc: Dhinakaran Pandiyan

Convert the power_domains->domain_use_count array, which tracks the
per-domain use counts, to atomic_t. This makes it possible to read and
write the use counts outside of the power domains mutex.

Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
---
 drivers/gpu/drm/i915/i915_debugfs.c     |  2 +-
 drivers/gpu/drm/i915/i915_drv.h         |  2 +-
 drivers/gpu/drm/i915/intel_runtime_pm.c | 11 +++++------
 3 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index d8c6ec3cca71..2c4fd5149ffc 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -2772,7 +2772,7 @@ static int i915_power_domain_info(struct seq_file *m, void *unused)
 		for_each_power_domain(power_domain, power_well->domains)
 			seq_printf(m, "  %-23s %d\n",
 				 intel_display_power_domain_str(power_domain),
-				 power_domains->domain_use_count[power_domain]);
+				 atomic_read(&power_domains->domain_use_count[power_domain]));
 	}
 
 	mutex_unlock(&power_domains->lock);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 61196ff93901..a10f31c9e4a9 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1490,7 +1490,7 @@ struct i915_power_domains {
 	int power_well_count;
 
 	struct mutex lock;
-	int domain_use_count[POWER_DOMAIN_NUM];
+	atomic_t domain_use_count[POWER_DOMAIN_NUM];
 	struct i915_power_well *power_wells;
 };
 
diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
index 96ab74f3d101..992caec1fbc4 100644
--- a/drivers/gpu/drm/i915/intel_runtime_pm.c
+++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
@@ -1453,7 +1453,7 @@ __intel_display_power_get_domain(struct drm_i915_private *dev_priv,
 	for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain))
 		intel_power_well_get(dev_priv, power_well);
 
-	power_domains->domain_use_count[domain]++;
+	atomic_inc(&power_domains->domain_use_count[domain]);
 }
 
 /**
@@ -1539,10 +1539,9 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
 
 	mutex_lock(&power_domains->lock);
 
-	WARN(!power_domains->domain_use_count[domain],
-	     "Use count on domain %s is already zero\n",
+	WARN(atomic_dec_return(&power_domains->domain_use_count[domain]) < 0,
+	     "Use count on domain %s was already zero\n",
 	     intel_display_power_domain_str(domain));
-	power_domains->domain_use_count[domain]--;
 
 	for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain))
 		intel_power_well_put(dev_priv, power_well);
@@ -3049,7 +3048,7 @@ static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv)
 		for_each_power_domain(domain, power_well->domains)
 			DRM_DEBUG_DRIVER("  %-23s %d\n",
 					 intel_display_power_domain_str(domain),
-					 power_domains->domain_use_count[domain]);
+					 atomic_read(&power_domains->domain_use_count[domain]));
 	}
 }
 
@@ -3092,7 +3091,7 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
 
 		domains_count = 0;
 		for_each_power_domain(domain, power_well->domains)
-			domains_count += power_domains->domain_use_count[domain];
+			domains_count += atomic_read(&power_domains->domain_use_count[domain]);
 
 		if (power_well->count != domains_count) {
 			DRM_ERROR("power well %s refcount/domain refcount mismatch "
-- 
2.11.0


* [CI v2 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts
  2017-12-18 10:24 [CI v2 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan
  2017-12-18 10:24 ` [CI v2 2/5] drm/vblank: Restoring vblank counts after device runtime PM events Dhinakaran Pandiyan
  2017-12-18 10:24 ` [CI v2 3/5] drm/i915: Use an atomic_t array to track power domain use count Dhinakaran Pandiyan
@ 2017-12-18 10:24 ` Dhinakaran Pandiyan
  2017-12-18 10:24 ` [CI v2 5/5] drm/i915: Use the vblank power domain to disallow or disable DC states Dhinakaran Pandiyan
  2017-12-18 10:45 ` ✗ Fi.CI.BAT: failure for series starting with [CI,v2,1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Patchwork
  4 siblings, 0 replies; 6+ messages in thread
From: Dhinakaran Pandiyan @ 2017-12-18 10:24 UTC (permalink / raw)
  To: intel-gfx; +Cc: Dhinakaran Pandiyan

When DC states are enabled and PSR is active, the hardware enters DC5/DC6
states, resulting in frame counter resets. These resets mess up the vblank
counting logic. So, introduce a new power domain to disable DC states when
vblank interrupts are required and to disallow DC states when vblank
interrupts are already enabled. Since this power domain reference needs to
be acquired and released in atomic context, the corresponding _get() and
_put() methods skip the power domains mutex.

Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h         |   8 ++
 drivers/gpu/drm/i915/intel_drv.h        |   3 +
 drivers/gpu/drm/i915/intel_runtime_pm.c | 196 +++++++++++++++++++++++++++++---
 3 files changed, 193 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index a10f31c9e4a9..5494582fdfea 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -397,6 +397,7 @@ enum intel_display_power_domain {
 	POWER_DOMAIN_AUX_C,
 	POWER_DOMAIN_AUX_D,
 	POWER_DOMAIN_GMBUS,
+	POWER_DOMAIN_VBLANK,
 	POWER_DOMAIN_MODESET,
 	POWER_DOMAIN_GT_IRQ,
 	POWER_DOMAIN_INIT,
@@ -1476,7 +1477,14 @@ struct i915_power_well {
 			bool has_vga:1;
 			bool has_fuses:1;
 		} hsw;
+		struct {
+			bool was_disabled;
+		} dc_off;
 	};
+
+	spinlock_t lock;
+	bool supports_atomic_ctx;
+
 	const struct i915_power_well_ops *ops;
 };
 
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 30f791f89d64..164e62cb047b 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -1797,6 +1797,9 @@ bool intel_display_power_get_if_enabled(struct drm_i915_private *dev_priv,
 					enum intel_display_power_domain domain);
 void intel_display_power_put(struct drm_i915_private *dev_priv,
 			     enum intel_display_power_domain domain);
+void intel_display_power_vblank_get(struct drm_i915_private *dev_priv,
+				    bool *needs_restore);
+void intel_display_power_vblank_put(struct drm_i915_private *dev_priv);
 
 static inline void
 assert_rpm_device_not_suspended(struct drm_i915_private *dev_priv)
diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
index 992caec1fbc4..9cbb332cb418 100644
--- a/drivers/gpu/drm/i915/intel_runtime_pm.c
+++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
@@ -56,6 +56,16 @@ static struct i915_power_well *
 lookup_power_well(struct drm_i915_private *dev_priv,
 		  enum i915_power_well_id power_well_id);
 
+/* optimize for the case when this function is called from atomic context,
+ * although this is unlikely */
+#define power_well_lock(power_well, flags)			\
+	if (likely(power_well->supports_atomic_ctx))		\
+		spin_lock_irqsave(&power_well->lock, flags)
+
+#define power_well_unlock(power_well, flags)			\
+	if (likely(power_well->supports_atomic_ctx))		\
+		spin_unlock_irqrestore(&power_well->lock, flags)
+
 const char *
 intel_display_power_domain_str(enum intel_display_power_domain domain)
 {
@@ -126,6 +136,8 @@ intel_display_power_domain_str(enum intel_display_power_domain domain)
 		return "AUX_D";
 	case POWER_DOMAIN_GMBUS:
 		return "GMBUS";
+	case POWER_DOMAIN_VBLANK:
+		return "VBLANK";
 	case POWER_DOMAIN_INIT:
 		return "INIT";
 	case POWER_DOMAIN_MODESET:
@@ -141,6 +153,9 @@ intel_display_power_domain_str(enum intel_display_power_domain domain)
 static void intel_power_well_enable(struct drm_i915_private *dev_priv,
 				    struct i915_power_well *power_well)
 {
+	if (power_well->supports_atomic_ctx)
+		assert_spin_locked(&power_well->lock);
+
 	DRM_DEBUG_KMS("enabling %s\n", power_well->name);
 	power_well->ops->enable(dev_priv, power_well);
 	power_well->hw_enabled = true;
@@ -149,19 +164,34 @@ static void intel_power_well_enable(struct drm_i915_private *dev_priv,
 static void intel_power_well_disable(struct drm_i915_private *dev_priv,
 				     struct i915_power_well *power_well)
 {
+	if (power_well->supports_atomic_ctx)
+		assert_spin_locked(&power_well->lock);
+
 	DRM_DEBUG_KMS("disabling %s\n", power_well->name);
 	power_well->hw_enabled = false;
 	power_well->ops->disable(dev_priv, power_well);
 }
 
-static void intel_power_well_get(struct drm_i915_private *dev_priv,
+
+static void __intel_power_well_get(struct drm_i915_private *dev_priv,
 				 struct i915_power_well *power_well)
 {
 	if (!power_well->count++)
 		intel_power_well_enable(dev_priv, power_well);
 }
 
-static void intel_power_well_put(struct drm_i915_private *dev_priv,
+
+static void intel_power_well_get(struct drm_i915_private *dev_priv,
+				 struct i915_power_well *power_well)
+{
+	unsigned long flags = 0;
+
+	power_well_lock(power_well, flags);
+	__intel_power_well_get(dev_priv, power_well);
+	power_well_unlock(power_well, flags);
+}
+
+static void __intel_power_well_put(struct drm_i915_private *dev_priv,
 				 struct i915_power_well *power_well)
 {
 	WARN(!power_well->count, "Use count on power well %s is already zero",
@@ -171,6 +201,16 @@ static void intel_power_well_put(struct drm_i915_private *dev_priv,
 		intel_power_well_disable(dev_priv, power_well);
 }
 
+static void intel_power_well_put(struct drm_i915_private *dev_priv,
+				 struct i915_power_well *power_well)
+{
+	unsigned long flags = 0;
+
+	power_well_lock(power_well, flags);
+	__intel_power_well_put(dev_priv, power_well);
+	power_well_unlock(power_well, flags);
+}
+
 /**
  * __intel_display_power_is_enabled - unlocked check for a power domain
  * @dev_priv: i915 device instance
@@ -726,6 +766,7 @@ static void gen9_dc_off_power_well_disable(struct drm_i915_private *dev_priv,
 		skl_enable_dc6(dev_priv);
 	else if (dev_priv->csr.allowed_dc_mask & DC_STATE_EN_UPTO_DC5)
 		gen9_enable_dc5(dev_priv);
+	power_well->dc_off.was_disabled = true;
 }
 
 static void i9xx_power_well_sync_hw_noop(struct drm_i915_private *dev_priv,
@@ -1443,6 +1484,63 @@ static void chv_pipe_power_well_disable(struct drm_i915_private *dev_priv,
 	chv_set_pipe_power_well(dev_priv, power_well, false);
 }
 
+#define CAN_PSR(dev_priv) (HAS_PSR(dev_priv) && dev_priv->psr.sink_support)
+void intel_display_power_vblank_get(struct drm_i915_private *dev_priv,
+				    bool *needs_restore)
+{
+	struct i915_power_domains *power_domains  = &dev_priv->power_domains;
+	struct i915_power_well *power_well;
+
+	*needs_restore = false;
+
+	if (!HAS_CSR(dev_priv))
+		return;
+
+	if (!CAN_PSR(dev_priv))
+		return;
+
+	intel_runtime_pm_get_noresume(dev_priv);
+
+	for_each_power_domain_well(dev_priv, power_well, BIT_ULL(POWER_DOMAIN_VBLANK)) {
+		unsigned long flags = 0;
+
+		power_well_lock(power_well, flags);
+		__intel_power_well_get(dev_priv, power_well);
+		*needs_restore = power_well->dc_off.was_disabled;
+		power_well->dc_off.was_disabled = false;
+		power_well_unlock(power_well, flags);
+	}
+
+	atomic_inc(&power_domains->domain_use_count[POWER_DOMAIN_VBLANK]);
+}
+
+void intel_display_power_vblank_put(struct drm_i915_private *dev_priv)
+{
+	struct i915_power_domains *power_domains = &dev_priv->power_domains;
+	struct i915_power_well *power_well;
+
+	if (!HAS_CSR(dev_priv))
+		return;
+
+	if (!CAN_PSR(dev_priv))
+		return;
+
+	WARN(atomic_dec_return(&power_domains->domain_use_count[POWER_DOMAIN_VBLANK]) < 0,
+	     "Use count on domain %s was already zero\n",
+	     intel_display_power_domain_str(POWER_DOMAIN_VBLANK));
+
+	for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(POWER_DOMAIN_VBLANK)) {
+		unsigned long flags = 0;
+
+		power_well_lock(power_well, flags);
+		__intel_power_well_put(dev_priv, power_well);
+		power_well_unlock(power_well, flags);
+	}
+
+	intel_runtime_pm_put(dev_priv);
+}
+#undef CAN_PSR
+
 static void
 __intel_display_power_get_domain(struct drm_i915_private *dev_priv,
 				 enum intel_display_power_domain domain)
@@ -1482,6 +1580,38 @@ void intel_display_power_get(struct drm_i915_private *dev_priv,
 	mutex_unlock(&power_domains->lock);
 }
 
+static bool dc_off_get_if_enabled(struct drm_i915_private *dev_priv,
+				  enum intel_display_power_domain domain)
+{
+	struct i915_power_well *power_well;
+	bool is_enabled;
+	unsigned long flags = 0;
+
+	power_well = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF);
+	if (!power_well || !(power_well->domains & domain))
+		return true;
+
+	power_well_lock(power_well, flags);
+	is_enabled = power_well->hw_enabled;
+	if (is_enabled)
+		__intel_power_well_get(dev_priv, power_well);
+	power_well_unlock(power_well, flags);
+
+	return is_enabled;
+}
+
+static void dc_off_put(struct drm_i915_private *dev_priv,
+		       enum intel_display_power_domain domain)
+{
+	struct i915_power_well *power_well;
+
+	power_well = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF);
+	if (!power_well || !(power_well->domains & domain))
+		return;
+
+	intel_power_well_put(dev_priv, power_well);
+}
+
 /**
  * intel_display_power_get_if_enabled - grab a reference for an enabled display power domain
  * @dev_priv: i915 device instance
@@ -1498,20 +1628,25 @@ bool intel_display_power_get_if_enabled(struct drm_i915_private *dev_priv,
 					enum intel_display_power_domain domain)
 {
 	struct i915_power_domains *power_domains = &dev_priv->power_domains;
-	bool is_enabled;
+	bool is_enabled = false;
+
 
 	if (!intel_runtime_pm_get_if_in_use(dev_priv))
 		return false;
 
 	mutex_lock(&power_domains->lock);
 
+	if (!dc_off_get_if_enabled(dev_priv, domain))
+		goto out;
+
 	if (__intel_display_power_is_enabled(dev_priv, domain)) {
 		__intel_display_power_get_domain(dev_priv, domain);
 		is_enabled = true;
-	} else {
-		is_enabled = false;
 	}
 
+	dc_off_put(dev_priv, domain);
+
+out:
 	mutex_unlock(&power_domains->lock);
 
 	if (!is_enabled)
@@ -1709,6 +1844,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
 	BIT_ULL(POWER_DOMAIN_GT_IRQ) |			\
 	BIT_ULL(POWER_DOMAIN_MODESET) |			\
 	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
+	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
 	BIT_ULL(POWER_DOMAIN_INIT))
 
 #define BXT_DISPLAY_POWERWELL_2_POWER_DOMAINS (		\
@@ -1732,6 +1868,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
 	BIT_ULL(POWER_DOMAIN_GT_IRQ) |			\
 	BIT_ULL(POWER_DOMAIN_MODESET) |			\
 	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
+	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
 	BIT_ULL(POWER_DOMAIN_INIT))
 #define BXT_DPIO_CMN_A_POWER_DOMAINS (			\
 	BIT_ULL(POWER_DOMAIN_PORT_DDI_A_LANES) |		\
@@ -1791,6 +1928,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
 	BIT_ULL(POWER_DOMAIN_GT_IRQ) |			\
 	BIT_ULL(POWER_DOMAIN_MODESET) |			\
 	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
+	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
 	BIT_ULL(POWER_DOMAIN_INIT))
 
 #define CNL_DISPLAY_POWERWELL_2_POWER_DOMAINS (		\
@@ -1838,6 +1976,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
 	CNL_DISPLAY_POWERWELL_2_POWER_DOMAINS |		\
 	BIT_ULL(POWER_DOMAIN_MODESET) |			\
 	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
+	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
 	BIT_ULL(POWER_DOMAIN_INIT))
 
 static const struct i915_power_well_ops i9xx_always_on_power_well_ops = {
@@ -2071,9 +2210,12 @@ bool intel_display_power_well_is_enabled(struct drm_i915_private *dev_priv,
 {
 	struct i915_power_well *power_well;
 	bool ret;
+	unsigned long flags = 0;
 
 	power_well = lookup_power_well(dev_priv, power_well_id);
+	power_well_lock(power_well, flags);
 	ret = power_well->ops->is_enabled(dev_priv, power_well);
+	power_well_unlock(power_well, flags);
 
 	return ret;
 }
@@ -2108,6 +2250,7 @@ static struct i915_power_well skl_power_wells[] = {
 		.domains = SKL_DISPLAY_DC_OFF_POWER_DOMAINS,
 		.ops = &gen9_dc_off_power_well_ops,
 		.id = SKL_DISP_PW_DC_OFF,
+		.supports_atomic_ctx = true,
 	},
 	{
 		.name = "power well 2",
@@ -2168,6 +2311,7 @@ static struct i915_power_well bxt_power_wells[] = {
 		.domains = BXT_DISPLAY_DC_OFF_POWER_DOMAINS,
 		.ops = &gen9_dc_off_power_well_ops,
 		.id = SKL_DISP_PW_DC_OFF,
+		.supports_atomic_ctx = true,
 	},
 	{
 		.name = "power well 2",
@@ -2223,6 +2367,7 @@ static struct i915_power_well glk_power_wells[] = {
 		.domains = GLK_DISPLAY_DC_OFF_POWER_DOMAINS,
 		.ops = &gen9_dc_off_power_well_ops,
 		.id = SKL_DISP_PW_DC_OFF,
+		.supports_atomic_ctx = true,
 	},
 	{
 		.name = "power well 2",
@@ -2347,6 +2492,7 @@ static struct i915_power_well cnl_power_wells[] = {
 		.domains = CNL_DISPLAY_DC_OFF_POWER_DOMAINS,
 		.ops = &gen9_dc_off_power_well_ops,
 		.id = SKL_DISP_PW_DC_OFF,
+		.supports_atomic_ctx = true,
 	},
 	{
 		.name = "power well 2",
@@ -2475,6 +2621,7 @@ static void assert_power_well_ids_unique(struct drm_i915_private *dev_priv)
 int intel_power_domains_init(struct drm_i915_private *dev_priv)
 {
 	struct i915_power_domains *power_domains = &dev_priv->power_domains;
+	struct i915_power_well *power_well;
 
 	i915_modparams.disable_power_well =
 		sanitize_disable_power_well_option(dev_priv,
@@ -2512,6 +2659,10 @@ int intel_power_domains_init(struct drm_i915_private *dev_priv)
 		set_power_wells(power_domains, i9xx_always_on_power_well);
 	}
 
+	for_each_power_well(dev_priv, power_well)
+		if (power_well->supports_atomic_ctx)
+			spin_lock_init(&power_well->lock);
+
 	assert_power_well_ids_unique(dev_priv);
 
 	return 0;
@@ -2559,9 +2710,14 @@ static void intel_power_domains_sync_hw(struct drm_i915_private *dev_priv)
 
 	mutex_lock(&power_domains->lock);
 	for_each_power_well(dev_priv, power_well) {
+		unsigned long flags = 0;
+
+		power_well_lock(power_well, flags);
+
 		power_well->ops->sync_hw(dev_priv, power_well);
 		power_well->hw_enabled = power_well->ops->is_enabled(dev_priv,
 								     power_well);
+		power_well_unlock(power_well, flags);
 	}
 	mutex_unlock(&power_domains->lock);
 }
@@ -3034,16 +3190,18 @@ void intel_power_domains_suspend(struct drm_i915_private *dev_priv)
 		bxt_display_core_uninit(dev_priv);
 }
 
-static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv)
+static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv,
+					  const int *power_well_use)
 {
 	struct i915_power_domains *power_domains = &dev_priv->power_domains;
 	struct i915_power_well *power_well;
+	int i = 0;
 
 	for_each_power_well(dev_priv, power_well) {
 		enum intel_display_power_domain domain;
 
 		DRM_DEBUG_DRIVER("%-25s %d\n",
-				 power_well->name, power_well->count);
+				 power_well->name, power_well_use[i++]);
 
 		for_each_power_domain(domain, power_well->domains)
 			DRM_DEBUG_DRIVER("  %-23s %d\n",
@@ -3067,6 +3225,7 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
 	struct i915_power_domains *power_domains = &dev_priv->power_domains;
 	struct i915_power_well *power_well;
 	bool dump_domain_info;
+	int power_well_use[dev_priv->power_domains.power_well_count];
 
 	mutex_lock(&power_domains->lock);
 
@@ -3075,6 +3234,16 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
 		enum intel_display_power_domain domain;
 		int domains_count;
 		bool enabled;
+		int well_count, i = 0;
+		unsigned long flags = 0;
+
+
+		power_well_lock(power_well, flags);
+		well_count = power_well->count;
+		enabled = power_well->ops->is_enabled(dev_priv, power_well);
+		power_well_unlock(power_well, flags);
+
+		power_well_use[i++] = well_count;
 
 		/*
 		 * Power wells not belonging to any domain (like the MISC_IO
@@ -3084,20 +3253,19 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
 		if (!power_well->domains)
 			continue;
 
-		enabled = power_well->ops->is_enabled(dev_priv, power_well);
-		if ((power_well->count || power_well->always_on) != enabled)
+
+		if ((well_count || power_well->always_on) != enabled)
 			DRM_ERROR("power well %s state mismatch (refcount %d/enabled %d)",
-				  power_well->name, power_well->count, enabled);
+				  power_well->name, well_count, enabled);
 
 		domains_count = 0;
 		for_each_power_domain(domain, power_well->domains)
 			domains_count += atomic_read(&power_domains->domain_use_count[domain]);
 
-		if (power_well->count != domains_count) {
+		if (well_count != domains_count) {
 			DRM_ERROR("power well %s refcount/domain refcount mismatch "
 				  "(refcount %d/domains refcount %d)\n",
-				  power_well->name, power_well->count,
-				  domains_count);
+				  power_well->name, well_count, domains_count);
 			dump_domain_info = true;
 		}
 	}
@@ -3106,7 +3274,7 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
 		static bool dumped;
 
 		if (!dumped) {
-			intel_power_domains_dump_info(dev_priv);
+			intel_power_domains_dump_info(dev_priv, power_well_use);
 			dumped = true;
 		}
 	}
-- 
2.11.0


* [CI v2 5/5] drm/i915: Use the vblank power domain to disallow or disable DC states.
  2017-12-18 10:24 [CI v2 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan
                   ` (2 preceding siblings ...)
  2017-12-18 10:24 ` [CI v2 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts Dhinakaran Pandiyan
@ 2017-12-18 10:24 ` Dhinakaran Pandiyan
  2017-12-18 10:45 ` ✗ Fi.CI.BAT: failure for series starting with [CI,v2,1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Patchwork
  4 siblings, 0 replies; 6+ messages in thread
From: Dhinakaran Pandiyan @ 2017-12-18 10:24 UTC (permalink / raw)
  To: intel-gfx; +Cc: Dhinakaran Pandiyan

Disable DC states before enabling vblank interrupts and, conversely,
enable DC states after disabling them. Since the frame counter may have
been reset between disabling and enabling, use drm_crtc_vblank_restore()
to compute the missed vblanks.

Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
---
 drivers/gpu/drm/i915/i915_irq.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 3517c6548e2c..88b4ceac55d0 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -2963,6 +2963,11 @@ static int gen8_enable_vblank(struct drm_device *dev, unsigned int pipe)
 {
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	unsigned long irqflags;
+	bool needs_restore = false;
+
+	intel_display_power_vblank_get(dev_priv, &needs_restore);
+	if (needs_restore)
+		drm_crtc_vblank_restore(dev, pipe);
 
 	spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
 	bdw_enable_pipe_irq(dev_priv, pipe, GEN8_PIPE_VBLANK);
@@ -3015,6 +3020,7 @@ static void gen8_disable_vblank(struct drm_device *dev, unsigned int pipe)
 	spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
 	bdw_disable_pipe_irq(dev_priv, pipe, GEN8_PIPE_VBLANK);
 	spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
+	intel_display_power_vblank_put(dev_priv);
 }
 
 static void ibx_irq_reset(struct drm_i915_private *dev_priv)
-- 
2.11.0


* ✗ Fi.CI.BAT: failure for series starting with [CI,v2,1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled.
  2017-12-18 10:24 [CI v2 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan
                   ` (3 preceding siblings ...)
  2017-12-18 10:24 ` [CI v2 5/5] drm/i915: Use the vblank power domain to disallow or disable DC states Dhinakaran Pandiyan
@ 2017-12-18 10:45 ` Patchwork
  4 siblings, 0 replies; 6+ messages in thread
From: Patchwork @ 2017-12-18 10:45 UTC (permalink / raw)
  To: Dhinakaran Pandiyan; +Cc: intel-gfx

== Series Details ==

Series: series starting with [CI,v2,1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled.
URL   : https://patchwork.freedesktop.org/series/35501/
State : failure

== Summary ==

Series 35501v1 series starting with [CI,v2,1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled.
https://patchwork.freedesktop.org/api/1.0/series/35501/revisions/1/mbox/

Test debugfs_test:
        Subgroup read_all_entries:
                pass       -> INCOMPLETE (fi-snb-2520m) fdo#103713
Test gem_exec_suspend:
        Subgroup basic-s3:
                pass       -> INCOMPLETE (fi-bxt-j4205)
Test kms_psr_sink_crc:
        Subgroup psr_basic:
                dmesg-warn -> PASS       (fi-skl-6700hq) fdo#101144

fdo#103713 https://bugs.freedesktop.org/show_bug.cgi?id=103713
fdo#101144 https://bugs.freedesktop.org/show_bug.cgi?id=101144

fi-bdw-5557u     total:288  pass:267  dwarn:0   dfail:0   fail:0   skip:21  time:432s
fi-bdw-gvtdvm    total:288  pass:264  dwarn:0   dfail:0   fail:0   skip:24  time:435s
fi-blb-e6850     total:288  pass:223  dwarn:1   dfail:0   fail:0   skip:64  time:391s
fi-bsw-n3050     total:288  pass:242  dwarn:0   dfail:0   fail:0   skip:46  time:496s
fi-bwr-2160      total:288  pass:183  dwarn:0   dfail:0   fail:0   skip:105 time:275s
fi-bxt-dsi       total:288  pass:258  dwarn:0   dfail:0   fail:0   skip:30  time:495s
fi-bxt-j4205     total:108  pass:96   dwarn:0   dfail:0   fail:0   skip:11 
fi-byt-j1900     total:288  pass:253  dwarn:0   dfail:0   fail:0   skip:35  time:479s
fi-byt-n2820     total:288  pass:249  dwarn:0   dfail:0   fail:0   skip:39  time:466s
fi-elk-e7500     total:224  pass:163  dwarn:15  dfail:0   fail:0   skip:45 
fi-gdg-551       total:7    pass:5    dwarn:1   dfail:0   fail:0   skip:0  
fi-glk-1         total:288  pass:260  dwarn:0   dfail:0   fail:0   skip:28  time:533s
fi-hsw-4770      total:288  pass:261  dwarn:0   dfail:0   fail:0   skip:27  time:403s
fi-hsw-4770r     total:288  pass:261  dwarn:0   dfail:0   fail:0   skip:27  time:412s
fi-ilk-650       total:288  pass:228  dwarn:0   dfail:0   fail:0   skip:60  time:384s
fi-ivb-3520m     total:288  pass:259  dwarn:0   dfail:0   fail:0   skip:29  time:466s
fi-ivb-3770      total:288  pass:255  dwarn:0   dfail:0   fail:0   skip:33  time:431s
fi-kbl-7500u     total:288  pass:263  dwarn:1   dfail:0   fail:0   skip:24  time:479s
fi-kbl-7560u     total:288  pass:268  dwarn:1   dfail:0   fail:0   skip:19  time:515s
fi-kbl-7567u     total:288  pass:268  dwarn:0   dfail:0   fail:0   skip:20  time:466s
fi-kbl-r         total:288  pass:260  dwarn:1   dfail:0   fail:0   skip:27  time:520s
fi-pnv-d510      total:288  pass:222  dwarn:1   dfail:0   fail:0   skip:65  time:579s
fi-skl-6260u     total:288  pass:268  dwarn:0   dfail:0   fail:0   skip:20  time:443s
fi-skl-6600u     total:288  pass:260  dwarn:1   dfail:0   fail:0   skip:27  time:535s
fi-skl-6700hq    total:288  pass:262  dwarn:0   dfail:0   fail:0   skip:26  time:557s
fi-skl-6700k2    total:288  pass:264  dwarn:0   dfail:0   fail:0   skip:24  time:506s
fi-skl-6770hq    total:288  pass:268  dwarn:0   dfail:0   fail:0   skip:20  time:503s
fi-skl-gvtdvm    total:288  pass:265  dwarn:0   dfail:0   fail:0   skip:23  time:443s
fi-snb-2520m     total:3    pass:2    dwarn:0   dfail:0   fail:0   skip:0  
fi-snb-2600      total:288  pass:248  dwarn:0   dfail:0   fail:0   skip:40  time:415s
Blacklisted hosts:
fi-cfl-s2        total:288  pass:262  dwarn:0   dfail:0   fail:0   skip:26  time:596s
fi-glk-dsi       total:20   pass:19   dwarn:0   dfail:0   fail:0   skip:0  

780afbc71018cfa8cca45143c55b2051330402b6 drm-tip: 2017y-12m-18d-09h-33m-33s UTC integration manifest
bea77cd5e63e drm/i915: Use the vblank power domain disallow or disable DC states.
43de979e3ce8 drm/i915: Introduce a non-blocking power domain for vblank interrupts
5c292379a9e9 drm/i915: Use an atomic_t array to track power domain use count.
fabd57888ef8 drm/vblank: Restoring vblank counts after device runtime PM events.
673aa493d715 drm/vblank: Do not update vblank counts if vblanks are already disabled.

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_7522/issues.html


end of thread, other threads:[~2017-12-18 10:45 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-12-18 10:24 [CI v2 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan
2017-12-18 10:24 ` [CI v2 2/5] drm/vblank: Restoring vblank counts after device runtime PM events Dhinakaran Pandiyan
2017-12-18 10:24 ` [CI v2 3/5] drm/i915: Use an atomic_t array to track power domain use count Dhinakaran Pandiyan
2017-12-18 10:24 ` [CI v2 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts Dhinakaran Pandiyan
2017-12-18 10:24 ` [CI v2 5/5] drm/i915: Use the vblank power domain disallow or disable DC states Dhinakaran Pandiyan
2017-12-18 10:45 ` ✗ Fi.CI.BAT: failure for series starting with [CI,v2,1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Patchwork
