All of lore.kernel.org
* [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled.
@ 2017-12-06 22:47 Dhinakaran Pandiyan
  2017-12-06 22:47 ` [PATCH 2/5] drm/vblank: Restoring vblank counts after device runtime PM events Dhinakaran Pandiyan
                   ` (4 more replies)
  0 siblings, 5 replies; 12+ messages in thread
From: Dhinakaran Pandiyan @ 2017-12-06 22:47 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter, Dhinakaran Pandiyan, dri-devel, Rodrigo Vivi

Updating the vblank counts requires register reads, and these reads may not
return meaningful values after vblank interrupts are disabled because the
device may enter a low power state. A further improvement would be to let
the driver save the vblank counts before entering a low power state, but
that is left for the future.

Also, disable vblank interrupts only after reading the HW counter in the
case where drm_crtc_vblank_off() is the one disabling vblanks.
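
The ordering above can be sketched as a tiny standalone C model (toy names and values, not i915 code): once interrupts are off the device may power down, so the counter has to be sampled first.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model: after vblank interrupts are disabled the device may enter
 * a low power state and a HW frame counter read returns garbage. */
static bool hw_vblank_enabled = true;
static uint32_t hw_frame_counter = 1234;

static uint32_t read_hw_counter(void)
{
	/* 0xdeadbeef stands in for a read from a powered-down device */
	return hw_vblank_enabled ? hw_frame_counter : 0xdeadbeef;
}

static uint32_t saved_count;

static void disable_and_save(void)
{
	saved_count = read_hw_counter(); /* update the count first ... */
	hw_vblank_enabled = false;       /* ... then disable, as in the patch */
}
```

With the previous ordering (disable first, then update) the saved count would be garbage; with the patch's ordering it is the last real counter value.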

Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
---
 drivers/gpu/drm/drm_vblank.c | 23 +++++++++--------------
 1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
index 32d9bcf5be7f..7eee82c06ed8 100644
--- a/drivers/gpu/drm/drm_vblank.c
+++ b/drivers/gpu/drm/drm_vblank.c
@@ -347,23 +347,14 @@ void drm_vblank_disable_and_save(struct drm_device *dev, unsigned int pipe)
 	spin_lock_irqsave(&dev->vblank_time_lock, irqflags);
 
 	/*
-	 * Only disable vblank interrupts if they're enabled. This avoids
-	 * calling the ->disable_vblank() operation in atomic context with the
-	 * hardware potentially runtime suspended.
-	 */
-	if (vblank->enabled) {
-		__disable_vblank(dev, pipe);
-		vblank->enabled = false;
-	}
-
-	/*
-	 * Always update the count and timestamp to maintain the
+	 * Update the count and timestamp to maintain the
 	 * appearance that the counter has been ticking all along until
 	 * this time. This makes the count account for the entire time
 	 * between drm_crtc_vblank_on() and drm_crtc_vblank_off().
 	 */
 	drm_update_vblank_count(dev, pipe, false);
-
+	__disable_vblank(dev, pipe);
+	vblank->enabled = false;
 	spin_unlock_irqrestore(&dev->vblank_time_lock, irqflags);
 }
 
@@ -1122,8 +1113,12 @@ void drm_crtc_vblank_off(struct drm_crtc *crtc)
 		      pipe, vblank->enabled, vblank->inmodeset);
 
 	/* Avoid redundant vblank disables without previous
-	 * drm_crtc_vblank_on(). */
-	if (drm_core_check_feature(dev, DRIVER_ATOMIC) || !vblank->inmodeset)
+	 * drm_crtc_vblank_on() and only disable them if they're enabled. This
+	 * avoids calling the ->disable_vblank() operation in atomic context
+	 * with the hardware potentially runtime suspended.
+	 */
+	if ((drm_core_check_feature(dev, DRIVER_ATOMIC) || !vblank->inmodeset) &&
+	    vblank->enabled)
 		drm_vblank_disable_and_save(dev, pipe);
 
 	wake_up(&vblank->queue);
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 2/5] drm/vblank: Restoring vblank counts after device runtime PM events.
  2017-12-06 22:47 [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan
@ 2017-12-06 22:47 ` Dhinakaran Pandiyan
  2017-12-06 22:47 ` [PATCH 3/5] drm/i915: Use an atomic_t array to track power domain use count Dhinakaran Pandiyan
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 12+ messages in thread
From: Dhinakaran Pandiyan @ 2017-12-06 22:47 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter, Dhinakaran Pandiyan, dri-devel, Rodrigo Vivi

The HW frame counter can get reset when the device enters low power
states, and this messes up any following vblank count updates. So, compute
the vblank interrupts missed during that low power state from the
timestamps. This is similar to drm_crtc_vblank_on() except that it doesn't
enable vblank interrupts, because this function is expected to be called
from the driver's ->enable_vblank() vfunc.
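
The timestamp math used here can be modeled in a few lines of standalone C (a sketch with illustrative names; DIV_ROUND_CLOSEST_ULL is reimplemented for non-negative operands):

```c
#include <assert.h>
#include <stdint.h>

/* Round-to-nearest division, as the kernel's DIV_ROUND_CLOSEST_ULL()
 * does for unsigned operands. */
static uint64_t div_round_closest_ull(uint64_t n, uint64_t d)
{
	return (n + d / 2) / d;
}

/* Vblanks missed across a low power state: time elapsed since the last
 * stored vblank timestamp, divided by the frame duration. */
static uint32_t missed_vblanks(uint64_t now_ns, uint64_t last_ns,
			       uint64_t framedur_ns)
{
	uint32_t diff = 1; /* same fallback the patch uses */

	if (framedur_ns)
		diff = (uint32_t)div_round_closest_ull(now_ns - last_ns,
						       framedur_ns);
	return diff;
}
```

At roughly 60 Hz (framedur_ns of about 16666667), spending half a second in a low power state accounts for 30 missed vblanks.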

Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
---
 drivers/gpu/drm/drm_vblank.c | 30 ++++++++++++++++++++++++++++++
 include/drm/drm_vblank.h     |  1 +
 2 files changed, 31 insertions(+)

diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
index 7eee82c06ed8..69d537cea149 100644
--- a/drivers/gpu/drm/drm_vblank.c
+++ b/drivers/gpu/drm/drm_vblank.c
@@ -1230,6 +1230,36 @@ void drm_crtc_vblank_on(struct drm_crtc *crtc)
 }
 EXPORT_SYMBOL(drm_crtc_vblank_on);
 
+void drm_crtc_vblank_restore(struct drm_device *dev, unsigned int pipe)
+{
+	ktime_t t_vblank;
+	struct drm_vblank_crtc *vblank;
+	int framedur_ns;
+	u64 diff_ns;
+	u32 cur_vblank, diff = 1;
+	int count = DRM_TIMESTAMP_MAXRETRIES;
+
+	if (WARN_ON(pipe >= dev->num_crtcs))
+		return;
+
+	vblank = &dev->vblank[pipe];
+	framedur_ns = vblank->framedur_ns;
+
+	do {
+		cur_vblank = __get_vblank_counter(dev, pipe);
+		drm_get_last_vbltimestamp(dev, pipe, &t_vblank, false);
+	} while (cur_vblank != __get_vblank_counter(dev, pipe) && --count > 0);
+
+	diff_ns = ktime_to_ns(ktime_sub(t_vblank, vblank->time));
+	if (framedur_ns)
+		diff = DIV_ROUND_CLOSEST_ULL(diff_ns, framedur_ns);
+
+	DRM_DEBUG_VBL("computing missed vblanks %lld/%d=%d after HW counter reset hw_diff=%d\n",
+		      diff_ns, framedur_ns, diff, cur_vblank - vblank->last);
+	store_vblank(dev, pipe, diff, t_vblank, cur_vblank);
+}
+EXPORT_SYMBOL(drm_crtc_vblank_restore);
+
 static void drm_legacy_vblank_pre_modeset(struct drm_device *dev,
 					  unsigned int pipe)
 {
diff --git a/include/drm/drm_vblank.h b/include/drm/drm_vblank.h
index 848b463a0af5..aafcbef91bd7 100644
--- a/include/drm/drm_vblank.h
+++ b/include/drm/drm_vblank.h
@@ -180,6 +180,7 @@ void drm_crtc_vblank_off(struct drm_crtc *crtc);
 void drm_crtc_vblank_reset(struct drm_crtc *crtc);
 void drm_crtc_vblank_on(struct drm_crtc *crtc);
 u32 drm_crtc_accurate_vblank_count(struct drm_crtc *crtc);
+void drm_crtc_vblank_restore(struct drm_device *dev, unsigned int pipe);
 
 bool drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev,
 					   unsigned int pipe, int *max_error,
-- 
2.11.0

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


* [PATCH 3/5] drm/i915: Use an atomic_t array to track power domain use count.
  2017-12-06 22:47 [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan
  2017-12-06 22:47 ` [PATCH 2/5] drm/vblank: Restoring vblank counts after device runtime PM events Dhinakaran Pandiyan
@ 2017-12-06 22:47 ` Dhinakaran Pandiyan
  2017-12-06 22:47 ` [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts Dhinakaran Pandiyan
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 12+ messages in thread
From: Dhinakaran Pandiyan @ 2017-12-06 22:47 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter, Dhinakaran Pandiyan, dri-devel, Rodrigo Vivi

Convert the power_domains->domain_use_count array, which tracks per-domain
use counts, to atomic_t. This makes it possible to read and write the use
counts outside of the power domain mutex.
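
The semantics the series relies on can be sketched in userspace with C11 atomics standing in for the kernel's atomic_t (illustrative names, not the i915 code):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Lock-free use counter: gets and puts need no mutex, and the
 * underflow check uses the value returned by the decrement itself. */
static atomic_int domain_use_count;

static void domain_get(void)
{
	atomic_fetch_add(&domain_use_count, 1); /* kernel: atomic_inc() */
}

/* Returns true on underflow; the kernel's atomic_dec_return(v)
 * corresponds to atomic_fetch_sub(v, 1) - 1 here. */
static bool domain_put(void)
{
	return atomic_fetch_sub(&domain_use_count, 1) - 1 < 0;
}
```

Checking the value returned by the decrement, rather than re-reading the counter, is what replaces the "read under the mutex, then decrement" pair in the old code.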

Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
---
 drivers/gpu/drm/i915/i915_debugfs.c     |  2 +-
 drivers/gpu/drm/i915/i915_drv.h         |  2 +-
 drivers/gpu/drm/i915/intel_runtime_pm.c | 11 +++++------
 3 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 28294470ae31..2a4ed54688d7 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -2805,7 +2805,7 @@ static int i915_power_domain_info(struct seq_file *m, void *unused)
 		for_each_power_domain(power_domain, power_well->domains)
 			seq_printf(m, "  %-23s %d\n",
 				 intel_display_power_domain_str(power_domain),
-				 power_domains->domain_use_count[power_domain]);
+				 atomic_read(&power_domains->domain_use_count[power_domain]));
 	}
 
 	mutex_unlock(&power_domains->lock);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 594fd14e66c5..18d42885205b 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1489,7 +1489,7 @@ struct i915_power_domains {
 	int power_well_count;
 
 	struct mutex lock;
-	int domain_use_count[POWER_DOMAIN_NUM];
+	atomic_t domain_use_count[POWER_DOMAIN_NUM];
 	struct i915_power_well *power_wells;
 };
 
diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
index 8315499452dc..f88f2c070c5f 100644
--- a/drivers/gpu/drm/i915/intel_runtime_pm.c
+++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
@@ -1451,7 +1451,7 @@ __intel_display_power_get_domain(struct drm_i915_private *dev_priv,
 	for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain))
 		intel_power_well_get(dev_priv, power_well);
 
-	power_domains->domain_use_count[domain]++;
+	atomic_inc(&power_domains->domain_use_count[domain]);
 }
 
 /**
@@ -1537,10 +1537,9 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
 
 	mutex_lock(&power_domains->lock);
 
-	WARN(!power_domains->domain_use_count[domain],
-	     "Use count on domain %s is already zero\n",
+	WARN(atomic_dec_return(&power_domains->domain_use_count[domain]) < 0,
+	     "Use count on domain %s was already zero\n",
 	     intel_display_power_domain_str(domain));
-	power_domains->domain_use_count[domain]--;
 
 	for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain))
 		intel_power_well_put(dev_priv, power_well);
@@ -3044,7 +3043,7 @@ static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv)
 		for_each_power_domain(domain, power_well->domains)
 			DRM_DEBUG_DRIVER("  %-23s %d\n",
 					 intel_display_power_domain_str(domain),
-					 power_domains->domain_use_count[domain]);
+					 atomic_read(&power_domains->domain_use_count[domain]));
 	}
 }
 
@@ -3087,7 +3086,7 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
 
 		domains_count = 0;
 		for_each_power_domain(domain, power_well->domains)
-			domains_count += power_domains->domain_use_count[domain];
+			domains_count += atomic_read(&power_domains->domain_use_count[domain]);
 
 		if (power_well->count != domains_count) {
 			DRM_ERROR("power well %s refcount/domain refcount mismatch "
-- 
2.11.0



* [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts
  2017-12-06 22:47 [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan
  2017-12-06 22:47 ` [PATCH 2/5] drm/vblank: Restoring vblank counts after device runtime PM events Dhinakaran Pandiyan
  2017-12-06 22:47 ` [PATCH 3/5] drm/i915: Use an atomic_t array to track power domain use count Dhinakaran Pandiyan
@ 2017-12-06 22:47 ` Dhinakaran Pandiyan
  2017-12-06 23:54   ` Rodrigo Vivi
  2017-12-06 22:47 ` [PATCH 5/5] drm/i915: Use the vblank power domain to disallow or disable DC states Dhinakaran Pandiyan
  2017-12-06 23:16 ` ✗ Fi.CI.BAT: failure for series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Patchwork
  4 siblings, 1 reply; 12+ messages in thread
From: Dhinakaran Pandiyan @ 2017-12-06 22:47 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter, Dhinakaran Pandiyan, dri-devel, Rodrigo Vivi

When DC states are enabled and PSR is active, the hardware enters DC5/DC6
states, resulting in frame counter resets. The frame counter resets mess
up the vblank counting logic. So, in order to disable DC states when
vblank interrupts are required and to disallow DC states when vblank
interrupts are already enabled, introduce a new power domain. Since this
power domain reference needs to be acquired and released in atomic context,
the corresponding _get() and _put() methods skip the power_domain mutex.
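
The essential handoff (take a reference and atomically consume a "DC states were entered, so the frame counter may have been reset" flag) can be sketched in userspace C. The kernel patch guards the flag with a spinlock so it can be touched from atomic context; in this sketch an atomic exchange models the locked read-and-clear, and all names are illustrative:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int dc_off_refcount;      /* keeps DC states disallowed */
static atomic_bool dc_off_was_disabled; /* counter may have been reset */

/* Called when the DC_OFF well is disabled, i.e. DC5/DC6 was allowed. */
static void dc_states_entered(void)
{
	atomic_store(&dc_off_was_disabled, true);
}

/* Non-blocking get, callable from the ->enable_vblank() path: take a
 * reference and consume the event; returns whether the caller must
 * restore the vblank count from timestamps. */
static bool vblank_power_get(void)
{
	atomic_fetch_add(&dc_off_refcount, 1);
	return atomic_exchange(&dc_off_was_disabled, false);
}

static void vblank_power_put(void)
{
	atomic_fetch_sub(&dc_off_refcount, 1);
}
```

Only the first get after a DC entry reports that a restore is needed; subsequent gets see the flag already cleared.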

Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h         |   5 ++
 drivers/gpu/drm/i915/intel_drv.h        |   3 +-
 drivers/gpu/drm/i915/intel_runtime_pm.c | 125 +++++++++++++++++++++++++++++++-
 3 files changed, 128 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 18d42885205b..ba9107ec1ed1 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -397,6 +397,7 @@ enum intel_display_power_domain {
 	POWER_DOMAIN_AUX_C,
 	POWER_DOMAIN_AUX_D,
 	POWER_DOMAIN_GMBUS,
+	POWER_DOMAIN_VBLANK,
 	POWER_DOMAIN_MODESET,
 	POWER_DOMAIN_INIT,
 
@@ -1475,6 +1476,10 @@ struct i915_power_well {
 			bool has_vga:1;
 			bool has_fuses:1;
 		} hsw;
+		struct {
+			spinlock_t lock;
+			bool was_disabled;
+		} dc_off;
 	};
 	const struct i915_power_well_ops *ops;
 };
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 30f791f89d64..93ca503f18bb 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -1865,7 +1865,8 @@ void chv_phy_powergate_lanes(struct intel_encoder *encoder,
 			     bool override, unsigned int mask);
 bool chv_phy_powergate_ch(struct drm_i915_private *dev_priv, enum dpio_phy phy,
 			  enum dpio_channel ch, bool override);
-
+bool intel_display_power_vblank_get(struct drm_i915_private *dev_priv);
+void intel_display_power_vblank_put(struct drm_i915_private *dev_priv);
 
 /* intel_pm.c */
 void intel_init_clock_gating(struct drm_i915_private *dev_priv);
diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
index f88f2c070c5f..f1807bd74242 100644
--- a/drivers/gpu/drm/i915/intel_runtime_pm.c
+++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
@@ -126,6 +126,8 @@ intel_display_power_domain_str(enum intel_display_power_domain domain)
 		return "AUX_D";
 	case POWER_DOMAIN_GMBUS:
 		return "GMBUS";
+	case POWER_DOMAIN_VBLANK:
+		return "VBLANK";
 	case POWER_DOMAIN_INIT:
 		return "INIT";
 	case POWER_DOMAIN_MODESET:
@@ -196,10 +198,17 @@ bool __intel_display_power_is_enabled(struct drm_i915_private *dev_priv,
 		if (power_well->always_on)
 			continue;
 
-		if (!power_well->hw_enabled) {
+		if (power_well->id == SKL_DISP_PW_DC_OFF)
+			spin_lock(&power_well->dc_off.lock);
+
+		if (!power_well->hw_enabled)
 			is_enabled = false;
+
+		if (power_well->id == SKL_DISP_PW_DC_OFF)
+			spin_unlock(&power_well->dc_off.lock);
+
+		if (!is_enabled)
 			break;
-		}
 	}
 
 	return is_enabled;
@@ -724,6 +733,7 @@ static void gen9_dc_off_power_well_disable(struct drm_i915_private *dev_priv,
 		skl_enable_dc6(dev_priv);
 	else if (dev_priv->csr.allowed_dc_mask & DC_STATE_EN_UPTO_DC5)
 		gen9_enable_dc5(dev_priv);
+	power_well->dc_off.was_disabled = true;
 }
 
 static void i9xx_power_well_sync_hw_noop(struct drm_i915_private *dev_priv,
@@ -1441,6 +1451,77 @@ static void chv_pipe_power_well_disable(struct drm_i915_private *dev_priv,
 	chv_set_pipe_power_well(dev_priv, power_well, false);
 }
 
+/**
+ * intel_display_power_vblank_get - acquire a VBLANK power domain reference atomically
+ * @dev_priv: i915 device instance
+ *
+ * This function gets a POWER_DOMAIN_VBLANK reference without blocking and
+ * returns true if the DC_OFF power well was disabled since the last time
+ * this function was called.
+ */
+bool intel_display_power_vblank_get(struct drm_i915_private *dev_priv)
+{
+	struct i915_power_domains *power_domains = &dev_priv->power_domains;
+	struct i915_power_well *power_well;
+	bool needs_restore = false;
+
+	if (!HAS_CSR(dev_priv) || !dev_priv->psr.source_ok)
+		return false;
+
+	/* The corresponding CRTC should be active by the time driver turns on
+	 * vblank interrupts, which in turn means the enabled pipe power domain
+	 * would have acquired the device runtime pm reference.
+	 */
+	intel_runtime_pm_get_if_in_use(dev_priv);
+
+	for_each_power_domain_well(dev_priv, power_well, BIT_ULL(POWER_DOMAIN_VBLANK)) {
+		if (power_well->id == SKL_DISP_PW_DC_OFF)
+			spin_lock(&power_well->dc_off.lock);
+
+		intel_power_well_get(dev_priv, power_well);
+
+		if (power_well->id == SKL_DISP_PW_DC_OFF) {
+			needs_restore = power_well->dc_off.was_disabled;
+			power_well->dc_off.was_disabled = false;
+			spin_unlock(&power_well->dc_off.lock);
+		}
+	}
+	atomic_inc(&power_domains->domain_use_count[POWER_DOMAIN_VBLANK]);
+
+	return needs_restore;
+}
+
+/**
+ * intel_display_power_vblank_put - drop a VBLANK power domain reference atomically
+ * @dev_priv: i915 device instance
+ *
+ * A non-blocking version of intel_display_power_put(), specifically to control
+ * the DC_OFF power well in atomic contexts.
+ */
+void intel_display_power_vblank_put(struct drm_i915_private *dev_priv)
+{
+	struct i915_power_domains *power_domains = &dev_priv->power_domains;
+	struct i915_power_well *power_well;
+
+	if (!HAS_CSR(dev_priv) || !dev_priv->psr.source_ok)
+		return;
+
+	WARN(atomic_dec_return(&power_domains->domain_use_count[POWER_DOMAIN_VBLANK]) < 0,
+	     "Use count on domain %s was already zero\n",
+	     intel_display_power_domain_str(POWER_DOMAIN_VBLANK));
+
+	for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(POWER_DOMAIN_VBLANK)) {
+		if (power_well->id == SKL_DISP_PW_DC_OFF)
+			spin_lock(&power_well->dc_off.lock);
+
+		intel_power_well_put(dev_priv, power_well);
+
+		if (power_well->id == SKL_DISP_PW_DC_OFF)
+			spin_unlock(&power_well->dc_off.lock);
+	}
+
+	intel_runtime_pm_put(dev_priv);
+}
+
 static void
 __intel_display_power_get_domain(struct drm_i915_private *dev_priv,
 				 enum intel_display_power_domain domain)
@@ -1448,9 +1529,16 @@ __intel_display_power_get_domain(struct drm_i915_private *dev_priv,
 	struct i915_power_domains *power_domains = &dev_priv->power_domains;
 	struct i915_power_well *power_well;
 
-	for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain))
+	for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain)) {
+		if (power_well->id == SKL_DISP_PW_DC_OFF)
+			spin_lock(&power_well->dc_off.lock);
+
 		intel_power_well_get(dev_priv, power_well);
 
+		if (power_well->id == SKL_DISP_PW_DC_OFF)
+			spin_unlock(&power_well->dc_off.lock);
+	}
+
 	atomic_inc(&power_domains->domain_use_count[domain]);
 }
 
@@ -1541,9 +1629,16 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
 	     "Use count on domain %s was already zero\n",
 	     intel_display_power_domain_str(domain));
 
-	for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain))
+	for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain)) {
+		if (power_well->id == SKL_DISP_PW_DC_OFF)
+			spin_lock(&power_well->dc_off.lock);
+
 		intel_power_well_put(dev_priv, power_well);
 
+		if (power_well->id == SKL_DISP_PW_DC_OFF)
+			spin_unlock(&power_well->dc_off.lock);
+	}
+
 	mutex_unlock(&power_domains->lock);
 
 	intel_runtime_pm_put(dev_priv);
@@ -1706,6 +1801,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
 	SKL_DISPLAY_POWERWELL_2_POWER_DOMAINS |		\
 	BIT_ULL(POWER_DOMAIN_MODESET) |			\
 	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
+	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
 	BIT_ULL(POWER_DOMAIN_INIT))
 
 #define BXT_DISPLAY_POWERWELL_2_POWER_DOMAINS (		\
@@ -2342,6 +2438,9 @@ static struct i915_power_well cnl_power_wells[] = {
 		.domains = CNL_DISPLAY_DC_OFF_POWER_DOMAINS,
 		.ops = &gen9_dc_off_power_well_ops,
 		.id = SKL_DISP_PW_DC_OFF,
+		{
+			.dc_off.was_disabled = false,
+		},
 	},
 	{
 		.name = "power well 2",
@@ -2470,6 +2569,7 @@ static void assert_power_well_ids_unique(struct drm_i915_private *dev_priv)
 int intel_power_domains_init(struct drm_i915_private *dev_priv)
 {
 	struct i915_power_domains *power_domains = &dev_priv->power_domains;
+	struct i915_power_well *dc_off_pw;
 
 	i915_modparams.disable_power_well =
 		sanitize_disable_power_well_option(dev_priv,
@@ -2507,6 +2607,10 @@ int intel_power_domains_init(struct drm_i915_private *dev_priv)
 		set_power_wells(power_domains, i9xx_always_on_power_well);
 	}
 
+	dc_off_pw = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF);
+	if (dc_off_pw)
+		spin_lock_init(&dc_off_pw->dc_off.lock);
+
 	assert_power_well_ids_unique(dev_priv);
 
 	return 0;
@@ -2555,8 +2659,14 @@ static void intel_power_domains_sync_hw(struct drm_i915_private *dev_priv)
 	mutex_lock(&power_domains->lock);
 	for_each_power_well(dev_priv, power_well) {
 		power_well->ops->sync_hw(dev_priv, power_well);
+
+		if (power_well->id == SKL_DISP_PW_DC_OFF)
+			spin_lock(&power_well->dc_off.lock);
+
 		power_well->hw_enabled = power_well->ops->is_enabled(dev_priv,
 								     power_well);
+		if (power_well->id == SKL_DISP_PW_DC_OFF)
+			spin_unlock(&power_well->dc_off.lock);
 	}
 	mutex_unlock(&power_domains->lock);
 }
@@ -3079,6 +3189,13 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
 		if (!power_well->domains)
 			continue;
 
+		/*
+		 * Reading SKL_DISP_PW_DC_OFF power_well->count without its
+		 * private spinlock should be safe here as power_well->count
+		 * gets modified only in power_well_get() and power_well_put()
+		 * and they are not called until drm_crtc_vblank_on().
+		 */
+
 		enabled = power_well->ops->is_enabled(dev_priv, power_well);
 		if ((power_well->count || power_well->always_on) != enabled)
 			DRM_ERROR("power well %s state mismatch (refcount %d/enabled %d)",
-- 
2.11.0



* [PATCH 5/5] drm/i915: Use the vblank power domain to disallow or disable DC states.
  2017-12-06 22:47 [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan
                   ` (2 preceding siblings ...)
  2017-12-06 22:47 ` [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts Dhinakaran Pandiyan
@ 2017-12-06 22:47 ` Dhinakaran Pandiyan
  2017-12-06 23:16 ` ✗ Fi.CI.BAT: failure for series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Patchwork
  4 siblings, 0 replies; 12+ messages in thread
From: Dhinakaran Pandiyan @ 2017-12-06 22:47 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter, Dhinakaran Pandiyan, dri-devel, Rodrigo Vivi

Disable DC states before enabling vblank interrupts, and conversely enable
DC states after disabling them. Since the frame counter may have been
reset between disabling and enabling, use drm_crtc_vblank_restore() to
compute the missed vblanks.

Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
---
 drivers/gpu/drm/i915/i915_irq.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 7cac07db89b9..c595b934e2dc 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -2964,6 +2964,9 @@ static int gen8_enable_vblank(struct drm_device *dev, unsigned int pipe)
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	unsigned long irqflags;
 
+	if (intel_display_power_vblank_get(dev_priv))
+		drm_crtc_vblank_restore(dev, pipe);
+
 	spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
 	bdw_enable_pipe_irq(dev_priv, pipe, GEN8_PIPE_VBLANK);
 	spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
@@ -3015,6 +3018,7 @@ static void gen8_disable_vblank(struct drm_device *dev, unsigned int pipe)
 	spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
 	bdw_disable_pipe_irq(dev_priv, pipe, GEN8_PIPE_VBLANK);
 	spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
+	intel_display_power_vblank_put(dev_priv);
 }
 
 static void ibx_irq_reset(struct drm_i915_private *dev_priv)
-- 
2.11.0



* ✗ Fi.CI.BAT: failure for series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled.
  2017-12-06 22:47 [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan
                   ` (3 preceding siblings ...)
  2017-12-06 22:47 ` [PATCH 5/5] drm/i915: Use the vblank power domain to disallow or disable DC states Dhinakaran Pandiyan
@ 2017-12-06 23:16 ` Patchwork
  2017-12-07 18:46   ` Pandiyan, Dhinakaran
                     ` (2 more replies)
  4 siblings, 3 replies; 12+ messages in thread
From: Patchwork @ 2017-12-06 23:16 UTC (permalink / raw)
  To: Dhinakaran Pandiyan; +Cc: intel-gfx

== Series Details ==

Series: series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled.
URL   : https://patchwork.freedesktop.org/series/34996/
State : failure

== Summary ==

Series 34996v1 series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled.
https://patchwork.freedesktop.org/api/1.0/series/34996/revisions/1/mbox/

Test debugfs_test:
        Subgroup read_all_entries:
                dmesg-fail -> PASS       (fi-elk-e7500) fdo#103989 +1
Test gem_exec_reloc:
        Subgroup basic-cpu-read-active:
                pass       -> FAIL       (fi-gdg-551) fdo#102582 +3
Test kms_pipe_crc_basic:
        Subgroup read-crc-pipe-a:
                pass       -> FAIL       (fi-skl-6700k)
        Subgroup suspend-read-crc-pipe-b:
                pass       -> INCOMPLETE (fi-snb-2520m) fdo#103713
Test kms_psr_sink_crc:
        Subgroup psr_basic:
                pass       -> DMESG-WARN (fi-skl-6600u)
                pass       -> DMESG-WARN (fi-skl-6700hq)
                pass       -> DMESG-WARN (fi-kbl-7560u)
                pass       -> DMESG-WARN (fi-kbl-r)

fdo#103989 https://bugs.freedesktop.org/show_bug.cgi?id=103989
fdo#102582 https://bugs.freedesktop.org/show_bug.cgi?id=102582
fdo#103713 https://bugs.freedesktop.org/show_bug.cgi?id=103713

fi-bdw-5557u     total:288  pass:267  dwarn:0   dfail:0   fail:0   skip:21  time:438s
fi-blb-e6850     total:288  pass:223  dwarn:1   dfail:0   fail:0   skip:64  time:385s
fi-bsw-n3050     total:288  pass:242  dwarn:0   dfail:0   fail:0   skip:46  time:522s
fi-bwr-2160      total:288  pass:183  dwarn:0   dfail:0   fail:0   skip:105 time:280s
fi-bxt-dsi       total:288  pass:258  dwarn:0   dfail:0   fail:0   skip:30  time:505s
fi-bxt-j4205     total:288  pass:259  dwarn:0   dfail:0   fail:0   skip:29  time:511s
fi-byt-j1900     total:288  pass:253  dwarn:0   dfail:0   fail:0   skip:35  time:491s
fi-byt-n2820     total:288  pass:249  dwarn:0   dfail:0   fail:0   skip:39  time:472s
fi-elk-e7500     total:224  pass:163  dwarn:15  dfail:0   fail:0   skip:45 
fi-gdg-551       total:288  pass:174  dwarn:1   dfail:0   fail:5   skip:108 time:270s
fi-glk-1         total:288  pass:260  dwarn:0   dfail:0   fail:0   skip:28  time:542s
fi-hsw-4770      total:288  pass:261  dwarn:0   dfail:0   fail:0   skip:27  time:385s
fi-hsw-4770r     total:288  pass:224  dwarn:0   dfail:0   fail:0   skip:64  time:260s
fi-ivb-3520m     total:288  pass:259  dwarn:0   dfail:0   fail:0   skip:29  time:484s
fi-ivb-3770      total:288  pass:259  dwarn:0   dfail:0   fail:0   skip:29  time:444s
fi-kbl-7560u     total:288  pass:268  dwarn:1   dfail:0   fail:0   skip:19  time:526s
fi-kbl-7567u     total:288  pass:268  dwarn:0   dfail:0   fail:0   skip:20  time:478s
fi-kbl-r         total:288  pass:260  dwarn:1   dfail:0   fail:0   skip:27  time:531s
fi-pnv-d510      total:288  pass:222  dwarn:1   dfail:0   fail:0   skip:65  time:595s
fi-skl-6260u     total:288  pass:268  dwarn:0   dfail:0   fail:0   skip:20  time:456s
fi-skl-6600u     total:288  pass:260  dwarn:1   dfail:0   fail:0   skip:27  time:547s
fi-skl-6700hq    total:288  pass:261  dwarn:1   dfail:0   fail:0   skip:26  time:561s
fi-skl-6700k     total:288  pass:263  dwarn:0   dfail:0   fail:1   skip:24  time:519s
fi-skl-6770hq    total:288  pass:268  dwarn:0   dfail:0   fail:0   skip:20  time:496s
fi-snb-2520m     total:245  pass:211  dwarn:0   dfail:0   fail:0   skip:33 
fi-snb-2600      total:288  pass:248  dwarn:0   dfail:0   fail:0   skip:40  time:423s
Blacklisted hosts:
fi-cnl-y         total:249  pass:224  dwarn:0   dfail:0   fail:0   skip:24 
fi-glk-dsi       total:288  pass:186  dwarn:1   dfail:4   fail:0   skip:97  time:397s

95f37eb3ebfd65f500e0e80c6788df200f4c2f99 drm-tip: 2017y-12m-06d-21h-01m-04s UTC integration manifest
2864dbfcc343 drm/i915: Use the vblank power domain to disallow or disable DC states.
b0c13977ae82 drm/i915: Introduce a non-blocking power domain for vblank interrupts
c32242c2a8b1 drm/i915: Use an atomic_t array to track power domain use count.
6de634ba83cf drm/vblank: Restoring vblank counts after device runtime PM events.
43170fd34edb drm/vblank: Do not update vblank counts if vblanks are already disabled.

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_7432/


* Re: [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts
  2017-12-06 22:47 ` [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts Dhinakaran Pandiyan
@ 2017-12-06 23:54   ` Rodrigo Vivi
  2017-12-07 19:01     ` Pandiyan, Dhinakaran
  0 siblings, 1 reply; 12+ messages in thread
From: Rodrigo Vivi @ 2017-12-06 23:54 UTC (permalink / raw)
  To: Dhinakaran Pandiyan; +Cc: Daniel Vetter, intel-gfx, dri-devel

On Wed, Dec 06, 2017 at 10:47:40PM +0000, Dhinakaran Pandiyan wrote:
> When DC states are enabled and PSR is active, the hardware enters DC5/DC6
> states, resulting in frame counter resets. The frame counter resets mess
> up the vblank counting logic. So, in order to disable DC states when
> vblank interrupts are required and to disallow DC states when vblank
> interrupts are already enabled, introduce a new power domain. Since this
> power domain reference needs to be acquired and released in atomic context,
> the corresponding _get() and _put() methods skip the power_domain mutex.

\o/

> 
> Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> ---
>  drivers/gpu/drm/i915/i915_drv.h         |   5 ++
>  drivers/gpu/drm/i915/intel_drv.h        |   3 +-
>  drivers/gpu/drm/i915/intel_runtime_pm.c | 125 +++++++++++++++++++++++++++++++-
>  3 files changed, 128 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 18d42885205b..ba9107ec1ed1 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -397,6 +397,7 @@ enum intel_display_power_domain {
>  	POWER_DOMAIN_AUX_C,
>  	POWER_DOMAIN_AUX_D,
>  	POWER_DOMAIN_GMBUS,
> +	POWER_DOMAIN_VBLANK,
>  	POWER_DOMAIN_MODESET,
>  	POWER_DOMAIN_INIT,
>  
> @@ -1475,6 +1476,10 @@ struct i915_power_well {
>  			bool has_vga:1;
>  			bool has_fuses:1;
>  		} hsw;
> +		struct {
> +			spinlock_t lock;
> +			bool was_disabled;
> +		} dc_off;

what about a more generic name here?
something like

+		struct {
+			spinlock_t lock;
+			bool was_disabled;
+		} atomic;
+		bool is_atomic; // or needs_atomic

so...

>  	};
>  	const struct i915_power_well_ops *ops;
>  };
> diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> index 30f791f89d64..93ca503f18bb 100644
> --- a/drivers/gpu/drm/i915/intel_drv.h
> +++ b/drivers/gpu/drm/i915/intel_drv.h
> @@ -1865,7 +1865,8 @@ void chv_phy_powergate_lanes(struct intel_encoder *encoder,
>  			     bool override, unsigned int mask);
>  bool chv_phy_powergate_ch(struct drm_i915_private *dev_priv, enum dpio_phy phy,
>  			  enum dpio_channel ch, bool override);
> -
> +bool intel_display_power_vblank_get(struct drm_i915_private *dev_priv);
> +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv);
>  
>  /* intel_pm.c */
>  void intel_init_clock_gating(struct drm_i915_private *dev_priv);
> diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
> index f88f2c070c5f..f1807bd74242 100644
> --- a/drivers/gpu/drm/i915/intel_runtime_pm.c
> +++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
> @@ -126,6 +126,8 @@ intel_display_power_domain_str(enum intel_display_power_domain domain)
>  		return "AUX_D";
>  	case POWER_DOMAIN_GMBUS:
>  		return "GMBUS";
> +	case POWER_DOMAIN_VBLANK:
> +		return "VBLANK";
>  	case POWER_DOMAIN_INIT:
>  		return "INIT";
>  	case POWER_DOMAIN_MODESET:
> @@ -196,10 +198,17 @@ bool __intel_display_power_is_enabled(struct drm_i915_private *dev_priv,
>  		if (power_well->always_on)
>  			continue;
>  
> -		if (!power_well->hw_enabled) {
> +		if (power_well->id == SKL_DISP_PW_DC_OFF)
> +			spin_lock(&power_well->dc_off.lock);

... instead of a specific pw check here you would have

+	if (power_well->is_atomic)
+		spin_lock(&power_well->atomic.lock)

> +
> +		if (!power_well->hw_enabled)
>  			is_enabled = false;
> +
> +		if (power_well->id == SKL_DISP_PW_DC_OFF)
> +			spin_unlock(&power_well->dc_off.lock);
> +
> +		if (!is_enabled)
>  			break;
> -		}
>  	}
>  
>  	return is_enabled;
> @@ -724,6 +733,7 @@ static void gen9_dc_off_power_well_disable(struct drm_i915_private *dev_priv,
>  		skl_enable_dc6(dev_priv);
>  	else if (dev_priv->csr.allowed_dc_mask & DC_STATE_EN_UPTO_DC5)
>  		gen9_enable_dc5(dev_priv);
> +	power_well->dc_off.was_disabled = true;
>  }
>  
>  static void i9xx_power_well_sync_hw_noop(struct drm_i915_private *dev_priv,
> @@ -1441,6 +1451,77 @@ static void chv_pipe_power_well_disable(struct drm_i915_private *dev_priv,
>  	chv_set_pipe_power_well(dev_priv, power_well, false);
>  }
>  
> +/**
> + * intel_display_power_vblank_get - acquire a VBLANK power domain reference atomically
> + * @dev_priv: i915 device instance
> + *
> + * This function gets a POWER_DOMAIN_VBLANK reference without blocking and
> + * returns true if the DC_OFF power well was disabled since this function was
> + * called the last time.
> + */
> +bool intel_display_power_vblank_get(struct drm_i915_private *dev_priv)
> +{
> +	struct i915_power_domains *power_domains = &dev_priv->power_domains;
> +	struct i915_power_well *power_well;
> +	bool needs_restore = false;
> +
> +	if (!HAS_CSR(dev_priv) || !dev_priv->psr.source_ok)
> +		return false;
> +
> +	/* The corresponding CRTC should be active by the time driver turns on
> +	 * vblank interrupts, which in turn means the enabled pipe power domain
> +	 * would have acquired the device runtime pm reference.
> +	 */
> +	intel_runtime_pm_get_if_in_use(dev_priv);
> +
> +	for_each_power_domain_well(dev_priv,  power_well, BIT_ULL(POWER_DOMAIN_VBLANK)) {
> +		if (power_well->id == SKL_DISP_PW_DC_OFF)
> +			spin_lock(&power_well->dc_off.lock);
> +
> +		intel_power_well_get(dev_priv, power_well);
> +
> +		if (power_well->id == SKL_DISP_PW_DC_OFF) {
> +			needs_restore = power_well->dc_off.was_disabled;
> +			power_well->dc_off.was_disabled = false;
> +			spin_unlock(&power_well->dc_off.lock);

I don't understand why you need the was_disabled here and not the counter like
the regular pw?

But also overall I can't see how this avoids the mutex_lock(&power_domains->lock)
in general. Is there a way to make this more generic, so that we could define
atomic and regular power wells and easily see which ones take the atomic path?

Thanks,
Rodrigo.

> +		}
> +	}
> +	atomic_inc(&power_domains->domain_use_count[POWER_DOMAIN_VBLANK]);
> +
> +	return needs_restore;
> +}
> +
> +/**
> + * intel_display_power_vblank_put - drop a VBLANK power domain reference atomically
> + *
> + * A non-blocking version of intel_display_power_put() specifically to control the
> + * DC_OFF power well in atomic contexts.
> + */
> +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv)
> +{
> +	struct i915_power_domains *power_domains = &dev_priv->power_domains;
> +	struct i915_power_well *power_well;
> +
> +	if (!HAS_CSR(dev_priv) || !dev_priv->psr.source_ok)
> +		return;
> +
> +	WARN(atomic_dec_return(&power_domains->domain_use_count[POWER_DOMAIN_VBLANK]) < 0,
> +	     "Use count on domain %s was already zero\n",
> +	     intel_display_power_domain_str(POWER_DOMAIN_VBLANK));
> +
> +	for_each_power_domain_well_rev(dev_priv,  power_well, BIT_ULL(POWER_DOMAIN_VBLANK)) {
> +		if (power_well->id == SKL_DISP_PW_DC_OFF)
> +			spin_lock(&power_well->dc_off.lock);
> +
> +		intel_power_well_put(dev_priv, power_well);
> +
> +		if (power_well->id == SKL_DISP_PW_DC_OFF)
> +			spin_unlock(&power_well->dc_off.lock);
> +	}
> +
> +	intel_runtime_pm_put(dev_priv);
> +}
> +
>  static void
>  __intel_display_power_get_domain(struct drm_i915_private *dev_priv,
>  				 enum intel_display_power_domain domain)
> @@ -1448,9 +1529,16 @@ __intel_display_power_get_domain(struct drm_i915_private *dev_priv,
>  	struct i915_power_domains *power_domains = &dev_priv->power_domains;
>  	struct i915_power_well *power_well;
>  
> -	for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain))
> +	for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain)) {
> +		if (power_well->id == SKL_DISP_PW_DC_OFF)
> +			spin_lock(&power_well->dc_off.lock);
> +
>  		intel_power_well_get(dev_priv, power_well);
>  
> +		if (power_well->id == SKL_DISP_PW_DC_OFF)
> +			spin_unlock(&power_well->dc_off.lock);
> +	}
> +
>  	atomic_inc(&power_domains->domain_use_count[domain]);
>  }
>  
> @@ -1541,9 +1629,16 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
>  	     "Use count on domain %s was already zero\n",
>  	     intel_display_power_domain_str(domain));
>  
> -	for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain))
> +	for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain)) {
> +		if (power_well->id == SKL_DISP_PW_DC_OFF)
> +			spin_lock(&power_well->dc_off.lock);
> +
>  		intel_power_well_put(dev_priv, power_well);
>  
> +		if (power_well->id == SKL_DISP_PW_DC_OFF)
> +			spin_unlock(&power_well->dc_off.lock);
> +	}
> +
>  	mutex_unlock(&power_domains->lock);
>  
>  	intel_runtime_pm_put(dev_priv);
> @@ -1706,6 +1801,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
>  	SKL_DISPLAY_POWERWELL_2_POWER_DOMAINS |		\
>  	BIT_ULL(POWER_DOMAIN_MODESET) |			\
>  	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
> +	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
>  	BIT_ULL(POWER_DOMAIN_INIT))
>  
>  #define BXT_DISPLAY_POWERWELL_2_POWER_DOMAINS (		\
> @@ -2342,6 +2438,9 @@ static struct i915_power_well cnl_power_wells[] = {
>  		.domains = CNL_DISPLAY_DC_OFF_POWER_DOMAINS,
>  		.ops = &gen9_dc_off_power_well_ops,
>  		.id = SKL_DISP_PW_DC_OFF,
> +		{
> +			.dc_off.was_disabled = false,
> +		},
>  	},
>  	{
>  		.name = "power well 2",
> @@ -2470,6 +2569,7 @@ static void assert_power_well_ids_unique(struct drm_i915_private *dev_priv)
>  int intel_power_domains_init(struct drm_i915_private *dev_priv)
>  {
>  	struct i915_power_domains *power_domains = &dev_priv->power_domains;
> +	struct i915_power_well *dc_off_pw;
>  
>  	i915_modparams.disable_power_well =
>  		sanitize_disable_power_well_option(dev_priv,
> @@ -2507,6 +2607,10 @@ int intel_power_domains_init(struct drm_i915_private *dev_priv)
>  		set_power_wells(power_domains, i9xx_always_on_power_well);
>  	}
>  
> +	dc_off_pw = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF);
> +	if (dc_off_pw)
> +		spin_lock_init(&dc_off_pw->dc_off.lock);
> +
>  	assert_power_well_ids_unique(dev_priv);
>  
>  	return 0;
> @@ -2555,8 +2659,14 @@ static void intel_power_domains_sync_hw(struct drm_i915_private *dev_priv)
>  	mutex_lock(&power_domains->lock);
>  	for_each_power_well(dev_priv, power_well) {
>  		power_well->ops->sync_hw(dev_priv, power_well);
> +
> +		if (power_well->id == SKL_DISP_PW_DC_OFF)
> +			spin_lock(&power_well->dc_off.lock);
> +
>  		power_well->hw_enabled = power_well->ops->is_enabled(dev_priv,
>  								     power_well);
> +		if (power_well->id == SKL_DISP_PW_DC_OFF)
> +			spin_unlock(&power_well->dc_off.lock);
>  	}
>  	mutex_unlock(&power_domains->lock);
>  }
> @@ -3079,6 +3189,13 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
>  		if (!power_well->domains)
>  			continue;
>  
> +		/*
> +		 * Reading SKL_DISP_PW_DC_OFF power_well->count without its
> +		 * private spinlock should be safe here as power_well->count
> +		 * gets modified only in power_well_get() and power_well_put()
> +		 * and they are not called until drm_crtc_vblank_on().
> +		 */
> +
>  		enabled = power_well->ops->is_enabled(dev_priv, power_well);
>  		if ((power_well->count || power_well->always_on) != enabled)
>  			DRM_ERROR("power well %s state mismatch (refcount %d/enabled %d)",
> -- 
> 2.11.0
> 
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: ✗ Fi.CI.BAT: failure for series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled.
  2017-12-06 23:16 ` ✗ Fi.CI.BAT: failure for series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Patchwork
@ 2017-12-07 18:46   ` Pandiyan, Dhinakaran
  2017-12-07 18:46   ` Pandiyan, Dhinakaran
  2017-12-07 18:47   ` Pandiyan, Dhinakaran
  2 siblings, 0 replies; 12+ messages in thread
From: Pandiyan, Dhinakaran @ 2017-12-07 18:46 UTC (permalink / raw)
  To: intel-gfx




On Wed, 2017-12-06 at 23:16 +0000, Patchwork wrote:
> == Series Details ==
> 
> Series: series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled.
> URL   : https://patchwork.freedesktop.org/series/34996/
> State : failure
> 
> == Summary ==
> 
> Series 34996v1 series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled.
> https://patchwork.freedesktop.org/api/1.0/series/34996/revisions/1/mbox/
> 
> Test debugfs_test:
>         Subgroup read_all_entries:
>                 dmesg-fail -> PASS       (fi-elk-e7500) fdo#103989 +1
> Test gem_exec_reloc:
>         Subgroup basic-cpu-read-active:
>                 pass       -> FAIL       (fi-gdg-551) fdo#102582 +3
> Test kms_pipe_crc_basic:
>         Subgroup read-crc-pipe-a:
>                 pass       -> FAIL       (fi-skl-6700k)
>         Subgroup suspend-read-crc-pipe-b:
>                 pass       -> INCOMPLETE (fi-snb-2520m) fdo#103713
> Test kms_psr_sink_crc:
>         Subgroup psr_basic:
>                 pass       -> DMESG-WARN (fi-skl-6600u)
>                 pass       -> DMESG-WARN (fi-skl-6700hq)
>                 pass       -> DMESG-WARN (fi-kbl-7560u)
>                 pass       -> DMESG-WARN (fi-kbl-r)

Lockdep failures, I think switching to irqsave spin locks will fix
these.
> 
> fdo#103989 https://bugs.freedesktop.org/show_bug.cgi?id=103989
> fdo#102582 https://bugs.freedesktop.org/show_bug.cgi?id=102582
> fdo#103713 https://bugs.freedesktop.org/show_bug.cgi?id=103713
> 
> fi-bdw-5557u     total:288  pass:267  dwarn:0   dfail:0   fail:0   skip:21  time:438s
> fi-blb-e6850     total:288  pass:223  dwarn:1   dfail:0   fail:0   skip:64  time:385s
> fi-bsw-n3050     total:288  pass:242  dwarn:0   dfail:0   fail:0   skip:46  time:522s
> fi-bwr-2160      total:288  pass:183  dwarn:0   dfail:0   fail:0   skip:105 time:280s
> fi-bxt-dsi       total:288  pass:258  dwarn:0   dfail:0   fail:0   skip:30  time:505s
> fi-bxt-j4205     total:288  pass:259  dwarn:0   dfail:0   fail:0   skip:29  time:511s
> fi-byt-j1900     total:288  pass:253  dwarn:0   dfail:0   fail:0   skip:35  time:491s
> fi-byt-n2820     total:288  pass:249  dwarn:0   dfail:0   fail:0   skip:39  time:472s
> fi-elk-e7500     total:224  pass:163  dwarn:15  dfail:0   fail:0   skip:45 
> fi-gdg-551       total:288  pass:174  dwarn:1   dfail:0   fail:5   skip:108 time:270s
> fi-glk-1         total:288  pass:260  dwarn:0   dfail:0   fail:0   skip:28  time:542s
> fi-hsw-4770      total:288  pass:261  dwarn:0   dfail:0   fail:0   skip:27  time:385s
> fi-hsw-4770r     total:288  pass:224  dwarn:0   dfail:0   fail:0   skip:64  time:260s
> fi-ivb-3520m     total:288  pass:259  dwarn:0   dfail:0   fail:0   skip:29  time:484s
> fi-ivb-3770      total:288  pass:259  dwarn:0   dfail:0   fail:0   skip:29  time:444s
> fi-kbl-7560u     total:288  pass:268  dwarn:1   dfail:0   fail:0   skip:19  time:526s
> fi-kbl-7567u     total:288  pass:268  dwarn:0   dfail:0   fail:0   skip:20  time:478s
> fi-kbl-r         total:288  pass:260  dwarn:1   dfail:0   fail:0   skip:27  time:531s
> fi-pnv-d510      total:288  pass:222  dwarn:1   dfail:0   fail:0   skip:65  time:595s
> fi-skl-6260u     total:288  pass:268  dwarn:0   dfail:0   fail:0   skip:20  time:456s
> fi-skl-6600u     total:288  pass:260  dwarn:1   dfail:0   fail:0   skip:27  time:547s
> fi-skl-6700hq    total:288  pass:261  dwarn:1   dfail:0   fail:0   skip:26  time:561s
> fi-skl-6700k     total:288  pass:263  dwarn:0   dfail:0   fail:1   skip:24  time:519s
> fi-skl-6770hq    total:288  pass:268  dwarn:0   dfail:0   fail:0   skip:20  time:496s
> fi-snb-2520m     total:245  pass:211  dwarn:0   dfail:0   fail:0   skip:33 
> fi-snb-2600      total:288  pass:248  dwarn:0   dfail:0   fail:0   skip:40  time:423s
> Blacklisted hosts:
> fi-cnl-y         total:249  pass:224  dwarn:0   dfail:0   fail:0   skip:24 
> fi-glk-dsi       total:288  pass:186  dwarn:1   dfail:4   fail:0   skip:97  time:397s
> 
> 95f37eb3ebfd65f500e0e80c6788df200f4c2f99 drm-tip: 2017y-12m-06d-21h-01m-04s UTC integration manifest
> 2864dbfcc343 drm/i915: Use the vblank power domain disallow or disable DC states.
> b0c13977ae82 drm/i915: Introduce a non-blocking power domain for vblank interrupts
> c32242c2a8b1 drm/i915: Use an atomic_t array to track power domain use count.
> 6de634ba83cf drm/vblank: Restoring vblank counts after device runtime PM events.
> 43170fd34edb drm/vblank: Do not update vblank counts if vblanks are already disabled.
> 
> == Logs ==
> 
> For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_7432/
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 12+ messages in thread


* Re: [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts
  2017-12-06 23:54   ` Rodrigo Vivi
@ 2017-12-07 19:01     ` Pandiyan, Dhinakaran
  0 siblings, 0 replies; 12+ messages in thread
From: Pandiyan, Dhinakaran @ 2017-12-07 19:01 UTC (permalink / raw)
  To: Vivi, Rodrigo; +Cc: daniel.vetter, intel-gfx, dri-devel




On Wed, 2017-12-06 at 15:54 -0800, Rodrigo Vivi wrote:
> On Wed, Dec 06, 2017 at 10:47:40PM +0000, Dhinakaran Pandiyan wrote:
> > When DC states are enabled and PSR is active, the hardware enters DC5/DC6
> > states resulting in frame counter resets. The frame counter resets mess
> > up the vblank counting logic. So in order to disable DC states when
> > vblank interrupts are required and to disallow DC states when vblank
> > interrupts are already enabled, introduce a new power domain. Since this
> > power domain reference needs to be acquired and released in atomic context,
> > the corresponding _get() and _put() methods skip the power_domain mutex.
> 
> \o/
> 
> > 
> > Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> > ---
> >  drivers/gpu/drm/i915/i915_drv.h         |   5 ++
> >  drivers/gpu/drm/i915/intel_drv.h        |   3 +-
> >  drivers/gpu/drm/i915/intel_runtime_pm.c | 125 +++++++++++++++++++++++++++++++-
> >  3 files changed, 128 insertions(+), 5 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> > index 18d42885205b..ba9107ec1ed1 100644
> > --- a/drivers/gpu/drm/i915/i915_drv.h
> > +++ b/drivers/gpu/drm/i915/i915_drv.h
> > @@ -397,6 +397,7 @@ enum intel_display_power_domain {
> >  	POWER_DOMAIN_AUX_C,
> >  	POWER_DOMAIN_AUX_D,
> >  	POWER_DOMAIN_GMBUS,
> > +	POWER_DOMAIN_VBLANK,
> >  	POWER_DOMAIN_MODESET,
> >  	POWER_DOMAIN_INIT,
> >  
> > @@ -1475,6 +1476,10 @@ struct i915_power_well {
> >  			bool has_vga:1;
> >  			bool has_fuses:1;
> >  		} hsw;
> > +		struct {
> > +			spinlock_t lock;
> > +			bool was_disabled;
> > +		} dc_off;
> 
> what about a more generic name here?
> something like
> 
> +		struct {
> +			spinlock_t lock;
> +			bool was_disabled;
> +		} atomic;
> +		is_atomic; //or needs_atomic
> 
> so...
> 
> >  	};
> >  	const struct i915_power_well_ops *ops;
> >  };
> > diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> > index 30f791f89d64..93ca503f18bb 100644
> > --- a/drivers/gpu/drm/i915/intel_drv.h
> > +++ b/drivers/gpu/drm/i915/intel_drv.h
> > @@ -1865,7 +1865,8 @@ void chv_phy_powergate_lanes(struct intel_encoder *encoder,
> >  			     bool override, unsigned int mask);
> >  bool chv_phy_powergate_ch(struct drm_i915_private *dev_priv, enum dpio_phy phy,
> >  			  enum dpio_channel ch, bool override);
> > -
> > +bool intel_display_power_vblank_get(struct drm_i915_private *dev_priv);
> > +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv);
> >  
> >  /* intel_pm.c */
> >  void intel_init_clock_gating(struct drm_i915_private *dev_priv);
> > diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
> > index f88f2c070c5f..f1807bd74242 100644
> > --- a/drivers/gpu/drm/i915/intel_runtime_pm.c
> > +++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
> > @@ -126,6 +126,8 @@ intel_display_power_domain_str(enum intel_display_power_domain domain)
> >  		return "AUX_D";
> >  	case POWER_DOMAIN_GMBUS:
> >  		return "GMBUS";
> > +	case POWER_DOMAIN_VBLANK:
> > +		return "VBLANK";
> >  	case POWER_DOMAIN_INIT:
> >  		return "INIT";
> >  	case POWER_DOMAIN_MODESET:
> > @@ -196,10 +198,17 @@ bool __intel_display_power_is_enabled(struct drm_i915_private *dev_priv,
> >  		if (power_well->always_on)
> >  			continue;
> >  
> > -		if (!power_well->hw_enabled) {
> > +		if (power_well->id == SKL_DISP_PW_DC_OFF)
> > +			spin_lock(&power_well->dc_off.lock);
> 
> > ... instead of a specific pw check here you would have
> 
> +	if (power_well->is_atomic)
> +		spin_lock(&power_well->atomic.lock)
> 
> > +
> > +		if (!power_well->hw_enabled)
> >  			is_enabled = false;
> > +
> > +		if (power_well->id == SKL_DISP_PW_DC_OFF)
> > +			spin_unlock(&power_well->dc_off.lock);
> > +
> > +		if (!is_enabled)
> >  			break;
> > -		}
> >  	}
> >  
> >  	return is_enabled;
> > @@ -724,6 +733,7 @@ static void gen9_dc_off_power_well_disable(struct drm_i915_private *dev_priv,
> >  		skl_enable_dc6(dev_priv);
> >  	else if (dev_priv->csr.allowed_dc_mask & DC_STATE_EN_UPTO_DC5)
> >  		gen9_enable_dc5(dev_priv);
> > +	power_well->dc_off.was_disabled = true;
> >  }
> >  
> >  static void i9xx_power_well_sync_hw_noop(struct drm_i915_private *dev_priv,
> > @@ -1441,6 +1451,77 @@ static void chv_pipe_power_well_disable(struct drm_i915_private *dev_priv,
> >  	chv_set_pipe_power_well(dev_priv, power_well, false);
> >  }
> >  
> > +/**
> > + * intel_display_power_vblank_get - acquire a VBLANK power domain reference atomically
> > + * @dev_priv: i915 device instance
> > + *
> > + * This function gets a POWER_DOMAIN_VBLANK reference without blocking and
> > + * returns true if the DC_OFF power well was disabled since this function was
> > + * called the last time.
> > + */
> > +bool intel_display_power_vblank_get(struct drm_i915_private *dev_priv)
> > +{
> > +	struct i915_power_domains *power_domains = &dev_priv->power_domains;
> > +	struct i915_power_well *power_well;
> > +	bool needs_restore = false;
> > +
> > +	if (!HAS_CSR(dev_priv) || !dev_priv->psr.source_ok)
> > +		return false;
> > +
> > +	/* The corresponding CRTC should be active by the time driver turns on
> > +	 * vblank interrupts, which in turn means the enabled pipe power domain
> > +	 * would have acquired the device runtime pm reference.
> > +	 */
> > +	intel_runtime_pm_get_if_in_use(dev_priv);
> > +
> > +	for_each_power_domain_well(dev_priv,  power_well, BIT_ULL(POWER_DOMAIN_VBLANK)) {
> > +		if (power_well->id == SKL_DISP_PW_DC_OFF)
> > +			spin_lock(&power_well->dc_off.lock);
> > +
> > +		intel_power_well_get(dev_priv, power_well);
> > +
> > +		if (power_well->id == SKL_DISP_PW_DC_OFF) {
> > +			needs_restore = power_well->dc_off.was_disabled;
> > +			power_well->dc_off.was_disabled = false;
> > +			spin_unlock(&power_well->dc_off.lock);
> 
> I don't understand why you need the was_disabled here and not the counter like
> the regular pw?
> 
The return from this function is used to determine whether we need to
calculate the missed vblanks.

Let's say we returned true only when this call actually enabled the power
well (i.e., based on the refcount going from 0 to 1); we'd then end up
not fixing up the vblanks in this scenario -
_vblank_put() 		-> DC off disabled(ref count=0)
<frame counter reset>
_power_get(AUX_A) 	-> DC off enabled (ref count=1)
_vblank_get()		-> 		  (ref count=2)



> But also overall I can't see how this is avoiding the mutex_lock(&power_domains->lock);
> in general. Is there a way to make it more generic in a way that we could define atomic pw and regular pw
> and we could see which one would be taking the atomic path easily?
> 

I did try to create a generic interface with _power_nb_get(domain) and
_power_nb_put(domain), but the above problem meant that I had to write
code specific to the VBLANK power domain inside the generic
_power_nb_get(domain) method.


> Thanks,
> Rodrigo.
> 
> > +		}
> > +	}
> > +	atomic_inc(&power_domains->domain_use_count[POWER_DOMAIN_VBLANK]);
> > +
> > +	return needs_restore;
> > +}
> > +
> > +/**
> > + * intel_display_power_vblank_put - drop a VBLANK power domain reference atomically
> > + *
> > + * A non-blocking version of intel_display_power_put() specifically to control the
> > + * DC_OFF power well in atomic contexts.
> > + */
> > +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv)
> > +{
> > +	struct i915_power_domains *power_domains = &dev_priv->power_domains;
> > +	struct i915_power_well *power_well;
> > +
> > +	if (!HAS_CSR(dev_priv) || !dev_priv->psr.source_ok)
> > +		return;
> > +
> > +	WARN(atomic_dec_return(&power_domains->domain_use_count[POWER_DOMAIN_VBLANK]) < 0,
> > +	     "Use count on domain %s was already zero\n",
> > +	     intel_display_power_domain_str(POWER_DOMAIN_VBLANK));
> > +
> > +	for_each_power_domain_well_rev(dev_priv,  power_well, BIT_ULL(POWER_DOMAIN_VBLANK)) {
> > +		if (power_well->id == SKL_DISP_PW_DC_OFF)
> > +			spin_lock(&power_well->dc_off.lock);
> > +
> > +		intel_power_well_put(dev_priv, power_well);
> > +
> > +		if (power_well->id == SKL_DISP_PW_DC_OFF)
> > +			spin_unlock(&power_well->dc_off.lock);
> > +	}
> > +
> > +	intel_runtime_pm_put(dev_priv);
> > +}
> > +
> >  static void
> >  __intel_display_power_get_domain(struct drm_i915_private *dev_priv,
> >  				 enum intel_display_power_domain domain)
> > @@ -1448,9 +1529,16 @@ __intel_display_power_get_domain(struct drm_i915_private *dev_priv,
> >  	struct i915_power_domains *power_domains = &dev_priv->power_domains;
> >  	struct i915_power_well *power_well;
> >  
> > -	for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain))
> > +	for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain)) {
> > +		if (power_well->id == SKL_DISP_PW_DC_OFF)
> > +			spin_lock(&power_well->dc_off.lock);
> > +
> >  		intel_power_well_get(dev_priv, power_well);
> >  
> > +		if (power_well->id == SKL_DISP_PW_DC_OFF)
> > +			spin_unlock(&power_well->dc_off.lock);
> > +	}
> > +
> >  	atomic_inc(&power_domains->domain_use_count[domain]);
> >  }
> >  
> > @@ -1541,9 +1629,16 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
> >  	     "Use count on domain %s was already zero\n",
> >  	     intel_display_power_domain_str(domain));
> >  
> > -	for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain))
> > +	for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain)) {
> > +		if (power_well->id == SKL_DISP_PW_DC_OFF)
> > +			spin_lock(&power_well->dc_off.lock);
> > +
> >  		intel_power_well_put(dev_priv, power_well);
> >  
> > +		if (power_well->id == SKL_DISP_PW_DC_OFF)
> > +			spin_unlock(&power_well->dc_off.lock);
> > +	}
> > +
> >  	mutex_unlock(&power_domains->lock);
> >  
> >  	intel_runtime_pm_put(dev_priv);
> > @@ -1706,6 +1801,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
> >  	SKL_DISPLAY_POWERWELL_2_POWER_DOMAINS |		\
> >  	BIT_ULL(POWER_DOMAIN_MODESET) |			\
> >  	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
> > +	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
> >  	BIT_ULL(POWER_DOMAIN_INIT))
> >  
> >  #define BXT_DISPLAY_POWERWELL_2_POWER_DOMAINS (		\
> > @@ -2342,6 +2438,9 @@ static struct i915_power_well cnl_power_wells[] = {
> >  		.domains = CNL_DISPLAY_DC_OFF_POWER_DOMAINS,
> >  		.ops = &gen9_dc_off_power_well_ops,
> >  		.id = SKL_DISP_PW_DC_OFF,
> > +		{
> > +			.dc_off.was_disabled = false,
> > +		},
> >  	},
> >  	{
> >  		.name = "power well 2",
> > @@ -2470,6 +2569,7 @@ static void assert_power_well_ids_unique(struct drm_i915_private *dev_priv)
> >  int intel_power_domains_init(struct drm_i915_private *dev_priv)
> >  {
> >  	struct i915_power_domains *power_domains = &dev_priv->power_domains;
> > +	struct i915_power_well *dc_off_pw;
> >  
> >  	i915_modparams.disable_power_well =
> >  		sanitize_disable_power_well_option(dev_priv,
> > @@ -2507,6 +2607,10 @@ int intel_power_domains_init(struct drm_i915_private *dev_priv)
> >  		set_power_wells(power_domains, i9xx_always_on_power_well);
> >  	}
> >  
> > +	dc_off_pw = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF);
> > +	if (dc_off_pw)
> > +		spin_lock_init(&dc_off_pw->dc_off.lock);
> > +
> >  	assert_power_well_ids_unique(dev_priv);
> >  
> >  	return 0;
> > @@ -2555,8 +2659,14 @@ static void intel_power_domains_sync_hw(struct drm_i915_private *dev_priv)
> >  	mutex_lock(&power_domains->lock);
> >  	for_each_power_well(dev_priv, power_well) {
> >  		power_well->ops->sync_hw(dev_priv, power_well);
> > +
> > +		if (power_well->id == SKL_DISP_PW_DC_OFF)
> > +			spin_lock(&power_well->dc_off.lock);
> > +
> >  		power_well->hw_enabled = power_well->ops->is_enabled(dev_priv,
> >  								     power_well);
> > +		if (power_well->id == SKL_DISP_PW_DC_OFF)
> > +			spin_unlock(&power_well->dc_off.lock);
> >  	}
> >  	mutex_unlock(&power_domains->lock);
> >  }
> > @@ -3079,6 +3189,13 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
> >  		if (!power_well->domains)
> >  			continue;
> >  
> > +		/*
> > +		 * Reading SKL_DISP_PW_DC_OFF power_well->count without its
> > +		 * private spinlock should be safe here as power_well->count
> > +		 * gets modified only in power_well_get() and power_well_put()
> > +		 * and they are not called until drm_crtc_vblank_on().
> > +		 */
> > +
> >  		enabled = power_well->ops->is_enabled(dev_priv, power_well);
> >  		if ((power_well->count || power_well->always_on) != enabled)
> >  			DRM_ERROR("power well %s state mismatch (refcount %d/enabled %d)",
> > -- 
> > 2.11.0
> > 
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 5/5] drm/i915: Use the vblank power domain to disallow or disable DC states.
  2018-01-03 20:39 [PATCH 1/5] " Dhinakaran Pandiyan
@ 2018-01-03 20:40 ` Dhinakaran Pandiyan
  0 siblings, 0 replies; 12+ messages in thread
From: Dhinakaran Pandiyan @ 2018-01-03 20:40 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter, Dhinakaran Pandiyan, Rodrigo Vivi

Disable DC states before enabling vblank interrupts and, conversely,
enable DC states after disabling them. Since the frame counter may have
been reset between disabling and enabling, use drm_crtc_vblank_restore()
to compute the missed vblanks.

Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
---
 drivers/gpu/drm/i915/i915_irq.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 3517c6548e2c..88b4ceac55d0 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -2963,6 +2963,11 @@ static int gen8_enable_vblank(struct drm_device *dev, unsigned int pipe)
 {
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	unsigned long irqflags;
+	bool needs_restore = false;
+
+	intel_display_power_vblank_get(dev_priv, &needs_restore);
+	if (needs_restore)
+		drm_crtc_vblank_restore(dev, pipe);
 
 	spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
 	bdw_enable_pipe_irq(dev_priv, pipe, GEN8_PIPE_VBLANK);
@@ -3015,6 +3020,7 @@ static void gen8_disable_vblank(struct drm_device *dev, unsigned int pipe)
 	spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
 	bdw_disable_pipe_irq(dev_priv, pipe, GEN8_PIPE_VBLANK);
 	spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
+	intel_display_power_vblank_put(dev_priv);
 }
 
 static void ibx_irq_reset(struct drm_i915_private *dev_priv)
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2018-01-03 20:41 UTC | newest]

Thread overview: 12+ messages
2017-12-06 22:47 [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan
2017-12-06 22:47 ` [PATCH 2/5] drm/vblank: Restoring vblank counts after device runtime PM events Dhinakaran Pandiyan
2017-12-06 22:47 ` [PATCH 3/5] drm/i915: Use an atomic_t array to track power domain use count Dhinakaran Pandiyan
2017-12-06 22:47 ` [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts Dhinakaran Pandiyan
2017-12-06 23:54   ` Rodrigo Vivi
2017-12-07 19:01     ` Pandiyan, Dhinakaran
2017-12-06 22:47 ` [PATCH 5/5] drm/i915: Use the vblank power domain to disallow or disable DC states Dhinakaran Pandiyan
2017-12-06 23:16 ` ✗ Fi.CI.BAT: failure for series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Patchwork
2017-12-07 18:46   ` Pandiyan, Dhinakaran
2017-12-07 18:46   ` Pandiyan, Dhinakaran
2017-12-07 18:47   ` Pandiyan, Dhinakaran
2018-01-03 20:39 [PATCH 1/5] " Dhinakaran Pandiyan
2018-01-03 20:40 ` [PATCH 5/5] drm/i915: Use the vblank power domain to disallow or disable DC states Dhinakaran Pandiyan
