* [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled. @ 2017-12-06 22:47 Dhinakaran Pandiyan 2017-12-06 22:47 ` [PATCH 2/5] drm/vblank: Restoring vblank counts after device runtime PM events Dhinakaran Pandiyan ` (4 more replies) 0 siblings, 5 replies; 16+ messages in thread From: Dhinakaran Pandiyan @ 2017-12-06 22:47 UTC (permalink / raw) To: intel-gfx; +Cc: Daniel Vetter, Dhinakaran Pandiyan, dri-devel, Rodrigo Vivi Updating the vblank counts requires register reads and these reads may not return meaningful values after the vblank interrupts are disabled as the device may go to low power state. An additional change would be to allow the driver to save the vblank counts before entering a low power state, but that's for the future. Also, disable vblanks after reading the HW counter in the case where _crtc_vblank_off() is disabling vblanks. Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com> --- drivers/gpu/drm/drm_vblank.c | 23 +++++++++-------------- 1 file changed, 9 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c index 32d9bcf5be7f..7eee82c06ed8 100644 --- a/drivers/gpu/drm/drm_vblank.c +++ b/drivers/gpu/drm/drm_vblank.c @@ -347,23 +347,14 @@ void drm_vblank_disable_and_save(struct drm_device *dev, unsigned int pipe) spin_lock_irqsave(&dev->vblank_time_lock, irqflags); /* - * Only disable vblank interrupts if they're enabled. This avoids - * calling the ->disable_vblank() operation in atomic context with the - * hardware potentially runtime suspended. - */ - if (vblank->enabled) { - __disable_vblank(dev, pipe); - vblank->enabled = false; - } - - /* - * Always update the count and timestamp to maintain the + * Update the count and timestamp to maintain the * appearance that the counter has been ticking all along until * this time. This makes the count account for the entire time * between drm_crtc_vblank_on() and drm_crtc_vblank_off(). 
*/ drm_update_vblank_count(dev, pipe, false); - + __disable_vblank(dev, pipe); + vblank->enabled = false; spin_unlock_irqrestore(&dev->vblank_time_lock, irqflags); } @@ -1122,8 +1113,12 @@ void drm_crtc_vblank_off(struct drm_crtc *crtc) pipe, vblank->enabled, vblank->inmodeset); /* Avoid redundant vblank disables without previous - * drm_crtc_vblank_on(). */ - if (drm_core_check_feature(dev, DRIVER_ATOMIC) || !vblank->inmodeset) + * drm_crtc_vblank_on() and only disable them if they're enabled. This + * avoids calling the ->disable_vblank() operation in atomic context + * with the hardware potentially runtime suspended. + */ + if ((drm_core_check_feature(dev, DRIVER_ATOMIC) || !vblank->inmodeset) && + vblank->enabled) drm_vblank_disable_and_save(dev, pipe); wake_up(&vblank->queue); -- 2.11.0 _______________________________________________ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/intel-gfx ^ permalink raw reply related [flat|nested] 16+ messages in thread
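[Editorial note: the ordering change in patch 1 — sample the HW counter first, disable vblanks second — can be modeled outside the kernel with a minimal userspace sketch. The names below (fake_vblank, read_hw_counter, disable_and_save) are illustrative stand-ins, not the actual DRM internals.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Minimal model of the ordering fix: the HW counter must be sampled
 * while vblanks are still enabled, because once the device power-gates
 * the register read may return garbage (modeled here as 0xdeadbeef). */
struct fake_vblank {
	bool enabled;
	uint32_t hw_count;    /* what the frame counter register would return */
	uint32_t saved_count; /* software copy kept across the disable */
};

static uint32_t read_hw_counter(const struct fake_vblank *vbl)
{
	return vbl->enabled ? vbl->hw_count : 0xdeadbeefu;
}

/* The order fixed by the patch: update first, then disable. */
void disable_and_save(struct fake_vblank *vbl)
{
	vbl->saved_count = read_hw_counter(vbl); /* drm_update_vblank_count() */
	vbl->enabled = false;                    /* __disable_vblank() */
}
```

With the pre-patch order (disable first), the saved count in this model would be the garbage value, which is the failure mode the commit message describes.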
* [PATCH 2/5] drm/vblank: Restoring vblank counts after device runtime PM events. 2017-12-06 22:47 [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan @ 2017-12-06 22:47 ` Dhinakaran Pandiyan 2017-12-06 22:47 ` [PATCH 3/5] drm/i915: Use an atomic_t array to track power domain use count Dhinakaran Pandiyan ` (3 subsequent siblings) 4 siblings, 0 replies; 16+ messages in thread From: Dhinakaran Pandiyan @ 2017-12-06 22:47 UTC (permalink / raw) To: intel-gfx; +Cc: Daniel Vetter, Dhinakaran Pandiyan, dri-devel, Rodrigo Vivi The HW frame counter can get reset when the device enters low power states, and this messes up any subsequent vblank count updates. So, compute the vblank interrupts missed during the low power state using timestamps. This is similar to _crtc_vblank_on() except that it doesn't enable vblank interrupts, because this function is expected to be called from the driver's _enable_vblank() vfunc. Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com> --- drivers/gpu/drm/drm_vblank.c | 30 ++++++++++++++++++++++++++++++ include/drm/drm_vblank.h | 1 + 2 files changed, 31 insertions(+) diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c index 7eee82c06ed8..69d537cea149 100644 --- a/drivers/gpu/drm/drm_vblank.c +++ b/drivers/gpu/drm/drm_vblank.c @@ -1230,6 +1230,36 @@ void drm_crtc_vblank_on(struct drm_crtc *crtc) } EXPORT_SYMBOL(drm_crtc_vblank_on); +void drm_crtc_vblank_restore(struct drm_device *dev, unsigned int pipe) +{ + ktime_t t_vblank; + struct drm_vblank_crtc *vblank; + int framedur_ns; + u64 diff_ns; + u32 cur_vblank, diff = 1; + int count = DRM_TIMESTAMP_MAXRETRIES; + + if (WARN_ON(pipe >= dev->num_crtcs)) + return; + + vblank = &dev->vblank[pipe]; + framedur_ns = vblank->framedur_ns; + + do { + cur_vblank = __get_vblank_counter(dev, pipe); + drm_get_last_vbltimestamp(dev, pipe, &t_vblank, false); + } while (cur_vblank != __get_vblank_counter(dev, pipe) && --count > 
0); + + diff_ns = ktime_to_ns(ktime_sub(t_vblank, vblank->time)); + if (framedur_ns) + diff = DIV_ROUND_CLOSEST_ULL(diff_ns, framedur_ns); + + DRM_DEBUG_VBL("computing missed vblanks %lld/%d=%d after HW counter reset hw_diff=%d\n", + diff_ns, framedur_ns, diff, cur_vblank - vblank->last); + store_vblank(dev, pipe, diff, t_vblank, cur_vblank); +} +EXPORT_SYMBOL(drm_crtc_vblank_restore); + static void drm_legacy_vblank_pre_modeset(struct drm_device *dev, unsigned int pipe) { diff --git a/include/drm/drm_vblank.h b/include/drm/drm_vblank.h index 848b463a0af5..aafcbef91bd7 100644 --- a/include/drm/drm_vblank.h +++ b/include/drm/drm_vblank.h @@ -180,6 +180,7 @@ void drm_crtc_vblank_off(struct drm_crtc *crtc); void drm_crtc_vblank_reset(struct drm_crtc *crtc); void drm_crtc_vblank_on(struct drm_crtc *crtc); u32 drm_crtc_accurate_vblank_count(struct drm_crtc *crtc); +void drm_crtc_vblank_restore(struct drm_device *dev, unsigned int pipe); bool drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev, unsigned int pipe, int *max_error, -- 2.11.0 _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply related [flat|nested] 16+ messages in thread
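[Editorial note: the core of drm_crtc_vblank_restore() is the timestamp arithmetic, missed vblanks = round(diff_ns / framedur_ns). A standalone sketch of just that computation follows; div_round_closest_u64() is a userspace stand-in for the kernel's DIV_ROUND_CLOSEST_ULL macro, and the function name is hypothetical.]

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-in for the kernel's DIV_ROUND_CLOSEST_ULL(). */
static uint64_t div_round_closest_u64(uint64_t n, uint64_t d)
{
	return (n + d / 2) / d;
}

/* Estimate vblanks missed while the counter was dead: elapsed time
 * since the last recorded vblank divided by the frame duration.
 * Mirrors the patch's fallback of diff = 1 when framedur_ns is 0. */
uint32_t missed_vblanks(uint64_t t_vblank_ns, uint64_t last_ns,
			uint64_t framedur_ns)
{
	uint64_t diff_ns = t_vblank_ns - last_ns;

	if (!framedur_ns)
		return 1;
	return (uint32_t)div_round_closest_u64(diff_ns, framedur_ns);
}
```

For a 60 Hz pipe (framedur_ns ≈ 16666667), an elapsed time of two frame durations yields 2 missed vblanks, which is the `diff` that store_vblank() then applies.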
* [PATCH 3/5] drm/i915: Use an atomic_t array to track power domain use count. 2017-12-06 22:47 [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan 2017-12-06 22:47 ` [PATCH 2/5] drm/vblank: Restoring vblank counts after device runtime PM events Dhinakaran Pandiyan @ 2017-12-06 22:47 ` Dhinakaran Pandiyan 2017-12-06 22:47 ` [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts Dhinakaran Pandiyan ` (2 subsequent siblings) 4 siblings, 0 replies; 16+ messages in thread From: Dhinakaran Pandiyan @ 2017-12-06 22:47 UTC (permalink / raw) To: intel-gfx; +Cc: Daniel Vetter, Dhinakaran Pandiyan, dri-devel, Rodrigo Vivi Convert the power_domains->domain_use_count array that tracks per-domain use count to atomic_t type. This is needed to be able to read/write the use counts outside of the power domain mutex. Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com> --- drivers/gpu/drm/i915/i915_debugfs.c | 2 +- drivers/gpu/drm/i915/i915_drv.h | 2 +- drivers/gpu/drm/i915/intel_runtime_pm.c | 11 +++++------ 3 files changed, 7 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c index 28294470ae31..2a4ed54688d7 100644 --- a/drivers/gpu/drm/i915/i915_debugfs.c +++ b/drivers/gpu/drm/i915/i915_debugfs.c @@ -2805,7 +2805,7 @@ static int i915_power_domain_info(struct seq_file *m, void *unused) for_each_power_domain(power_domain, power_well->domains) seq_printf(m, " %-23s %d\n", intel_display_power_domain_str(power_domain), - power_domains->domain_use_count[power_domain]); + atomic_read(&power_domains->domain_use_count[power_domain])); } mutex_unlock(&power_domains->lock); diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index 594fd14e66c5..18d42885205b 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -1489,7 +1489,7 @@ struct i915_power_domains { int 
power_well_count; struct mutex lock; - int domain_use_count[POWER_DOMAIN_NUM]; + atomic_t domain_use_count[POWER_DOMAIN_NUM]; struct i915_power_well *power_wells; }; diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c index 8315499452dc..f88f2c070c5f 100644 --- a/drivers/gpu/drm/i915/intel_runtime_pm.c +++ b/drivers/gpu/drm/i915/intel_runtime_pm.c @@ -1451,7 +1451,7 @@ __intel_display_power_get_domain(struct drm_i915_private *dev_priv, for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain)) intel_power_well_get(dev_priv, power_well); - power_domains->domain_use_count[domain]++; + atomic_inc(&power_domains->domain_use_count[domain]); } /** @@ -1537,10 +1537,9 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, mutex_lock(&power_domains->lock); - WARN(!power_domains->domain_use_count[domain], - "Use count on domain %s is already zero\n", + WARN(atomic_dec_return(&power_domains->domain_use_count[domain]) < 0, + "Use count on domain %s was already zero\n", intel_display_power_domain_str(domain)); - power_domains->domain_use_count[domain]--; for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain)) intel_power_well_put(dev_priv, power_well); @@ -3044,7 +3043,7 @@ static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv) for_each_power_domain(domain, power_well->domains) DRM_DEBUG_DRIVER(" %-23s %d\n", intel_display_power_domain_str(domain), - power_domains->domain_use_count[domain]); + atomic_read(&power_domains->domain_use_count[domain])); } } @@ -3087,7 +3086,7 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv) domains_count = 0; for_each_power_domain(domain, power_well->domains) - domains_count += power_domains->domain_use_count[domain]; + domains_count += atomic_read(&power_domains->domain_use_count[domain]); if (power_well->count != domains_count) { DRM_ERROR("power well %s refcount/domain refcount mismatch " -- 2.11.0 
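[Editorial note: the point of patch 3 is that atomic_dec_return() folds the decrement and the underflow check into one atomic operation, so the use count no longer needs the power domain mutex. A C11 stdatomic sketch of that pattern (standalone, not i915 code; function names hypothetical):]

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* C11 stand-in for the kernel's atomic_t use count. */
static atomic_int domain_use_count;

void domain_get(void)
{
	atomic_fetch_add(&domain_use_count, 1); /* atomic_inc() */
}

/* Returns false on underflow, mirroring the patch's
 * WARN(atomic_dec_return(...) < 0, ...) condition.  fetch_sub
 * returns the previous value, so subtract 1 to get the new one. */
bool domain_put(void)
{
	return atomic_fetch_sub(&domain_use_count, 1) - 1 >= 0;
}
```

Because the decrement and the returned value come from a single atomic op, two racing _put() calls cannot both observe a non-negative result for the same reference, which is what the separate `count--` under the mutex previously guaranteed.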
* [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts 2017-12-06 22:47 [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan 2017-12-06 22:47 ` [PATCH 2/5] drm/vblank: Restoring vblank counts after device runtime PM events Dhinakaran Pandiyan 2017-12-06 22:47 ` [PATCH 3/5] drm/i915: Use an atomic_t array to track power domain use count Dhinakaran Pandiyan @ 2017-12-06 22:47 ` Dhinakaran Pandiyan 2017-12-06 23:54 ` Rodrigo Vivi 2017-12-06 22:47 ` [PATCH 5/5] drm/i915: Use the vblank power domain disallow or disable DC states Dhinakaran Pandiyan 2017-12-06 23:16 ` ✗ Fi.CI.BAT: failure for series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Patchwork 4 siblings, 1 reply; 16+ messages in thread From: Dhinakaran Pandiyan @ 2017-12-06 22:47 UTC (permalink / raw) To: intel-gfx; +Cc: Daniel Vetter, Dhinakaran Pandiyan, dri-devel, Rodrigo Vivi When DC states are enabled and PSR is active, the hardware enters DC5/DC6 states resulting in frame counter resets. The frame counter resets mess up the vblank counting logic. So in order to disable DC states when vblank interrupts are required and to disallow DC states when vblank interrupts are already enabled, introduce a new power domain. Since this power domain reference needs to be acquired and released in atomic context, the corresponding _get() and _put() methods skip the power_domain mutex. 
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com> --- drivers/gpu/drm/i915/i915_drv.h | 5 ++ drivers/gpu/drm/i915/intel_drv.h | 3 +- drivers/gpu/drm/i915/intel_runtime_pm.c | 125 +++++++++++++++++++++++++++++++- 3 files changed, 128 insertions(+), 5 deletions(-) diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index 18d42885205b..ba9107ec1ed1 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -397,6 +397,7 @@ enum intel_display_power_domain { POWER_DOMAIN_AUX_C, POWER_DOMAIN_AUX_D, POWER_DOMAIN_GMBUS, + POWER_DOMAIN_VBLANK, POWER_DOMAIN_MODESET, POWER_DOMAIN_INIT, @@ -1475,6 +1476,10 @@ struct i915_power_well { bool has_vga:1; bool has_fuses:1; } hsw; + struct { + spinlock_t lock; + bool was_disabled; + } dc_off; }; const struct i915_power_well_ops *ops; }; diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h index 30f791f89d64..93ca503f18bb 100644 --- a/drivers/gpu/drm/i915/intel_drv.h +++ b/drivers/gpu/drm/i915/intel_drv.h @@ -1865,7 +1865,8 @@ void chv_phy_powergate_lanes(struct intel_encoder *encoder, bool override, unsigned int mask); bool chv_phy_powergate_ch(struct drm_i915_private *dev_priv, enum dpio_phy phy, enum dpio_channel ch, bool override); - +bool intel_display_power_vblank_get(struct drm_i915_private *dev_priv); +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv); /* intel_pm.c */ void intel_init_clock_gating(struct drm_i915_private *dev_priv); diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c index f88f2c070c5f..f1807bd74242 100644 --- a/drivers/gpu/drm/i915/intel_runtime_pm.c +++ b/drivers/gpu/drm/i915/intel_runtime_pm.c @@ -126,6 +126,8 @@ intel_display_power_domain_str(enum intel_display_power_domain domain) return "AUX_D"; case POWER_DOMAIN_GMBUS: return "GMBUS"; + case POWER_DOMAIN_VBLANK: + return "VBLANK"; case POWER_DOMAIN_INIT: return "INIT"; case 
POWER_DOMAIN_MODESET: @@ -196,10 +198,17 @@ bool __intel_display_power_is_enabled(struct drm_i915_private *dev_priv, if (power_well->always_on) continue; - if (!power_well->hw_enabled) { + if (power_well->id == SKL_DISP_PW_DC_OFF) + spin_lock(&power_well->dc_off.lock); + + if (!power_well->hw_enabled) is_enabled = false; + + if (power_well->id == SKL_DISP_PW_DC_OFF) + spin_unlock(&power_well->dc_off.lock); + + if (!is_enabled) break; - } } return is_enabled; @@ -724,6 +733,7 @@ static void gen9_dc_off_power_well_disable(struct drm_i915_private *dev_priv, skl_enable_dc6(dev_priv); else if (dev_priv->csr.allowed_dc_mask & DC_STATE_EN_UPTO_DC5) gen9_enable_dc5(dev_priv); + power_well->dc_off.was_disabled = true; } static void i9xx_power_well_sync_hw_noop(struct drm_i915_private *dev_priv, @@ -1441,6 +1451,77 @@ static void chv_pipe_power_well_disable(struct drm_i915_private *dev_priv, chv_set_pipe_power_well(dev_priv, power_well, false); } +/** + * intel_display_power_vblank_get - acquire a VBLANK power domain reference atomically + * @dev_priv: i915 device instance + * + * This function gets a POWER_DOMAIN_VBLANK reference without blocking and + * returns true if the DC_OFF power well was disabled since this function was + * called the last time. + */ +bool intel_display_power_vblank_get(struct drm_i915_private *dev_priv) +{ + struct i915_power_domains *power_domains = &dev_priv->power_domains; + struct i915_power_well *power_well; + bool needs_restore = false; + + if (!HAS_CSR(dev_priv) || !dev_priv->psr.source_ok) + return false; + + /* The corresponding CRTC should be active by the time driver turns on + * vblank interrupts, which in turn means the enabled pipe power domain + * would have acquired the device runtime pm reference. 
+ */ + intel_runtime_pm_get_if_in_use(dev_priv); + + for_each_power_domain_well(dev_priv, power_well, BIT_ULL(POWER_DOMAIN_VBLANK)) { + if (power_well->id == SKL_DISP_PW_DC_OFF) + spin_lock(&power_well->dc_off.lock); + + intel_power_well_get(dev_priv, power_well); + + if (power_well->id == SKL_DISP_PW_DC_OFF) { + needs_restore = power_well->dc_off.was_disabled; + power_well->dc_off.was_disabled = false; + spin_unlock(&power_well->dc_off.lock); + } + } + atomic_inc(&power_domains->domain_use_count[POWER_DOMAIN_VBLANK]); + + return needs_restore; +} + +/** + * intel_display_power_vblank_put - drop a VBLANK power domain reference atomically + * + * A non-blocking version intel_display_power_put() specifically to control the + * DC_OFF power well in atomic contexts. + */ +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv) +{ + struct i915_power_domains *power_domains = &dev_priv->power_domains; + struct i915_power_well *power_well; + + if (!HAS_CSR(dev_priv) || !dev_priv->psr.source_ok) + return; + + WARN(atomic_dec_return(&power_domains->domain_use_count[POWER_DOMAIN_VBLANK]) < 0, + "Use count on domain %s was already zero\n", + intel_display_power_domain_str(POWER_DOMAIN_VBLANK)); + + for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(POWER_DOMAIN_VBLANK)) { + if (power_well->id == SKL_DISP_PW_DC_OFF) + spin_lock(&power_well->dc_off.lock); + + intel_power_well_put(dev_priv, power_well); + + if (power_well->id == SKL_DISP_PW_DC_OFF) + spin_unlock(&power_well->dc_off.lock); + } + + intel_runtime_pm_put(dev_priv); +} + static void __intel_display_power_get_domain(struct drm_i915_private *dev_priv, enum intel_display_power_domain domain) @@ -1448,9 +1529,16 @@ __intel_display_power_get_domain(struct drm_i915_private *dev_priv, struct i915_power_domains *power_domains = &dev_priv->power_domains; struct i915_power_well *power_well; - for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain)) + for_each_power_domain_well(dev_priv, 
power_well, BIT_ULL(domain)) { + if (power_well->id == SKL_DISP_PW_DC_OFF) + spin_lock(&power_well->dc_off.lock); + intel_power_well_get(dev_priv, power_well); + if (power_well->id == SKL_DISP_PW_DC_OFF) + spin_unlock(&power_well->dc_off.lock); + } + atomic_inc(&power_domains->domain_use_count[domain]); } @@ -1541,9 +1629,16 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, "Use count on domain %s was already zero\n", intel_display_power_domain_str(domain)); - for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain)) + for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain)) { + if (power_well->id == SKL_DISP_PW_DC_OFF) + spin_lock(&power_well->dc_off.lock); + intel_power_well_put(dev_priv, power_well); + if (power_well->id == SKL_DISP_PW_DC_OFF) + spin_unlock(&power_well->dc_off.lock); + } + mutex_unlock(&power_domains->lock); intel_runtime_pm_put(dev_priv); @@ -1706,6 +1801,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, SKL_DISPLAY_POWERWELL_2_POWER_DOMAINS | \ BIT_ULL(POWER_DOMAIN_MODESET) | \ BIT_ULL(POWER_DOMAIN_AUX_A) | \ + BIT_ULL(POWER_DOMAIN_VBLANK) | \ BIT_ULL(POWER_DOMAIN_INIT)) #define BXT_DISPLAY_POWERWELL_2_POWER_DOMAINS ( \ @@ -2342,6 +2438,9 @@ static struct i915_power_well cnl_power_wells[] = { .domains = CNL_DISPLAY_DC_OFF_POWER_DOMAINS, .ops = &gen9_dc_off_power_well_ops, .id = SKL_DISP_PW_DC_OFF, + { + .dc_off.was_disabled = false, + }, }, { .name = "power well 2", @@ -2470,6 +2569,7 @@ static void assert_power_well_ids_unique(struct drm_i915_private *dev_priv) int intel_power_domains_init(struct drm_i915_private *dev_priv) { struct i915_power_domains *power_domains = &dev_priv->power_domains; + struct i915_power_well *dc_off_pw; i915_modparams.disable_power_well = sanitize_disable_power_well_option(dev_priv, @@ -2507,6 +2607,10 @@ int intel_power_domains_init(struct drm_i915_private *dev_priv) set_power_wells(power_domains, i9xx_always_on_power_well); } + dc_off_pw = 
lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF); + if (dc_off_pw) + spin_lock_init(&dc_off_pw->dc_off.lock); + assert_power_well_ids_unique(dev_priv); return 0; @@ -2555,8 +2659,14 @@ static void intel_power_domains_sync_hw(struct drm_i915_private *dev_priv) mutex_lock(&power_domains->lock); for_each_power_well(dev_priv, power_well) { power_well->ops->sync_hw(dev_priv, power_well); + + if (power_well->id == SKL_DISP_PW_DC_OFF) + spin_lock(&power_well->dc_off.lock); + power_well->hw_enabled = power_well->ops->is_enabled(dev_priv, power_well); + if (power_well->id == SKL_DISP_PW_DC_OFF) + spin_unlock(&power_well->dc_off.lock); } mutex_unlock(&power_domains->lock); } @@ -3079,6 +3189,13 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv) if (!power_well->domains) continue; + /* + * Reading SKL_DISP_PW_DC_OFF power_well->count without its + * private spinlock should be safe here as power_well->count + * gets modified only in power_well_get() and power_well_put() + * and they are not called until drm_crtc_vblank_on(). + */ + enabled = power_well->ops->is_enabled(dev_priv, power_well); if ((power_well->count || power_well->always_on) != enabled) DRM_ERROR("power well %s state mismatch (refcount %d/enabled %d)", -- 2.11.0 _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply related [flat|nested] 16+ messages in thread
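[Editorial note: the subtle part of patch 4 is the `dc_off.was_disabled` handshake: the disable path sets a flag when DC5/DC6 is entered, and the atomic _get() path tests-and-clears it under the per-well lock, telling the caller whether the frame counter may have been reset. A userspace sketch, using C11 atomics in place of the kernel spinlock; struct and function names are illustrative:]

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct dc_off_state {
	atomic_bool was_disabled; /* set when the well actually powers down */
	atomic_int count;         /* power well reference count */
};

/* Called from the disable path (DC5/DC6 entered). */
void dc_off_mark_disabled(struct dc_off_state *s)
{
	atomic_store(&s->was_disabled, true);
}

/* Non-blocking get: atomic_exchange tests and clears the flag in one
 * step, so exactly one caller observes true per disable event.  A
 * true return means the caller should restore the vblank counts. */
bool dc_off_get(struct dc_off_state *s)
{
	atomic_fetch_add(&s->count, 1);
	return atomic_exchange(&s->was_disabled, false);
}
```

The test-and-clear is why `needs_restore` is reported at most once: a second _get() after the same power-down sees the flag already cleared.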
* Re: [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts 2017-12-06 22:47 ` [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts Dhinakaran Pandiyan @ 2017-12-06 23:54 ` Rodrigo Vivi 2017-12-07 19:01 ` Pandiyan, Dhinakaran 0 siblings, 1 reply; 16+ messages in thread From: Rodrigo Vivi @ 2017-12-06 23:54 UTC (permalink / raw) To: Dhinakaran Pandiyan; +Cc: Daniel Vetter, intel-gfx, dri-devel On Wed, Dec 06, 2017 at 10:47:40PM +0000, Dhinakaran Pandiyan wrote: > When DC states are enabled and PSR is active, the hardware enters DC5/DC6 > states resulting in frame counter resets. The frame counter resets mess > up the vblank counting logic. So in order to disable DC states when > vblank interrupts are required and to disallow DC states when vblanks > interrupts are already enabled, introduce a new power domain. Since this > power domain reference needs to be acquired and released in atomic context, > the corresponding _get() and _put() methods skip the power_domain mutex. \o/ > > Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com> > --- > drivers/gpu/drm/i915/i915_drv.h | 5 ++ > drivers/gpu/drm/i915/intel_drv.h | 3 +- > drivers/gpu/drm/i915/intel_runtime_pm.c | 125 +++++++++++++++++++++++++++++++- > 3 files changed, 128 insertions(+), 5 deletions(-) > > diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h > index 18d42885205b..ba9107ec1ed1 100644 > --- a/drivers/gpu/drm/i915/i915_drv.h > +++ b/drivers/gpu/drm/i915/i915_drv.h > @@ -397,6 +397,7 @@ enum intel_display_power_domain { > POWER_DOMAIN_AUX_C, > POWER_DOMAIN_AUX_D, > POWER_DOMAIN_GMBUS, > + POWER_DOMAIN_VBLANK, > POWER_DOMAIN_MODESET, > POWER_DOMAIN_INIT, > > @@ -1475,6 +1476,10 @@ struct i915_power_well { > bool has_vga:1; > bool has_fuses:1; > } hsw; > + struct { > + spinlock_t lock; > + bool was_disabled; > + } dc_off; what about a more generic name here? 
something like + struct { + spinlock_t lock; + bool was_disabled; + } atomic; + is_atomic; //or needs_atomic so... > }; > const struct i915_power_well_ops *ops; > }; > diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h > index 30f791f89d64..93ca503f18bb 100644 > --- a/drivers/gpu/drm/i915/intel_drv.h > +++ b/drivers/gpu/drm/i915/intel_drv.h > @@ -1865,7 +1865,8 @@ void chv_phy_powergate_lanes(struct intel_encoder *encoder, > bool override, unsigned int mask); > bool chv_phy_powergate_ch(struct drm_i915_private *dev_priv, enum dpio_phy phy, > enum dpio_channel ch, bool override); > - > +bool intel_display_power_vblank_get(struct drm_i915_private *dev_priv); > +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv); > > /* intel_pm.c */ > void intel_init_clock_gating(struct drm_i915_private *dev_priv); > diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c > index f88f2c070c5f..f1807bd74242 100644 > --- a/drivers/gpu/drm/i915/intel_runtime_pm.c > +++ b/drivers/gpu/drm/i915/intel_runtime_pm.c > @@ -126,6 +126,8 @@ intel_display_power_domain_str(enum intel_display_power_domain domain) > return "AUX_D"; > case POWER_DOMAIN_GMBUS: > return "GMBUS"; > + case POWER_DOMAIN_VBLANK: > + return "VBLANK"; > case POWER_DOMAIN_INIT: > return "INIT"; > case POWER_DOMAIN_MODESET: > @@ -196,10 +198,17 @@ bool __intel_display_power_is_enabled(struct drm_i915_private *dev_priv, > if (power_well->always_on) > continue; > > - if (!power_well->hw_enabled) { > + if (power_well->id == SKL_DISP_PW_DC_OFF) > + spin_lock(&power_well->dc_off.lock); ... 
instead of a specif pw check here you would have + if (power_well->is_atomic) + spin_lock(&power_well->atomic.lock) > + > + if (!power_well->hw_enabled) > is_enabled = false; > + > + if (power_well->id == SKL_DISP_PW_DC_OFF) > + spin_unlock(&power_well->dc_off.lock); > + > + if (!is_enabled) > break; > - } > } > > return is_enabled; > @@ -724,6 +733,7 @@ static void gen9_dc_off_power_well_disable(struct drm_i915_private *dev_priv, > skl_enable_dc6(dev_priv); > else if (dev_priv->csr.allowed_dc_mask & DC_STATE_EN_UPTO_DC5) > gen9_enable_dc5(dev_priv); > + power_well->dc_off.was_disabled = true; > } > > static void i9xx_power_well_sync_hw_noop(struct drm_i915_private *dev_priv, > @@ -1441,6 +1451,77 @@ static void chv_pipe_power_well_disable(struct drm_i915_private *dev_priv, > chv_set_pipe_power_well(dev_priv, power_well, false); > } > > +/** > + * intel_display_power_vblank_get - acquire a VBLANK power domain reference atomically > + * @dev_priv: i915 device instance > + * > + * This function gets a POWER_DOMAIN_VBLANK reference without blocking and > + * returns true if the DC_OFF power well was disabled since this function was > + * called the last time. > + */ > +bool intel_display_power_vblank_get(struct drm_i915_private *dev_priv) > +{ > + struct i915_power_domains *power_domains = &dev_priv->power_domains; > + struct i915_power_well *power_well; > + bool needs_restore = false; > + > + if (!HAS_CSR(dev_priv) || !dev_priv->psr.source_ok) > + return false; > + > + /* The corresponding CRTC should be active by the time driver turns on > + * vblank interrupts, which in turn means the enabled pipe power domain > + * would have acquired the device runtime pm reference. 
> + */ > + intel_runtime_pm_get_if_in_use(dev_priv); > + > + for_each_power_domain_well(dev_priv, power_well, BIT_ULL(POWER_DOMAIN_VBLANK)) { > + if (power_well->id == SKL_DISP_PW_DC_OFF) > + spin_lock(&power_well->dc_off.lock); > + > + intel_power_well_get(dev_priv, power_well); > + > + if (power_well->id == SKL_DISP_PW_DC_OFF) { > + needs_restore = power_well->dc_off.was_disabled; > + power_well->dc_off.was_disabled = false; > + spin_unlock(&power_well->dc_off.lock); I don't understand why you need the was_disabled here and not the counter like the regular pw? But also overall I can't see how this is avoiding the mutex_lock(&power_domains->lock); in general. Is there a way to make it more generic in a way that we could define atomic pw and regular pw and we could see which one would be taking the atomic path easily? Thanks, Rodrigo. > + } > + } > + atomic_inc(&power_domains->domain_use_count[POWER_DOMAIN_VBLANK]); > + > + return needs_restore; > +} > + > +/** > + * intel_display_power_vblank_put - drop a VBLANK power domain reference atomically > + * > + * A non-blocking version intel_display_power_put() specifically to control the > + * DC_OFF power well in atomic contexts. 
> + */ > +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv) > +{ > + struct i915_power_domains *power_domains = &dev_priv->power_domains; > + struct i915_power_well *power_well; > + > + if (!HAS_CSR(dev_priv) || !dev_priv->psr.source_ok) > + return; > + > + WARN(atomic_dec_return(&power_domains->domain_use_count[POWER_DOMAIN_VBLANK]) < 0, > + "Use count on domain %s was already zero\n", > + intel_display_power_domain_str(POWER_DOMAIN_VBLANK)); > + > + for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(POWER_DOMAIN_VBLANK)) { > + if (power_well->id == SKL_DISP_PW_DC_OFF) > + spin_lock(&power_well->dc_off.lock); > + > + intel_power_well_put(dev_priv, power_well); > + > + if (power_well->id == SKL_DISP_PW_DC_OFF) > + spin_unlock(&power_well->dc_off.lock); > + } > + > + intel_runtime_pm_put(dev_priv); > +} > + > static void > __intel_display_power_get_domain(struct drm_i915_private *dev_priv, > enum intel_display_power_domain domain) > @@ -1448,9 +1529,16 @@ __intel_display_power_get_domain(struct drm_i915_private *dev_priv, > struct i915_power_domains *power_domains = &dev_priv->power_domains; > struct i915_power_well *power_well; > > - for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain)) > + for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain)) { > + if (power_well->id == SKL_DISP_PW_DC_OFF) > + spin_lock(&power_well->dc_off.lock); > + > intel_power_well_get(dev_priv, power_well); > > + if (power_well->id == SKL_DISP_PW_DC_OFF) > + spin_unlock(&power_well->dc_off.lock); > + } > + > atomic_inc(&power_domains->domain_use_count[domain]); > } > > @@ -1541,9 +1629,16 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, > "Use count on domain %s was already zero\n", > intel_display_power_domain_str(domain)); > > - for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain)) > + for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain)) { > + if (power_well->id == SKL_DISP_PW_DC_OFF) > 
+ spin_lock(&power_well->dc_off.lock); > + > intel_power_well_put(dev_priv, power_well); > > + if (power_well->id == SKL_DISP_PW_DC_OFF) > + spin_unlock(&power_well->dc_off.lock); > + } > + > mutex_unlock(&power_domains->lock); > > intel_runtime_pm_put(dev_priv); > @@ -1706,6 +1801,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, > SKL_DISPLAY_POWERWELL_2_POWER_DOMAINS | \ > BIT_ULL(POWER_DOMAIN_MODESET) | \ > BIT_ULL(POWER_DOMAIN_AUX_A) | \ > + BIT_ULL(POWER_DOMAIN_VBLANK) | \ > BIT_ULL(POWER_DOMAIN_INIT)) > > #define BXT_DISPLAY_POWERWELL_2_POWER_DOMAINS ( \ > @@ -2342,6 +2438,9 @@ static struct i915_power_well cnl_power_wells[] = { > .domains = CNL_DISPLAY_DC_OFF_POWER_DOMAINS, > .ops = &gen9_dc_off_power_well_ops, > .id = SKL_DISP_PW_DC_OFF, > + { > + .dc_off.was_disabled = false, > + }, > }, > { > .name = "power well 2", > @@ -2470,6 +2569,7 @@ static void assert_power_well_ids_unique(struct drm_i915_private *dev_priv) > int intel_power_domains_init(struct drm_i915_private *dev_priv) > { > struct i915_power_domains *power_domains = &dev_priv->power_domains; > + struct i915_power_well *dc_off_pw; > > i915_modparams.disable_power_well = > sanitize_disable_power_well_option(dev_priv, > @@ -2507,6 +2607,10 @@ int intel_power_domains_init(struct drm_i915_private *dev_priv) > set_power_wells(power_domains, i9xx_always_on_power_well); > } > > + dc_off_pw = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF); > + if (dc_off_pw) > + spin_lock_init(&dc_off_pw->dc_off.lock); > + > assert_power_well_ids_unique(dev_priv); > > return 0; > @@ -2555,8 +2659,14 @@ static void intel_power_domains_sync_hw(struct drm_i915_private *dev_priv) > mutex_lock(&power_domains->lock); > for_each_power_well(dev_priv, power_well) { > power_well->ops->sync_hw(dev_priv, power_well); > + > + if (power_well->id == SKL_DISP_PW_DC_OFF) > + spin_lock(&power_well->dc_off.lock); > + > power_well->hw_enabled = power_well->ops->is_enabled(dev_priv, > power_well); > + if (power_well->id 
== SKL_DISP_PW_DC_OFF) > + spin_unlock(&power_well->dc_off.lock); > } > mutex_unlock(&power_domains->lock); > } > @@ -3079,6 +3189,13 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv) > if (!power_well->domains) > continue; > > + /* > + * Reading SKL_DISP_PW_DC_OFF power_well->count without its > + * private spinlock should be safe here as power_well->count > + * gets modified only in power_well_get() and power_well_put() > + * and they are not called until drm_crtc_vblank_on(). > + */ > + > enabled = power_well->ops->is_enabled(dev_priv, power_well); > if ((power_well->count || power_well->always_on) != enabled) > DRM_ERROR("power well %s state mismatch (refcount %d/enabled %d)", > -- > 2.11.0 > _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply [flat|nested] 16+ messages in thread
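[Editorial note: Rodrigo's review asks for a generic per-well flag instead of repeated `power_well->id == SKL_DISP_PW_DC_OFF` checks. One way his suggestion could look, sketched in userspace C with an atomic_flag standing in for the per-well spinlock; all field and function names are hypothetical:]

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Wells that may be touched from atomic context carry is_atomic and
 * their own lock, so callers never hard-code a specific well id. */
struct power_well {
	bool is_atomic;
	atomic_flag lock; /* stand-in for the per-well spinlock */
	int count;
};

static void well_lock(struct power_well *pw)
{
	if (pw->is_atomic)
		while (atomic_flag_test_and_set(&pw->lock))
			; /* spin */
}

static void well_unlock(struct power_well *pw)
{
	if (pw->is_atomic)
		atomic_flag_clear(&pw->lock);
}

/* The get/put/is_enabled paths then become id-agnostic: */
void power_well_get(struct power_well *pw)
{
	well_lock(pw);
	pw->count++;
	well_unlock(pw);
}
```

Regular wells keep `is_atomic = false` and pay nothing; the DC_OFF well sets it and gets the same locking the patch open-codes at each call site.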
* Re: [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts 2017-12-06 23:54 ` Rodrigo Vivi @ 2017-12-07 19:01 ` Pandiyan, Dhinakaran 0 siblings, 0 replies; 16+ messages in thread From: Pandiyan, Dhinakaran @ 2017-12-07 19:01 UTC (permalink / raw) To: Vivi, Rodrigo; +Cc: daniel.vetter, intel-gfx, dri-devel On Wed, 2017-12-06 at 15:54 -0800, Rodrigo Vivi wrote: > On Wed, Dec 06, 2017 at 10:47:40PM +0000, Dhinakaran Pandiyan wrote: > > When DC states are enabled and PSR is active, the hardware enters DC5/DC6 > > states resulting in frame counter resets. The frame counter resets mess > > up the vblank counting logic. So in order to disable DC states when > > vblank interrupts are required and to disallow DC states when vblanks > > interrupts are already enabled, introduce a new power domain. Since this > > power domain reference needs to be acquired and released in atomic context, > > the corresponding _get() and _put() methods skip the power_domain mutex. > > \o/ > > > > > Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com> > > --- > > drivers/gpu/drm/i915/i915_drv.h | 5 ++ > > drivers/gpu/drm/i915/intel_drv.h | 3 +- > > drivers/gpu/drm/i915/intel_runtime_pm.c | 125 +++++++++++++++++++++++++++++++- > > 3 files changed, 128 insertions(+), 5 deletions(-) > > > > diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h > > index 18d42885205b..ba9107ec1ed1 100644 > > --- a/drivers/gpu/drm/i915/i915_drv.h > > +++ b/drivers/gpu/drm/i915/i915_drv.h > > @@ -397,6 +397,7 @@ enum intel_display_power_domain { > > POWER_DOMAIN_AUX_C, > > POWER_DOMAIN_AUX_D, > > POWER_DOMAIN_GMBUS, > > + POWER_DOMAIN_VBLANK, > > POWER_DOMAIN_MODESET, > > POWER_DOMAIN_INIT, > > > > @@ -1475,6 +1476,10 @@ struct i915_power_well { > > bool has_vga:1; > > bool has_fuses:1; > > } hsw; > > + struct { > > + spinlock_t lock; > > + bool was_disabled; > > + } dc_off; > > what about a more generic name here? 
> something like > > + struct { > + spinlock_t lock; > + bool was_disabled; > + } atomic; > + is_atomic; //or needs_atomic > > so... > > > }; > > const struct i915_power_well_ops *ops; > > }; > > diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h > > index 30f791f89d64..93ca503f18bb 100644 > > --- a/drivers/gpu/drm/i915/intel_drv.h > > +++ b/drivers/gpu/drm/i915/intel_drv.h > > @@ -1865,7 +1865,8 @@ void chv_phy_powergate_lanes(struct intel_encoder *encoder, > > bool override, unsigned int mask); > > bool chv_phy_powergate_ch(struct drm_i915_private *dev_priv, enum dpio_phy phy, > > enum dpio_channel ch, bool override); > > - > > +bool intel_display_power_vblank_get(struct drm_i915_private *dev_priv); > > +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv); > > > > /* intel_pm.c */ > > void intel_init_clock_gating(struct drm_i915_private *dev_priv); > > diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c > > index f88f2c070c5f..f1807bd74242 100644 > > --- a/drivers/gpu/drm/i915/intel_runtime_pm.c > > +++ b/drivers/gpu/drm/i915/intel_runtime_pm.c > > @@ -126,6 +126,8 @@ intel_display_power_domain_str(enum intel_display_power_domain domain) > > return "AUX_D"; > > case POWER_DOMAIN_GMBUS: > > return "GMBUS"; > > + case POWER_DOMAIN_VBLANK: > > + return "VBLANK"; > > case POWER_DOMAIN_INIT: > > return "INIT"; > > case POWER_DOMAIN_MODESET: > > @@ -196,10 +198,17 @@ bool __intel_display_power_is_enabled(struct drm_i915_private *dev_priv, > > if (power_well->always_on) > > continue; > > > > - if (!power_well->hw_enabled) { > > + if (power_well->id == SKL_DISP_PW_DC_OFF) > > + spin_lock(&power_well->dc_off.lock); > > ... 
instead of a specif pw check here you would have > > + if (power_well->is_atomic) > + spin_lock(&power_well->atomic.lock) > > > + > > + if (!power_well->hw_enabled) > > is_enabled = false; > > + > > + if (power_well->id == SKL_DISP_PW_DC_OFF) > > + spin_unlock(&power_well->dc_off.lock); > > + > > + if (!is_enabled) > > break; > > - } > > } > > > > return is_enabled; > > @@ -724,6 +733,7 @@ static void gen9_dc_off_power_well_disable(struct drm_i915_private *dev_priv, > > skl_enable_dc6(dev_priv); > > else if (dev_priv->csr.allowed_dc_mask & DC_STATE_EN_UPTO_DC5) > > gen9_enable_dc5(dev_priv); > > + power_well->dc_off.was_disabled = true; > > } > > > > static void i9xx_power_well_sync_hw_noop(struct drm_i915_private *dev_priv, > > @@ -1441,6 +1451,77 @@ static void chv_pipe_power_well_disable(struct drm_i915_private *dev_priv, > > chv_set_pipe_power_well(dev_priv, power_well, false); > > } > > > > +/** > > + * intel_display_power_vblank_get - acquire a VBLANK power domain reference atomically > > + * @dev_priv: i915 device instance > > + * > > + * This function gets a POWER_DOMAIN_VBLANK reference without blocking and > > + * returns true if the DC_OFF power well was disabled since this function was > > + * called the last time. > > + */ > > +bool intel_display_power_vblank_get(struct drm_i915_private *dev_priv) > > +{ > > + struct i915_power_domains *power_domains = &dev_priv->power_domains; > > + struct i915_power_well *power_well; > > + bool needs_restore = false; > > + > > + if (!HAS_CSR(dev_priv) || !dev_priv->psr.source_ok) > > + return false; > > + > > + /* The corresponding CRTC should be active by the time driver turns on > > + * vblank interrupts, which in turn means the enabled pipe power domain > > + * would have acquired the device runtime pm reference. 
> > + */ > > + intel_runtime_pm_get_if_in_use(dev_priv); > > + > > + for_each_power_domain_well(dev_priv, power_well, BIT_ULL(POWER_DOMAIN_VBLANK)) { > > + if (power_well->id == SKL_DISP_PW_DC_OFF) > > + spin_lock(&power_well->dc_off.lock); > > + > > + intel_power_well_get(dev_priv, power_well); > > + > > + if (power_well->id == SKL_DISP_PW_DC_OFF) { > > + needs_restore = power_well->dc_off.was_disabled; > > + power_well->dc_off.was_disabled = false; > > + spin_unlock(&power_well->dc_off.lock); > > I don't understand why you need the was_disabled here and not the counter like > the regular pw? > The return from this function is used to determine whether we need to calculate the missed vblanks. Let's say we returned true only when the power well was enabled (based on refcount), we'll end up not fixing up the vblanks in this scenario - _vblank_put() -> DC off disabled(ref count=0) <frame counter reset> _power_get(AUX_A) -> DC off enabled (ref count=1) _vblank_get() -> (ref count=2) > But also overall I can't see how this is avoiding the mutex_lock(&power_domains->lock); > in general. Is there a way to make it more generic in a way that we could define atomic pw and regular pw > and we could see which one would be taking the atomic path easily? > I did try to create a generic interface with _power_nb_get(domain) and _power_nb_put(domain), but the above problem meant that I had to write code specific to the VBLANK power domain inside the generic _power_nb_get(domain) method. > Thanks, > Rodrigo. > > > + } > > + } > > + atomic_inc(&power_domains->domain_use_count[POWER_DOMAIN_VBLANK]); > > + > > + return needs_restore; > > +} > > + > > +/** > > + * intel_display_power_vblank_put - drop a VBLANK power domain reference atomically > > + * > > + * A non-blocking version intel_display_power_put() specifically to control the > > + * DC_OFF power well in atomic contexts. 
> > + */ > > +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv) > > +{ > > + struct i915_power_domains *power_domains = &dev_priv->power_domains; > > + struct i915_power_well *power_well; > > + > > + if (!HAS_CSR(dev_priv) || !dev_priv->psr.source_ok) > > + return; > > + > > + WARN(atomic_dec_return(&power_domains->domain_use_count[POWER_DOMAIN_VBLANK]) < 0, > > + "Use count on domain %s was already zero\n", > > + intel_display_power_domain_str(POWER_DOMAIN_VBLANK)); > > + > > + for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(POWER_DOMAIN_VBLANK)) { > > + if (power_well->id == SKL_DISP_PW_DC_OFF) > > + spin_lock(&power_well->dc_off.lock); > > + > > + intel_power_well_put(dev_priv, power_well); > > + > > + if (power_well->id == SKL_DISP_PW_DC_OFF) > > + spin_unlock(&power_well->dc_off.lock); > > + } > > + > > + intel_runtime_pm_put(dev_priv); > > +} > > + > > static void > > __intel_display_power_get_domain(struct drm_i915_private *dev_priv, > > enum intel_display_power_domain domain) > > @@ -1448,9 +1529,16 @@ __intel_display_power_get_domain(struct drm_i915_private *dev_priv, > > struct i915_power_domains *power_domains = &dev_priv->power_domains; > > struct i915_power_well *power_well; > > > > - for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain)) > > + for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain)) { > > + if (power_well->id == SKL_DISP_PW_DC_OFF) > > + spin_lock(&power_well->dc_off.lock); > > + > > intel_power_well_get(dev_priv, power_well); > > > > + if (power_well->id == SKL_DISP_PW_DC_OFF) > > + spin_unlock(&power_well->dc_off.lock); > > + } > > + > > atomic_inc(&power_domains->domain_use_count[domain]); > > } > > > > @@ -1541,9 +1629,16 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, > > "Use count on domain %s was already zero\n", > > intel_display_power_domain_str(domain)); > > > > - for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain)) > > + 
for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain)) { > > + if (power_well->id == SKL_DISP_PW_DC_OFF) > > + spin_lock(&power_well->dc_off.lock); > > + > > intel_power_well_put(dev_priv, power_well); > > > > + if (power_well->id == SKL_DISP_PW_DC_OFF) > > + spin_unlock(&power_well->dc_off.lock); > > + } > > + > > mutex_unlock(&power_domains->lock); > > > > intel_runtime_pm_put(dev_priv); > > @@ -1706,6 +1801,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, > > SKL_DISPLAY_POWERWELL_2_POWER_DOMAINS | \ > > BIT_ULL(POWER_DOMAIN_MODESET) | \ > > BIT_ULL(POWER_DOMAIN_AUX_A) | \ > > + BIT_ULL(POWER_DOMAIN_VBLANK) | \ > > BIT_ULL(POWER_DOMAIN_INIT)) > > > > #define BXT_DISPLAY_POWERWELL_2_POWER_DOMAINS ( \ > > @@ -2342,6 +2438,9 @@ static struct i915_power_well cnl_power_wells[] = { > > .domains = CNL_DISPLAY_DC_OFF_POWER_DOMAINS, > > .ops = &gen9_dc_off_power_well_ops, > > .id = SKL_DISP_PW_DC_OFF, > > + { > > + .dc_off.was_disabled = false, > > + }, > > }, > > { > > .name = "power well 2", > > @@ -2470,6 +2569,7 @@ static void assert_power_well_ids_unique(struct drm_i915_private *dev_priv) > > int intel_power_domains_init(struct drm_i915_private *dev_priv) > > { > > struct i915_power_domains *power_domains = &dev_priv->power_domains; > > + struct i915_power_well *dc_off_pw; > > > > i915_modparams.disable_power_well = > > sanitize_disable_power_well_option(dev_priv, > > @@ -2507,6 +2607,10 @@ int intel_power_domains_init(struct drm_i915_private *dev_priv) > > set_power_wells(power_domains, i9xx_always_on_power_well); > > } > > > > + dc_off_pw = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF); > > + if (dc_off_pw) > > + spin_lock_init(&dc_off_pw->dc_off.lock); > > + > > assert_power_well_ids_unique(dev_priv); > > > > return 0; > > @@ -2555,8 +2659,14 @@ static void intel_power_domains_sync_hw(struct drm_i915_private *dev_priv) > > mutex_lock(&power_domains->lock); > > for_each_power_well(dev_priv, power_well) { > > 
power_well->ops->sync_hw(dev_priv, power_well); > > + > > + if (power_well->id == SKL_DISP_PW_DC_OFF) > > + spin_lock(&power_well->dc_off.lock); > > + > > power_well->hw_enabled = power_well->ops->is_enabled(dev_priv, > > power_well); > > + if (power_well->id == SKL_DISP_PW_DC_OFF) > > + spin_unlock(&power_well->dc_off.lock); > > } > > mutex_unlock(&power_domains->lock); > > } > > @@ -3079,6 +3189,13 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv) > > if (!power_well->domains) > > continue; > > > > + /* > > + * Reading SKL_DISP_PW_DC_OFF power_well->count without its > > + * private spinlock should be safe here as power_well->count > > + * gets modified only in power_well_get() and power_well_put() > > + * and they are not called until drm_crtc_vblank_on(). > > + */ > > + > > enabled = power_well->ops->is_enabled(dev_priv, power_well); > > if ((power_well->count || power_well->always_on) != enabled) > > DRM_ERROR("power well %s state mismatch (refcount %d/enabled %d)", > > -- > > 2.11.0 > > > _______________________________________________ > Intel-gfx mailing list > Intel-gfx@lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/intel-gfx _______________________________________________ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/intel-gfx ^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH 5/5] drm/i915: Use the vblank power domain disallow or disable DC states. 2017-12-06 22:47 [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan ` (2 preceding siblings ...) 2017-12-06 22:47 ` [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts Dhinakaran Pandiyan @ 2017-12-06 22:47 ` Dhinakaran Pandiyan 2017-12-06 23:16 ` ✗ Fi.CI.BAT: failure for series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Patchwork 4 siblings, 0 replies; 16+ messages in thread From: Dhinakaran Pandiyan @ 2017-12-06 22:47 UTC (permalink / raw) To: intel-gfx; +Cc: Daniel Vetter, Dhinakaran Pandiyan, dri-devel, Rodrigo Vivi Disable DC states before enabling vblank interrupts and conversely enable DC states after disabling. Since the frame counter may have got reset between disabling and enabling, use drm_crtc_vblank_restore() to compute the missed vblanks. Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com> --- drivers/gpu/drm/i915/i915_irq.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c index 7cac07db89b9..c595b934e2dc 100644 --- a/drivers/gpu/drm/i915/i915_irq.c +++ b/drivers/gpu/drm/i915/i915_irq.c @@ -2964,6 +2964,9 @@ static int gen8_enable_vblank(struct drm_device *dev, unsigned int pipe) struct drm_i915_private *dev_priv = to_i915(dev); unsigned long irqflags; + if (intel_display_power_vblank_get(dev_priv)) + drm_crtc_vblank_restore(dev, pipe); + spin_lock_irqsave(&dev_priv->irq_lock, irqflags); bdw_enable_pipe_irq(dev_priv, pipe, GEN8_PIPE_VBLANK); spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags); @@ -3015,6 +3018,7 @@ static void gen8_disable_vblank(struct drm_device *dev, unsigned int pipe) spin_lock_irqsave(&dev_priv->irq_lock, irqflags); bdw_disable_pipe_irq(dev_priv, pipe, GEN8_PIPE_VBLANK); spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags); + 
intel_display_power_vblank_put(dev_priv); } static void ibx_irq_reset(struct drm_i915_private *dev_priv) -- 2.11.0 _______________________________________________ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/intel-gfx ^ permalink raw reply related [flat|nested] 16+ messages in thread
* ✗ Fi.CI.BAT: failure for series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled. 2017-12-06 22:47 [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan ` (3 preceding siblings ...) 2017-12-06 22:47 ` [PATCH 5/5] drm/i915: Use the vblank power domain disallow or disable DC states Dhinakaran Pandiyan @ 2017-12-06 23:16 ` Patchwork 2017-12-07 18:46 ` Pandiyan, Dhinakaran ` (2 more replies) 4 siblings, 3 replies; 16+ messages in thread From: Patchwork @ 2017-12-06 23:16 UTC (permalink / raw) To: Dhinakaran Pandiyan; +Cc: intel-gfx == Series Details == Series: series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled. URL : https://patchwork.freedesktop.org/series/34996/ State : failure == Summary == Series 34996v1 series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled. https://patchwork.freedesktop.org/api/1.0/series/34996/revisions/1/mbox/ Test debugfs_test: Subgroup read_all_entries: dmesg-fail -> PASS (fi-elk-e7500) fdo#103989 +1 Test gem_exec_reloc: Subgroup basic-cpu-read-active: pass -> FAIL (fi-gdg-551) fdo#102582 +3 Test kms_pipe_crc_basic: Subgroup read-crc-pipe-a: pass -> FAIL (fi-skl-6700k) Subgroup suspend-read-crc-pipe-b: pass -> INCOMPLETE (fi-snb-2520m) fdo#103713 Test kms_psr_sink_crc: Subgroup psr_basic: pass -> DMESG-WARN (fi-skl-6600u) pass -> DMESG-WARN (fi-skl-6700hq) pass -> DMESG-WARN (fi-kbl-7560u) pass -> DMESG-WARN (fi-kbl-r) fdo#103989 https://bugs.freedesktop.org/show_bug.cgi?id=103989 fdo#102582 https://bugs.freedesktop.org/show_bug.cgi?id=102582 fdo#103713 https://bugs.freedesktop.org/show_bug.cgi?id=103713 fi-bdw-5557u total:288 pass:267 dwarn:0 dfail:0 fail:0 skip:21 time:438s fi-blb-e6850 total:288 pass:223 dwarn:1 dfail:0 fail:0 skip:64 time:385s fi-bsw-n3050 total:288 pass:242 dwarn:0 dfail:0 fail:0 skip:46 time:522s fi-bwr-2160 total:288 pass:183 dwarn:0 
dfail:0 fail:0 skip:105 time:280s fi-bxt-dsi total:288 pass:258 dwarn:0 dfail:0 fail:0 skip:30 time:505s fi-bxt-j4205 total:288 pass:259 dwarn:0 dfail:0 fail:0 skip:29 time:511s fi-byt-j1900 total:288 pass:253 dwarn:0 dfail:0 fail:0 skip:35 time:491s fi-byt-n2820 total:288 pass:249 dwarn:0 dfail:0 fail:0 skip:39 time:472s fi-elk-e7500 total:224 pass:163 dwarn:15 dfail:0 fail:0 skip:45 fi-gdg-551 total:288 pass:174 dwarn:1 dfail:0 fail:5 skip:108 time:270s fi-glk-1 total:288 pass:260 dwarn:0 dfail:0 fail:0 skip:28 time:542s fi-hsw-4770 total:288 pass:261 dwarn:0 dfail:0 fail:0 skip:27 time:385s fi-hsw-4770r total:288 pass:224 dwarn:0 dfail:0 fail:0 skip:64 time:260s fi-ivb-3520m total:288 pass:259 dwarn:0 dfail:0 fail:0 skip:29 time:484s fi-ivb-3770 total:288 pass:259 dwarn:0 dfail:0 fail:0 skip:29 time:444s fi-kbl-7560u total:288 pass:268 dwarn:1 dfail:0 fail:0 skip:19 time:526s fi-kbl-7567u total:288 pass:268 dwarn:0 dfail:0 fail:0 skip:20 time:478s fi-kbl-r total:288 pass:260 dwarn:1 dfail:0 fail:0 skip:27 time:531s fi-pnv-d510 total:288 pass:222 dwarn:1 dfail:0 fail:0 skip:65 time:595s fi-skl-6260u total:288 pass:268 dwarn:0 dfail:0 fail:0 skip:20 time:456s fi-skl-6600u total:288 pass:260 dwarn:1 dfail:0 fail:0 skip:27 time:547s fi-skl-6700hq total:288 pass:261 dwarn:1 dfail:0 fail:0 skip:26 time:561s fi-skl-6700k total:288 pass:263 dwarn:0 dfail:0 fail:1 skip:24 time:519s fi-skl-6770hq total:288 pass:268 dwarn:0 dfail:0 fail:0 skip:20 time:496s fi-snb-2520m total:245 pass:211 dwarn:0 dfail:0 fail:0 skip:33 fi-snb-2600 total:288 pass:248 dwarn:0 dfail:0 fail:0 skip:40 time:423s Blacklisted hosts: fi-cnl-y total:249 pass:224 dwarn:0 dfail:0 fail:0 skip:24 fi-glk-dsi total:288 pass:186 dwarn:1 dfail:4 fail:0 skip:97 time:397s 95f37eb3ebfd65f500e0e80c6788df200f4c2f99 drm-tip: 2017y-12m-06d-21h-01m-04s UTC integration manifest 2864dbfcc343 drm/i915: Use the vblank power domain disallow or disable DC states. 
b0c13977ae82 drm/i915: Introduce a non-blocking power domain for vblank interrupts c32242c2a8b1 drm/i915: Use an atomic_t array to track power domain use count. 6de634ba83cf drm/vblank: Restoring vblank counts after device runtime PM events. 43170fd34edb drm/vblank: Do not update vblank counts if vblanks are already disabled. == Logs == For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_7432/ _______________________________________________ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/intel-gfx ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: ✗ Fi.CI.BAT: failure for series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled. 2017-12-06 23:16 ` ✗ Fi.CI.BAT: failure for series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Patchwork @ 2017-12-07 18:46 ` Pandiyan, Dhinakaran 2017-12-07 18:46 ` Pandiyan, Dhinakaran 2017-12-07 18:47 ` Pandiyan, Dhinakaran 2 siblings, 0 replies; 16+ messages in thread From: Pandiyan, Dhinakaran @ 2017-12-07 18:46 UTC (permalink / raw) To: intel-gfx On Wed, 2017-12-06 at 23:16 +0000, Patchwork wrote: > == Series Details == > > Series: series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled. > URL : https://patchwork.freedesktop.org/series/34996/ > State : failure > > == Summary == > > Series 34996v1 series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled. > https://patchwork.freedesktop.org/api/1.0/series/34996/revisions/1/mbox/ > > Test debugfs_test: > Subgroup read_all_entries: > dmesg-fail -> PASS (fi-elk-e7500) fdo#103989 +1 > Test gem_exec_reloc: > Subgroup basic-cpu-read-active: > pass -> FAIL (fi-gdg-551) fdo#102582 +3 > Test kms_pipe_crc_basic: > Subgroup read-crc-pipe-a: > pass -> FAIL (fi-skl-6700k) > Subgroup suspend-read-crc-pipe-b: > pass -> INCOMPLETE (fi-snb-2520m) fdo#103713 > Test kms_psr_sink_crc: > Subgroup psr_basic: > pass -> DMESG-WARN (fi-skl-6600u) > pass -> DMESG-WARN (fi-skl-6700hq) > pass -> DMESG-WARN (fi-kbl-7560u) > pass -> DMESG-WARN (fi-kbl-r) Lockdep failures, I think switching to irqsave spin locks will fix these. 
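For reference, the irqsave variant would look like this around the dc_off lock (an uncompiled sketch against the patch in this thread, not a tested change):

```c
/* Sketch only: take the dc_off lock with the irqsave variant so the
 * critical section is safe from callers that already run with
 * interrupts disabled, which is what lockdep is complaining about. */
unsigned long flags;

if (power_well->id == SKL_DISP_PW_DC_OFF)
	spin_lock_irqsave(&power_well->dc_off.lock, flags);

intel_power_well_get(dev_priv, power_well);

if (power_well->id == SKL_DISP_PW_DC_OFF)
	spin_unlock_irqrestore(&power_well->dc_off.lock, flags);
```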
> > fdo#103989 https://bugs.freedesktop.org/show_bug.cgi?id=103989 > fdo#102582 https://bugs.freedesktop.org/show_bug.cgi?id=102582 > fdo#103713 https://bugs.freedesktop.org/show_bug.cgi?id=103713 > > fi-bdw-5557u total:288 pass:267 dwarn:0 dfail:0 fail:0 skip:21 time:438s > fi-blb-e6850 total:288 pass:223 dwarn:1 dfail:0 fail:0 skip:64 time:385s > fi-bsw-n3050 total:288 pass:242 dwarn:0 dfail:0 fail:0 skip:46 time:522s > fi-bwr-2160 total:288 pass:183 dwarn:0 dfail:0 fail:0 skip:105 time:280s > fi-bxt-dsi total:288 pass:258 dwarn:0 dfail:0 fail:0 skip:30 time:505s > fi-bxt-j4205 total:288 pass:259 dwarn:0 dfail:0 fail:0 skip:29 time:511s > fi-byt-j1900 total:288 pass:253 dwarn:0 dfail:0 fail:0 skip:35 time:491s > fi-byt-n2820 total:288 pass:249 dwarn:0 dfail:0 fail:0 skip:39 time:472s > fi-elk-e7500 total:224 pass:163 dwarn:15 dfail:0 fail:0 skip:45 > fi-gdg-551 total:288 pass:174 dwarn:1 dfail:0 fail:5 skip:108 time:270s > fi-glk-1 total:288 pass:260 dwarn:0 dfail:0 fail:0 skip:28 time:542s > fi-hsw-4770 total:288 pass:261 dwarn:0 dfail:0 fail:0 skip:27 time:385s > fi-hsw-4770r total:288 pass:224 dwarn:0 dfail:0 fail:0 skip:64 time:260s > fi-ivb-3520m total:288 pass:259 dwarn:0 dfail:0 fail:0 skip:29 time:484s > fi-ivb-3770 total:288 pass:259 dwarn:0 dfail:0 fail:0 skip:29 time:444s > fi-kbl-7560u total:288 pass:268 dwarn:1 dfail:0 fail:0 skip:19 time:526s > fi-kbl-7567u total:288 pass:268 dwarn:0 dfail:0 fail:0 skip:20 time:478s > fi-kbl-r total:288 pass:260 dwarn:1 dfail:0 fail:0 skip:27 time:531s > fi-pnv-d510 total:288 pass:222 dwarn:1 dfail:0 fail:0 skip:65 time:595s > fi-skl-6260u total:288 pass:268 dwarn:0 dfail:0 fail:0 skip:20 time:456s > fi-skl-6600u total:288 pass:260 dwarn:1 dfail:0 fail:0 skip:27 time:547s > fi-skl-6700hq total:288 pass:261 dwarn:1 dfail:0 fail:0 skip:26 time:561s > fi-skl-6700k total:288 pass:263 dwarn:0 dfail:0 fail:1 skip:24 time:519s > fi-skl-6770hq total:288 pass:268 dwarn:0 dfail:0 fail:0 skip:20 time:496s > fi-snb-2520m 
total:245 pass:211 dwarn:0 dfail:0 fail:0 skip:33 > fi-snb-2600 total:288 pass:248 dwarn:0 dfail:0 fail:0 skip:40 time:423s > Blacklisted hosts: > fi-cnl-y total:249 pass:224 dwarn:0 dfail:0 fail:0 skip:24 > fi-glk-dsi total:288 pass:186 dwarn:1 dfail:4 fail:0 skip:97 time:397s > > 95f37eb3ebfd65f500e0e80c6788df200f4c2f99 drm-tip: 2017y-12m-06d-21h-01m-04s UTC integration manifest > 2864dbfcc343 drm/i915: Use the vblank power domain disallow or disable DC states. > b0c13977ae82 drm/i915: Introduce a non-blocking power domain for vblank interrupts > c32242c2a8b1 drm/i915: Use an atomic_t array to track power domain use count. > 6de634ba83cf drm/vblank: Restoring vblank counts after device runtime PM events. > 43170fd34edb drm/vblank: Do not update vblank counts if vblanks are already disabled. > > == Logs == > > For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_7432/ > _______________________________________________ > Intel-gfx mailing list > Intel-gfx@lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/intel-gfx _______________________________________________ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/intel-gfx ^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled.
@ 2018-01-03 20:39 Dhinakaran Pandiyan
  2018-01-03 20:40 ` [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts Dhinakaran Pandiyan
  0 siblings, 1 reply; 16+ messages in thread
From: Dhinakaran Pandiyan @ 2018-01-03 20:39 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter, Dhinakaran Pandiyan, Rodrigo Vivi

Updating the vblank counts requires register reads, and these reads may not
return meaningful values after the vblank interrupts are disabled, as the
device may go to a low power state. An additional change would be to allow
the driver to save the vblank counts before entering a low power state, but
that's for the future.

Also, disable vblanks after reading the HW counter in the case where
_crtc_vblank_off() is disabling vblanks.

Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
---
 drivers/gpu/drm/drm_vblank.c | 23 +++++++++--------------
 1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
index 32d9bcf5be7f..7eee82c06ed8 100644
--- a/drivers/gpu/drm/drm_vblank.c
+++ b/drivers/gpu/drm/drm_vblank.c
@@ -347,23 +347,14 @@ void drm_vblank_disable_and_save(struct drm_device *dev, unsigned int pipe)
 	spin_lock_irqsave(&dev->vblank_time_lock, irqflags);

 	/*
-	 * Only disable vblank interrupts if they're enabled. This avoids
-	 * calling the ->disable_vblank() operation in atomic context with the
-	 * hardware potentially runtime suspended.
-	 */
-	if (vblank->enabled) {
-		__disable_vblank(dev, pipe);
-		vblank->enabled = false;
-	}
-
-	/*
-	 * Always update the count and timestamp to maintain the
+	 * Update the count and timestamp to maintain the
 	 * appearance that the counter has been ticking all along until
 	 * this time. This makes the count account for the entire time
 	 * between drm_crtc_vblank_on() and drm_crtc_vblank_off().
 	 */
 	drm_update_vblank_count(dev, pipe, false);
-
+	__disable_vblank(dev, pipe);
+	vblank->enabled = false;
 	spin_unlock_irqrestore(&dev->vblank_time_lock, irqflags);
 }

@@ -1122,8 +1113,12 @@ void drm_crtc_vblank_off(struct drm_crtc *crtc)
 		      pipe, vblank->enabled, vblank->inmodeset);

 	/* Avoid redundant vblank disables without previous
-	 * drm_crtc_vblank_on(). */
-	if (drm_core_check_feature(dev, DRIVER_ATOMIC) || !vblank->inmodeset)
+	 * drm_crtc_vblank_on() and only disable them if they're enabled. This
+	 * avoids calling the ->disable_vblank() operation in atomic context
+	 * with the hardware potentially runtime suspended.
+	 */
+	if ((drm_core_check_feature(dev, DRIVER_ATOMIC) || !vblank->inmodeset) &&
+	    vblank->enabled)
 		drm_vblank_disable_and_save(dev, pipe);

 	wake_up(&vblank->queue);
--
2.11.0

^ permalink raw reply related	[flat|nested] 16+ messages in thread
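The ordering change in drm_vblank_disable_and_save() (sample the hardware counter first, then disable vblanks) can be sketched with a toy model. This is illustrative C only, not the drm code: `toy_crtc`, `read_hw_counter()` and both `disable_and_save_*()` helpers are invented for the example, and a powered-down register read is modeled as returning 0.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model of a CRTC whose frame counter register no longer returns a
 * meaningful value once vblank interrupts are disabled and the device
 * has dropped to a low power state (modeled here as reading back 0). */
struct toy_crtc {
	bool enabled;          /* vblank interrupts enabled */
	uint32_t hw_frames;    /* frames the hardware actually counted */
	uint32_t saved_count;  /* software copy, i.e. vblank->count */
};

static uint32_t read_hw_counter(const struct toy_crtc *c)
{
	return c->enabled ? c->hw_frames : 0; /* powered-down read is junk */
}

/* Order after the patch: update the count while the register is still
 * meaningful, then disable vblank interrupts. */
static void disable_and_save_fixed(struct toy_crtc *c)
{
	c->saved_count = read_hw_counter(c);
	c->enabled = false;
}

/* Order before the patch: disable first, then update, losing the ticks
 * accumulated since the last update. */
static void disable_and_save_broken(struct toy_crtc *c)
{
	c->enabled = false;
	c->saved_count = read_hw_counter(c);
}
```

With the fixed ordering the saved count matches what the hardware counted; the old ordering reads the register after the (modeled) power-down and loses the accumulated frames.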
* [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts
  2018-01-03 20:39 [PATCH 1/5] " Dhinakaran Pandiyan
@ 2018-01-03 20:40 ` Dhinakaran Pandiyan
  2018-01-05  2:08   ` Rodrigo Vivi
  0 siblings, 1 reply; 16+ messages in thread
From: Dhinakaran Pandiyan @ 2018-01-03 20:40 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter, Dhinakaran Pandiyan, Rodrigo Vivi

When DC states are enabled and PSR is active, the hardware enters DC5/DC6
states, resulting in the frame counter resetting. The frame counter reset
messes up the vblank counting logic, as the diff between the new frame
counter value and the previous one is negative, and this negative diff gets
applied as an unsigned value to the vblank count. We cannot reject negative
diffs altogether because they can arise from legitimate frame counter
overflows when there is a long period with vblank disabled. So, this
approach allows the driver to notice a DC state toggle between a vblank
disable and enable and fill in the missed vblanks.

But, in order to disable DC states when vblank interrupts are required,
the DC_OFF power well has to be disabled in an atomic context. So,
introduce a new VBLANK power domain that can be acquired and released in
atomic contexts, with these changes -
1) _vblank_get() and _vblank_put() methods skip the power_domain mutex
   and use a spin lock for the DC power well.
2) power_domains->domain_use_count is converted to an atomic_t array so
   that it can be updated outside of the power domain mutex.

v3: Squash domain_use_count atomic_t conversion (Maarten)
v2: Fix deadlock by switching to an irqsave spinlock.
    Implement an atomic version of get_if_enabled.
    Modify power_domain_verify_state to check power well use count and
    enabled status atomically.
Rewrite of intel_power_well_{get,put} Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com> --- drivers/gpu/drm/i915/i915_debugfs.c | 2 +- drivers/gpu/drm/i915/i915_drv.h | 19 +++- drivers/gpu/drm/i915/intel_display.h | 1 + drivers/gpu/drm/i915/intel_drv.h | 3 + drivers/gpu/drm/i915/intel_runtime_pm.c | 195 ++++++++++++++++++++++++++++---- 5 files changed, 199 insertions(+), 21 deletions(-) diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c index d81cb2513069..5a7ce734de02 100644 --- a/drivers/gpu/drm/i915/i915_debugfs.c +++ b/drivers/gpu/drm/i915/i915_debugfs.c @@ -2746,7 +2746,7 @@ static int i915_power_domain_info(struct seq_file *m, void *unused) for_each_power_domain(power_domain, power_well->domains) seq_printf(m, " %-23s %d\n", intel_display_power_domain_str(power_domain), - power_domains->domain_use_count[power_domain]); + atomic_read(&power_domains->domain_use_count[power_domain])); } mutex_unlock(&power_domains->lock); diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index caebd5825279..61a635f03af7 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -1032,6 +1032,23 @@ struct i915_power_well { bool has_fuses:1; } hsw; }; + + /* Lock to serialize access to count, hw_enabled and ops, used for + * power wells that have supports_atomix_ctx set to True. + */ + spinlock_t lock; + + /* Indicates that the get/put methods for this power well can be called + * in atomic contexts, requires .ops to not sleep. This is valid + * only for the DC_OFF power well currently. + */ + bool supports_atomic_ctx; + + /* DC_OFF power well was disabled since the last time vblanks were + * disabled. 
+ */ + bool dc_off_disabled; + const struct i915_power_well_ops *ops; }; @@ -1045,7 +1062,7 @@ struct i915_power_domains { int power_well_count; struct mutex lock; - int domain_use_count[POWER_DOMAIN_NUM]; + atomic_t domain_use_count[POWER_DOMAIN_NUM]; struct i915_power_well *power_wells; }; diff --git a/drivers/gpu/drm/i915/intel_display.h b/drivers/gpu/drm/i915/intel_display.h index a0d2b6169361..3e9671ff6f79 100644 --- a/drivers/gpu/drm/i915/intel_display.h +++ b/drivers/gpu/drm/i915/intel_display.h @@ -172,6 +172,7 @@ enum intel_display_power_domain { POWER_DOMAIN_AUX_C, POWER_DOMAIN_AUX_D, POWER_DOMAIN_GMBUS, + POWER_DOMAIN_VBLANK, POWER_DOMAIN_MODESET, POWER_DOMAIN_GT_IRQ, POWER_DOMAIN_INIT, diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h index 30f791f89d64..164e62cb047b 100644 --- a/drivers/gpu/drm/i915/intel_drv.h +++ b/drivers/gpu/drm/i915/intel_drv.h @@ -1797,6 +1797,9 @@ bool intel_display_power_get_if_enabled(struct drm_i915_private *dev_priv, enum intel_display_power_domain domain); void intel_display_power_put(struct drm_i915_private *dev_priv, enum intel_display_power_domain domain); +void intel_display_power_vblank_get(struct drm_i915_private *dev_priv, + bool *needs_restore); +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv); static inline void assert_rpm_device_not_suspended(struct drm_i915_private *dev_priv) diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c index d758da6156a8..93b324dcc55d 100644 --- a/drivers/gpu/drm/i915/intel_runtime_pm.c +++ b/drivers/gpu/drm/i915/intel_runtime_pm.c @@ -56,6 +56,19 @@ static struct i915_power_well * lookup_power_well(struct drm_i915_private *dev_priv, enum i915_power_well_id power_well_id); +/* Optimize for the case when this is called from atomic contexts, + * although the case is unlikely. 
+ */ +#define power_well_lock(power_well, flags) do { \ + if (likely(power_well->supports_atomic_ctx)) \ + spin_lock_irqsave(&power_well->lock, flags); \ + } while (0) + +#define power_well_unlock(power_well, flags) do { \ + if (likely(power_well->supports_atomic_ctx)) \ + spin_unlock_irqrestore(&power_well->lock, flags); \ + } while (0) + const char * intel_display_power_domain_str(enum intel_display_power_domain domain) { @@ -126,6 +139,8 @@ intel_display_power_domain_str(enum intel_display_power_domain domain) return "AUX_D"; case POWER_DOMAIN_GMBUS: return "GMBUS"; + case POWER_DOMAIN_VBLANK: + return "VBLANK"; case POWER_DOMAIN_INIT: return "INIT"; case POWER_DOMAIN_MODESET: @@ -141,6 +156,9 @@ intel_display_power_domain_str(enum intel_display_power_domain domain) static void intel_power_well_enable(struct drm_i915_private *dev_priv, struct i915_power_well *power_well) { + if (power_well->supports_atomic_ctx) + assert_spin_locked(&power_well->lock); + DRM_DEBUG_KMS("enabling %s\n", power_well->name); power_well->ops->enable(dev_priv, power_well); power_well->hw_enabled = true; @@ -149,26 +167,43 @@ static void intel_power_well_enable(struct drm_i915_private *dev_priv, static void intel_power_well_disable(struct drm_i915_private *dev_priv, struct i915_power_well *power_well) { + if (power_well->supports_atomic_ctx) + assert_spin_locked(&power_well->lock); + DRM_DEBUG_KMS("disabling %s\n", power_well->name); power_well->hw_enabled = false; power_well->ops->disable(dev_priv, power_well); } -static void intel_power_well_get(struct drm_i915_private *dev_priv, +static void __intel_power_well_get(struct drm_i915_private *dev_priv, struct i915_power_well *power_well) { if (!power_well->count++) intel_power_well_enable(dev_priv, power_well); } +static void intel_power_well_get(struct drm_i915_private *dev_priv, + struct i915_power_well *power_well) +{ + unsigned long flags = 0; + + power_well_lock(power_well, flags); + __intel_power_well_get(dev_priv, power_well); + 
power_well_unlock(power_well, flags); +} + static void intel_power_well_put(struct drm_i915_private *dev_priv, struct i915_power_well *power_well) { + unsigned long flags = 0; + + power_well_lock(power_well, flags); WARN(!power_well->count, "Use count on power well %s is already zero", power_well->name); if (!--power_well->count) intel_power_well_disable(dev_priv, power_well); + power_well_unlock(power_well, flags); } /** @@ -736,6 +771,7 @@ static void gen9_dc_off_power_well_disable(struct drm_i915_private *dev_priv, skl_enable_dc6(dev_priv); else if (dev_priv->csr.allowed_dc_mask & DC_STATE_EN_UPTO_DC5) gen9_enable_dc5(dev_priv); + power_well->dc_off_disabled = true; } static void i9xx_power_well_sync_hw_noop(struct drm_i915_private *dev_priv, @@ -1453,6 +1489,58 @@ static void chv_pipe_power_well_disable(struct drm_i915_private *dev_priv, chv_set_pipe_power_well(dev_priv, power_well, false); } +void intel_display_power_vblank_get(struct drm_i915_private *dev_priv, + bool *needs_restore) +{ + struct i915_power_domains *power_domains = &dev_priv->power_domains; + struct i915_power_well *power_well; + enum intel_display_power_domain domain = POWER_DOMAIN_VBLANK; + + *needs_restore = false; + + if (!HAS_CSR(dev_priv)) + return; + + if (!(HAS_PSR(dev_priv) && dev_priv->psr.sink_support)) + return; + + intel_runtime_pm_get_noresume(dev_priv); + + for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain)) { + unsigned long flags = 0; + + power_well_lock(power_well, flags); + __intel_power_well_get(dev_priv, power_well); + *needs_restore = power_well->dc_off_disabled; + power_well->dc_off_disabled = false; + power_well_unlock(power_well, flags); + } + + atomic_inc(&power_domains->domain_use_count[domain]); +} + +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv) +{ + struct i915_power_domains *power_domains = &dev_priv->power_domains; + struct i915_power_well *power_well; + enum intel_display_power_domain domain = POWER_DOMAIN_VBLANK; + + if 
(!HAS_CSR(dev_priv)) + return; + + if (!(HAS_PSR(dev_priv) && dev_priv->psr.sink_support)) + return; + + WARN(atomic_dec_return(&power_domains->domain_use_count[domain]) < 0, + "Use count on domain %s was already zero\n", + intel_display_power_domain_str(domain)); + + for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain)) + intel_power_well_put(dev_priv, power_well); + + intel_runtime_pm_put(dev_priv); +} + static void __intel_display_power_get_domain(struct drm_i915_private *dev_priv, enum intel_display_power_domain domain) @@ -1463,7 +1551,7 @@ __intel_display_power_get_domain(struct drm_i915_private *dev_priv, for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain)) intel_power_well_get(dev_priv, power_well); - power_domains->domain_use_count[domain]++; + atomic_inc(&power_domains->domain_use_count[domain]); } /** @@ -1492,6 +1580,38 @@ void intel_display_power_get(struct drm_i915_private *dev_priv, mutex_unlock(&power_domains->lock); } +static bool dc_off_get_if_enabled(struct drm_i915_private *dev_priv, + enum intel_display_power_domain domain) +{ + struct i915_power_well *power_well; + bool is_enabled; + unsigned long flags = 0; + + power_well = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF); + if (!power_well || !(power_well->domains & domain)) + return true; + + power_well_lock(power_well, flags); + is_enabled = power_well->hw_enabled; + if (is_enabled) + __intel_power_well_get(dev_priv, power_well); + power_well_unlock(power_well, flags); + + return is_enabled; +} + +static void dc_off_put(struct drm_i915_private *dev_priv, + enum intel_display_power_domain domain) +{ + struct i915_power_well *power_well; + + power_well = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF); + if (!power_well || !(power_well->domains & domain)) + return; + + intel_power_well_put(dev_priv, power_well); +} + /** * intel_display_power_get_if_enabled - grab a reference for an enabled display power domain * @dev_priv: i915 device instance @@ -1508,20 
+1628,24 @@ bool intel_display_power_get_if_enabled(struct drm_i915_private *dev_priv, enum intel_display_power_domain domain) { struct i915_power_domains *power_domains = &dev_priv->power_domains; - bool is_enabled; + bool is_enabled = false; if (!intel_runtime_pm_get_if_in_use(dev_priv)) return false; mutex_lock(&power_domains->lock); + if (!dc_off_get_if_enabled(dev_priv, domain)) + goto out; + if (__intel_display_power_is_enabled(dev_priv, domain)) { __intel_display_power_get_domain(dev_priv, domain); is_enabled = true; - } else { - is_enabled = false; } + dc_off_put(dev_priv, domain); + +out: mutex_unlock(&power_domains->lock); if (!is_enabled) @@ -1549,10 +1673,9 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, mutex_lock(&power_domains->lock); - WARN(!power_domains->domain_use_count[domain], - "Use count on domain %s is already zero\n", + WARN(atomic_dec_return(&power_domains->domain_use_count[domain]) < 0, + "Use count on domain %s was already zero\n", intel_display_power_domain_str(domain)); - power_domains->domain_use_count[domain]--; for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain)) intel_power_well_put(dev_priv, power_well); @@ -1720,6 +1843,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, BIT_ULL(POWER_DOMAIN_GT_IRQ) | \ BIT_ULL(POWER_DOMAIN_MODESET) | \ BIT_ULL(POWER_DOMAIN_AUX_A) | \ + BIT_ULL(POWER_DOMAIN_VBLANK) | \ BIT_ULL(POWER_DOMAIN_INIT)) #define BXT_DISPLAY_POWERWELL_2_POWER_DOMAINS ( \ @@ -1743,6 +1867,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, BIT_ULL(POWER_DOMAIN_MODESET) | \ BIT_ULL(POWER_DOMAIN_AUX_A) | \ BIT_ULL(POWER_DOMAIN_GMBUS) | \ + BIT_ULL(POWER_DOMAIN_VBLANK) | \ BIT_ULL(POWER_DOMAIN_INIT)) #define BXT_DPIO_CMN_A_POWER_DOMAINS ( \ BIT_ULL(POWER_DOMAIN_PORT_DDI_A_LANES) | \ @@ -1803,6 +1928,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, BIT_ULL(POWER_DOMAIN_MODESET) | \ BIT_ULL(POWER_DOMAIN_AUX_A) | \ 
BIT_ULL(POWER_DOMAIN_GMBUS) | \ + BIT_ULL(POWER_DOMAIN_VBLANK) | \ BIT_ULL(POWER_DOMAIN_INIT)) #define CNL_DISPLAY_POWERWELL_2_POWER_DOMAINS ( \ @@ -1850,6 +1976,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, CNL_DISPLAY_POWERWELL_2_POWER_DOMAINS | \ BIT_ULL(POWER_DOMAIN_MODESET) | \ BIT_ULL(POWER_DOMAIN_AUX_A) | \ + BIT_ULL(POWER_DOMAIN_VBLANK) | \ BIT_ULL(POWER_DOMAIN_INIT)) static const struct i915_power_well_ops i9xx_always_on_power_well_ops = { @@ -2083,9 +2210,12 @@ bool intel_display_power_well_is_enabled(struct drm_i915_private *dev_priv, { struct i915_power_well *power_well; bool ret; + unsigned long flags = 0; power_well = lookup_power_well(dev_priv, power_well_id); + power_well_lock(power_well, flags); ret = power_well->ops->is_enabled(dev_priv, power_well); + power_well_unlock(power_well, flags); return ret; } @@ -2120,6 +2250,8 @@ static struct i915_power_well skl_power_wells[] = { .domains = SKL_DISPLAY_DC_OFF_POWER_DOMAINS, .ops = &gen9_dc_off_power_well_ops, .id = SKL_DISP_PW_DC_OFF, + .supports_atomic_ctx = true, + .dc_off_disabled = false, }, { .name = "power well 2", @@ -2180,6 +2312,8 @@ static struct i915_power_well bxt_power_wells[] = { .domains = BXT_DISPLAY_DC_OFF_POWER_DOMAINS, .ops = &gen9_dc_off_power_well_ops, .id = SKL_DISP_PW_DC_OFF, + .supports_atomic_ctx = true, + .dc_off_disabled = false, }, { .name = "power well 2", @@ -2235,6 +2369,8 @@ static struct i915_power_well glk_power_wells[] = { .domains = GLK_DISPLAY_DC_OFF_POWER_DOMAINS, .ops = &gen9_dc_off_power_well_ops, .id = SKL_DISP_PW_DC_OFF, + .supports_atomic_ctx = true, + .dc_off_disabled = false, }, { .name = "power well 2", @@ -2359,6 +2495,8 @@ static struct i915_power_well cnl_power_wells[] = { .domains = CNL_DISPLAY_DC_OFF_POWER_DOMAINS, .ops = &gen9_dc_off_power_well_ops, .id = SKL_DISP_PW_DC_OFF, + .supports_atomic_ctx = true, + .dc_off_disabled = false, }, { .name = "power well 2", @@ -2487,6 +2625,7 @@ static void 
assert_power_well_ids_unique(struct drm_i915_private *dev_priv) int intel_power_domains_init(struct drm_i915_private *dev_priv) { struct i915_power_domains *power_domains = &dev_priv->power_domains; + struct i915_power_well *power_well; i915_modparams.disable_power_well = sanitize_disable_power_well_option(dev_priv, @@ -2524,6 +2663,10 @@ int intel_power_domains_init(struct drm_i915_private *dev_priv) set_power_wells(power_domains, i9xx_always_on_power_well); } + for_each_power_well(dev_priv, power_well) + if (power_well->supports_atomic_ctx) + spin_lock_init(&power_well->lock); + assert_power_well_ids_unique(dev_priv); return 0; @@ -2571,9 +2714,13 @@ static void intel_power_domains_sync_hw(struct drm_i915_private *dev_priv) mutex_lock(&power_domains->lock); for_each_power_well(dev_priv, power_well) { + unsigned long flags = 0; + + power_well_lock(power_well, flags); power_well->ops->sync_hw(dev_priv, power_well); power_well->hw_enabled = power_well->ops->is_enabled(dev_priv, power_well); + power_well_unlock(power_well, flags); } mutex_unlock(&power_domains->lock); } @@ -3046,21 +3193,23 @@ void intel_power_domains_suspend(struct drm_i915_private *dev_priv) bxt_display_core_uninit(dev_priv); } -static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv) +static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv, + const int *power_well_use) { struct i915_power_domains *power_domains = &dev_priv->power_domains; struct i915_power_well *power_well; + int i = 0; for_each_power_well(dev_priv, power_well) { enum intel_display_power_domain domain; DRM_DEBUG_DRIVER("%-25s %d\n", - power_well->name, power_well->count); + power_well->name, power_well_use[i++]); for_each_power_domain(domain, power_well->domains) DRM_DEBUG_DRIVER(" %-23s %d\n", intel_display_power_domain_str(domain), - power_domains->domain_use_count[domain]); + atomic_read(&power_domains->domain_use_count[domain])); } } @@ -3079,6 +3228,7 @@ void 
intel_power_domains_verify_state(struct drm_i915_private *dev_priv) struct i915_power_domains *power_domains = &dev_priv->power_domains; struct i915_power_well *power_well; bool dump_domain_info; + int power_well_use[dev_priv->power_domains.power_well_count]; mutex_lock(&power_domains->lock); @@ -3087,6 +3237,14 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv) enum intel_display_power_domain domain; int domains_count; bool enabled; + int well_count, i = 0; + unsigned long flags = 0; + + power_well_lock(power_well, flags); + well_count = power_well->count; + enabled = power_well->ops->is_enabled(dev_priv, power_well); + power_well_unlock(power_well, flags); + power_well_use[i++] = well_count; /* * Power wells not belonging to any domain (like the MISC_IO @@ -3096,20 +3254,19 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv) if (!power_well->domains) continue; - enabled = power_well->ops->is_enabled(dev_priv, power_well); - if ((power_well->count || power_well->always_on) != enabled) + + if ((well_count || power_well->always_on) != enabled) DRM_ERROR("power well %s state mismatch (refcount %d/enabled %d)", - power_well->name, power_well->count, enabled); + power_well->name, well_count, enabled); domains_count = 0; for_each_power_domain(domain, power_well->domains) - domains_count += power_domains->domain_use_count[domain]; + domains_count += atomic_read(&power_domains->domain_use_count[domain]); - if (power_well->count != domains_count) { + if (well_count != domains_count) { DRM_ERROR("power well %s refcount/domain refcount mismatch " "(refcount %d/domains refcount %d)\n", - power_well->name, power_well->count, - domains_count); + power_well->name, well_count, domains_count); dump_domain_info = true; } } @@ -3118,7 +3275,7 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv) static bool dumped; if (!dumped) { - intel_power_domains_dump_info(dev_priv); + intel_power_domains_dump_info(dev_priv, 
power_well_use); dumped = true; } } -- 2.11.0 _______________________________________________ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/intel-gfx ^ permalink raw reply related [flat|nested] 16+ messages in thread
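The "negative diff gets applied as an unsigned value" problem from the commit message is plain 32-bit modular arithmetic. The sketch below is illustrative only (it is not the drm_update_vblank_count() implementation); it shows why a DC5/DC6 counter reset and a legitimate counter overflow are indistinguishable from the diff alone, which is why the series tracks DC_OFF toggles instead of rejecting negative diffs outright.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model: the vblank core derives the number of frames elapsed
 * as an unsigned subtraction of two 32-bit hardware counter samples. */
static uint32_t vblank_diff(uint32_t cur_frame, uint32_t last_frame)
{
	return cur_frame - last_frame; /* wraps modulo 2^32 */
}
```

A counter reset after DC5/DC6 (e.g. last sample 100, new sample 2) produces a wrapped value of the same form as a genuine overflow near UINT32_MAX, so the core cannot filter resets out locally; only the driver knows a DC transition happened in between.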
* Re: [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts 2018-01-03 20:40 ` [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts Dhinakaran Pandiyan @ 2018-01-05 2:08 ` Rodrigo Vivi 2018-01-08 19:59 ` Pandiyan, Dhinakaran 0 siblings, 1 reply; 16+ messages in thread From: Rodrigo Vivi @ 2018-01-05 2:08 UTC (permalink / raw) To: Dhinakaran Pandiyan; +Cc: Daniel Vetter, intel-gfx On Wed, Jan 03, 2018 at 08:40:00PM +0000, Dhinakaran Pandiyan wrote: > When DC states are enabled and PSR is active, the hardware enters DC5/DC6 > states resulting in the frame counter resetting. The frame counter reset > mess up the vblank counting logic as the diff between the new frame > counter value and the previous is negative, and this negative diff gets > applied as an unsigned value to the vblank count. We cannot reject negative > diffs altogether because they can arise from legitimate frame counter > overflows when there is a long period with vblank disabled. So, this > approach allows for the driver to notice a DC state toggle between a vblank > disable and enable and fill in the missed vblanks. > > But, in order to disable DC states when vblank interrupts are required, > the DC_OFF power well has to be disabled in an atomic context. So, > introduce a new VBLANK power domain that can be acquired and released in > atomic contexts with these changes - > 1) _vblank_get() and _vblank_put() methods skip the power_domain mutex > and use a spin lock for the DC power well. > 2) power_domains->domain_use_count is converted to an atomic_t array so > that it can be updated outside of the power domain mutex. > > v3: Squash domain_use_count atomic_t conversion (Maarten) > v2: Fix deadlock by switching irqsave spinlock. > Implement atomic version of get_if_enabled. > Modify power_domain_verify_state to check power well use count and > enabled status atomically. 
> Rewrite of intel_power_well_{get,put} > > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> > Cc: Daniel Vetter <daniel.vetter@ffwll.ch> > Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> > Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> > Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com> > --- > drivers/gpu/drm/i915/i915_debugfs.c | 2 +- > drivers/gpu/drm/i915/i915_drv.h | 19 +++- > drivers/gpu/drm/i915/intel_display.h | 1 + > drivers/gpu/drm/i915/intel_drv.h | 3 + > drivers/gpu/drm/i915/intel_runtime_pm.c | 195 ++++++++++++++++++++++++++++---- > 5 files changed, 199 insertions(+), 21 deletions(-) > > diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c > index d81cb2513069..5a7ce734de02 100644 > --- a/drivers/gpu/drm/i915/i915_debugfs.c > +++ b/drivers/gpu/drm/i915/i915_debugfs.c > @@ -2746,7 +2746,7 @@ static int i915_power_domain_info(struct seq_file *m, void *unused) > for_each_power_domain(power_domain, power_well->domains) > seq_printf(m, " %-23s %d\n", > intel_display_power_domain_str(power_domain), > - power_domains->domain_use_count[power_domain]); > + atomic_read(&power_domains->domain_use_count[power_domain])); > } > > mutex_unlock(&power_domains->lock); > diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h > index caebd5825279..61a635f03af7 100644 > --- a/drivers/gpu/drm/i915/i915_drv.h > +++ b/drivers/gpu/drm/i915/i915_drv.h > @@ -1032,6 +1032,23 @@ struct i915_power_well { > bool has_fuses:1; > } hsw; > }; > + > + /* Lock to serialize access to count, hw_enabled and ops, used for > + * power wells that have supports_atomix_ctx set to True. > + */ > + spinlock_t lock; > + > + /* Indicates that the get/put methods for this power well can be called > + * in atomic contexts, requires .ops to not sleep. This is valid > + * only for the DC_OFF power well currently. 
> + */ > + bool supports_atomic_ctx; > + > + /* DC_OFF power well was disabled since the last time vblanks were > + * disabled. > + */ > + bool dc_off_disabled; > + > const struct i915_power_well_ops *ops; > }; > > @@ -1045,7 +1062,7 @@ struct i915_power_domains { > int power_well_count; > > struct mutex lock; > - int domain_use_count[POWER_DOMAIN_NUM]; > + atomic_t domain_use_count[POWER_DOMAIN_NUM]; > struct i915_power_well *power_wells; > }; > > diff --git a/drivers/gpu/drm/i915/intel_display.h b/drivers/gpu/drm/i915/intel_display.h > index a0d2b6169361..3e9671ff6f79 100644 > --- a/drivers/gpu/drm/i915/intel_display.h > +++ b/drivers/gpu/drm/i915/intel_display.h > @@ -172,6 +172,7 @@ enum intel_display_power_domain { > POWER_DOMAIN_AUX_C, > POWER_DOMAIN_AUX_D, > POWER_DOMAIN_GMBUS, > + POWER_DOMAIN_VBLANK, Maybe we could start a new category of atomic domains and on interations that makes sense go over both or making the domains in an array with is_atomic bool in case we can transform the atomic get,put to be really generic or into .enable .disable function pointers. > POWER_DOMAIN_MODESET, > POWER_DOMAIN_GT_IRQ, > POWER_DOMAIN_INIT, > diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h > index 30f791f89d64..164e62cb047b 100644 > --- a/drivers/gpu/drm/i915/intel_drv.h > +++ b/drivers/gpu/drm/i915/intel_drv.h > @@ -1797,6 +1797,9 @@ bool intel_display_power_get_if_enabled(struct drm_i915_private *dev_priv, > enum intel_display_power_domain domain); > void intel_display_power_put(struct drm_i915_private *dev_priv, > enum intel_display_power_domain domain); > +void intel_display_power_vblank_get(struct drm_i915_private *dev_priv, > + bool *needs_restore); can we dump the needs_restore and always restore the counter? if so, we could use regular intel_power_domain_{get,put} functions and built the generic atomic inside it. 
> +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv); > > static inline void > assert_rpm_device_not_suspended(struct drm_i915_private *dev_priv) > diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c > index d758da6156a8..93b324dcc55d 100644 > --- a/drivers/gpu/drm/i915/intel_runtime_pm.c > +++ b/drivers/gpu/drm/i915/intel_runtime_pm.c > @@ -56,6 +56,19 @@ static struct i915_power_well * > lookup_power_well(struct drm_i915_private *dev_priv, > enum i915_power_well_id power_well_id); > > +/* Optimize for the case when this is called from atomic contexts, > + * although the case is unlikely. > + */ > +#define power_well_lock(power_well, flags) do { \ > + if (likely(power_well->supports_atomic_ctx)) \ > + spin_lock_irqsave(&power_well->lock, flags); \ > + } while (0) > + > +#define power_well_unlock(power_well, flags) do { \ > + if (likely(power_well->supports_atomic_ctx)) \ > + spin_unlock_irqrestore(&power_well->lock, flags); \ > + } while (0) > + > const char * > intel_display_power_domain_str(enum intel_display_power_domain domain) > { > @@ -126,6 +139,8 @@ intel_display_power_domain_str(enum intel_display_power_domain domain) > return "AUX_D"; > case POWER_DOMAIN_GMBUS: > return "GMBUS"; > + case POWER_DOMAIN_VBLANK: > + return "VBLANK"; > case POWER_DOMAIN_INIT: > return "INIT"; > case POWER_DOMAIN_MODESET: > @@ -141,6 +156,9 @@ intel_display_power_domain_str(enum intel_display_power_domain domain) > static void intel_power_well_enable(struct drm_i915_private *dev_priv, > struct i915_power_well *power_well) > { > + if (power_well->supports_atomic_ctx) > + assert_spin_locked(&power_well->lock); > + > DRM_DEBUG_KMS("enabling %s\n", power_well->name); > power_well->ops->enable(dev_priv, power_well); > power_well->hw_enabled = true; > @@ -149,26 +167,43 @@ static void intel_power_well_enable(struct drm_i915_private *dev_priv, > static void intel_power_well_disable(struct drm_i915_private *dev_priv, > 
struct i915_power_well *power_well) > { > + if (power_well->supports_atomic_ctx) > + assert_spin_locked(&power_well->lock); > + > DRM_DEBUG_KMS("disabling %s\n", power_well->name); > power_well->hw_enabled = false; > power_well->ops->disable(dev_priv, power_well); > } > > -static void intel_power_well_get(struct drm_i915_private *dev_priv, > +static void __intel_power_well_get(struct drm_i915_private *dev_priv, > struct i915_power_well *power_well) > { > if (!power_well->count++) > intel_power_well_enable(dev_priv, power_well); > } > > +static void intel_power_well_get(struct drm_i915_private *dev_priv, > + struct i915_power_well *power_well) > +{ > + unsigned long flags = 0; > + > + power_well_lock(power_well, flags); > + __intel_power_well_get(dev_priv, power_well); > + power_well_unlock(power_well, flags); > +} > + > static void intel_power_well_put(struct drm_i915_private *dev_priv, > struct i915_power_well *power_well) > { > + unsigned long flags = 0; > + > + power_well_lock(power_well, flags); > WARN(!power_well->count, "Use count on power well %s is already zero", > power_well->name); > > if (!--power_well->count) > intel_power_well_disable(dev_priv, power_well); > + power_well_unlock(power_well, flags); > } > > /** > @@ -736,6 +771,7 @@ static void gen9_dc_off_power_well_disable(struct drm_i915_private *dev_priv, > skl_enable_dc6(dev_priv); > else if (dev_priv->csr.allowed_dc_mask & DC_STATE_EN_UPTO_DC5) > gen9_enable_dc5(dev_priv); > + power_well->dc_off_disabled = true; > } > > static void i9xx_power_well_sync_hw_noop(struct drm_i915_private *dev_priv, > @@ -1453,6 +1489,58 @@ static void chv_pipe_power_well_disable(struct drm_i915_private *dev_priv, > chv_set_pipe_power_well(dev_priv, power_well, false); > } > > +void intel_display_power_vblank_get(struct drm_i915_private *dev_priv, > + bool *needs_restore) > +{ > + struct i915_power_domains *power_domains = &dev_priv->power_domains; > + struct i915_power_well *power_well; > + enum 
intel_display_power_domain domain = POWER_DOMAIN_VBLANK; > + > + *needs_restore = false; > + > + if (!HAS_CSR(dev_priv)) > + return; > + > + if (!(HAS_PSR(dev_priv) && dev_priv->psr.sink_support)) > + return; > + > + intel_runtime_pm_get_noresume(dev_priv); > + > + for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain)) { > + unsigned long flags = 0; > + > + power_well_lock(power_well, flags); > + __intel_power_well_get(dev_priv, power_well); > + *needs_restore = power_well->dc_off_disabled; > + power_well->dc_off_disabled = false; It seems that by always restoring we don't need to add this specific dc_off_disabled variable inside the generic power well struct. > + power_well_unlock(power_well, flags); > + } > + > + atomic_inc(&power_domains->domain_use_count[domain]); > +} > + > +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv) > +{ > + struct i915_power_domains *power_domains = &dev_priv->power_domains; > + struct i915_power_well *power_well; > + enum intel_display_power_domain domain = POWER_DOMAIN_VBLANK; > + > + if (!HAS_CSR(dev_priv)) > + return; > + > + if (!(HAS_PSR(dev_priv) && dev_priv->psr.sink_support)) > + return; Can we remove these checks and do this for any vblank domain? I believe this kind of association belongs only in the domain<->well association and should not be a feature check here. > + > + WARN(atomic_dec_return(&power_domains->domain_use_count[domain]) < 0, > + "Use count on domain %s was already zero\n", > + intel_display_power_domain_str(domain)); Is the atomic safe enough here, or do we need a spinlock?
> + > + for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain)) > + intel_power_well_put(dev_priv, power_well); > + > + intel_runtime_pm_put(dev_priv); > +} > + > static void > __intel_display_power_get_domain(struct drm_i915_private *dev_priv, > enum intel_display_power_domain domain) > @@ -1463,7 +1551,7 @@ __intel_display_power_get_domain(struct drm_i915_private *dev_priv, > for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain)) > intel_power_well_get(dev_priv, power_well); > > - power_domains->domain_use_count[domain]++; > + atomic_inc(&power_domains->domain_use_count[domain]); > } > > /** > @@ -1492,6 +1580,38 @@ void intel_display_power_get(struct drm_i915_private *dev_priv, > mutex_unlock(&power_domains->lock); > } > > +static bool dc_off_get_if_enabled(struct drm_i915_private *dev_priv, > + enum intel_display_power_domain domain) > +{ > + struct i915_power_well *power_well; > + bool is_enabled; > + unsigned long flags = 0; > + > + power_well = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF); > + if (!power_well || !(power_well->domains & domain)) > + return true; > + > + power_well_lock(power_well, flags); > + is_enabled = power_well->hw_enabled; > + if (is_enabled) > + __intel_power_well_get(dev_priv, power_well); > + power_well_unlock(power_well, flags); > + > + return is_enabled; > +} > + > +static void dc_off_put(struct drm_i915_private *dev_priv, > + enum intel_display_power_domain domain) > +{ > + struct i915_power_well *power_well; > + > + power_well = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF); > + if (!power_well || !(power_well->domains & domain)) > + return; > + > + intel_power_well_put(dev_priv, power_well); > +} > + > /** > * intel_display_power_get_if_enabled - grab a reference for an enabled display power domain > * @dev_priv: i915 device instance > @@ -1508,20 +1628,24 @@ bool intel_display_power_get_if_enabled(struct drm_i915_private *dev_priv, > enum intel_display_power_domain domain) > { > struct 
i915_power_domains *power_domains = &dev_priv->power_domains; > - bool is_enabled; > + bool is_enabled = false; > > if (!intel_runtime_pm_get_if_in_use(dev_priv)) > return false; > > mutex_lock(&power_domains->lock); > > + if (!dc_off_get_if_enabled(dev_priv, domain)) > + goto out; > + > if (__intel_display_power_is_enabled(dev_priv, domain)) { > __intel_display_power_get_domain(dev_priv, domain); > is_enabled = true; > - } else { > - is_enabled = false; > } > > + dc_off_put(dev_priv, domain); > + > +out: > mutex_unlock(&power_domains->lock); > > if (!is_enabled) > @@ -1549,10 +1673,9 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, > > mutex_lock(&power_domains->lock); > > - WARN(!power_domains->domain_use_count[domain], > - "Use count on domain %s is already zero\n", > + WARN(atomic_dec_return(&power_domains->domain_use_count[domain]) < 0, > + "Use count on domain %s was already zero\n", > intel_display_power_domain_str(domain)); > - power_domains->domain_use_count[domain]--; > > for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain)) > intel_power_well_put(dev_priv, power_well); > @@ -1720,6 +1843,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, > BIT_ULL(POWER_DOMAIN_GT_IRQ) | \ > BIT_ULL(POWER_DOMAIN_MODESET) | \ > BIT_ULL(POWER_DOMAIN_AUX_A) | \ > + BIT_ULL(POWER_DOMAIN_VBLANK) | \ > BIT_ULL(POWER_DOMAIN_INIT)) > > #define BXT_DISPLAY_POWERWELL_2_POWER_DOMAINS ( \ > @@ -1743,6 +1867,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, > BIT_ULL(POWER_DOMAIN_MODESET) | \ > BIT_ULL(POWER_DOMAIN_AUX_A) | \ > BIT_ULL(POWER_DOMAIN_GMBUS) | \ > + BIT_ULL(POWER_DOMAIN_VBLANK) | \ > BIT_ULL(POWER_DOMAIN_INIT)) > #define BXT_DPIO_CMN_A_POWER_DOMAINS ( \ > BIT_ULL(POWER_DOMAIN_PORT_DDI_A_LANES) | \ > @@ -1803,6 +1928,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, > BIT_ULL(POWER_DOMAIN_MODESET) | \ > BIT_ULL(POWER_DOMAIN_AUX_A) | \ > BIT_ULL(POWER_DOMAIN_GMBUS) | \ > + 
BIT_ULL(POWER_DOMAIN_VBLANK) | \ > BIT_ULL(POWER_DOMAIN_INIT)) > > #define CNL_DISPLAY_POWERWELL_2_POWER_DOMAINS ( \ > @@ -1850,6 +1976,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, > CNL_DISPLAY_POWERWELL_2_POWER_DOMAINS | \ > BIT_ULL(POWER_DOMAIN_MODESET) | \ > BIT_ULL(POWER_DOMAIN_AUX_A) | \ > + BIT_ULL(POWER_DOMAIN_VBLANK) | \ > BIT_ULL(POWER_DOMAIN_INIT)) > > static const struct i915_power_well_ops i9xx_always_on_power_well_ops = { > @@ -2083,9 +2210,12 @@ bool intel_display_power_well_is_enabled(struct drm_i915_private *dev_priv, > { > struct i915_power_well *power_well; > bool ret; > + unsigned long flags = 0; > > power_well = lookup_power_well(dev_priv, power_well_id); > + power_well_lock(power_well, flags); > ret = power_well->ops->is_enabled(dev_priv, power_well); > + power_well_unlock(power_well, flags); > > return ret; > } > @@ -2120,6 +2250,8 @@ static struct i915_power_well skl_power_wells[] = { > .domains = SKL_DISPLAY_DC_OFF_POWER_DOMAINS, > .ops = &gen9_dc_off_power_well_ops, > .id = SKL_DISP_PW_DC_OFF, > + .supports_atomic_ctx = true, > + .dc_off_disabled = false, > }, > { > .name = "power well 2", > @@ -2180,6 +2312,8 @@ static struct i915_power_well bxt_power_wells[] = { > .domains = BXT_DISPLAY_DC_OFF_POWER_DOMAINS, > .ops = &gen9_dc_off_power_well_ops, > .id = SKL_DISP_PW_DC_OFF, > + .supports_atomic_ctx = true, > + .dc_off_disabled = false, > }, > { > .name = "power well 2", > @@ -2235,6 +2369,8 @@ static struct i915_power_well glk_power_wells[] = { > .domains = GLK_DISPLAY_DC_OFF_POWER_DOMAINS, > .ops = &gen9_dc_off_power_well_ops, > .id = SKL_DISP_PW_DC_OFF, > + .supports_atomic_ctx = true, > + .dc_off_disabled = false, > }, > { > .name = "power well 2", > @@ -2359,6 +2495,8 @@ static struct i915_power_well cnl_power_wells[] = { > .domains = CNL_DISPLAY_DC_OFF_POWER_DOMAINS, > .ops = &gen9_dc_off_power_well_ops, > .id = SKL_DISP_PW_DC_OFF, > + .supports_atomic_ctx = true, > + .dc_off_disabled = false, > }, > { > .name 
= "power well 2", > @@ -2487,6 +2625,7 @@ static void assert_power_well_ids_unique(struct drm_i915_private *dev_priv) > int intel_power_domains_init(struct drm_i915_private *dev_priv) > { > struct i915_power_domains *power_domains = &dev_priv->power_domains; > + struct i915_power_well *power_well; > > i915_modparams.disable_power_well = > sanitize_disable_power_well_option(dev_priv, > @@ -2524,6 +2663,10 @@ int intel_power_domains_init(struct drm_i915_private *dev_priv) > set_power_wells(power_domains, i9xx_always_on_power_well); > } > > + for_each_power_well(dev_priv, power_well) > + if (power_well->supports_atomic_ctx) > + spin_lock_init(&power_well->lock); > + > assert_power_well_ids_unique(dev_priv); > > return 0; > @@ -2571,9 +2714,13 @@ static void intel_power_domains_sync_hw(struct drm_i915_private *dev_priv) > > mutex_lock(&power_domains->lock); > for_each_power_well(dev_priv, power_well) { > + unsigned long flags = 0; > + > + power_well_lock(power_well, flags); > power_well->ops->sync_hw(dev_priv, power_well); > power_well->hw_enabled = power_well->ops->is_enabled(dev_priv, > power_well); > + power_well_unlock(power_well, flags); > } > mutex_unlock(&power_domains->lock); > } > @@ -3046,21 +3193,23 @@ void intel_power_domains_suspend(struct drm_i915_private *dev_priv) > bxt_display_core_uninit(dev_priv); > } > > -static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv) > +static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv, > + const int *power_well_use) > { > struct i915_power_domains *power_domains = &dev_priv->power_domains; > struct i915_power_well *power_well; > + int i = 0; > > for_each_power_well(dev_priv, power_well) { > enum intel_display_power_domain domain; > > DRM_DEBUG_DRIVER("%-25s %d\n", > - power_well->name, power_well->count); > + power_well->name, power_well_use[i++]); > > for_each_power_domain(domain, power_well->domains) > DRM_DEBUG_DRIVER(" %-23s %d\n", > 
intel_display_power_domain_str(domain), > - power_domains->domain_use_count[domain]); > + atomic_read(&power_domains->domain_use_count[domain])); > } > } > > @@ -3079,6 +3228,7 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv) > struct i915_power_domains *power_domains = &dev_priv->power_domains; > struct i915_power_well *power_well; > bool dump_domain_info; > + int power_well_use[dev_priv->power_domains.power_well_count]; > > mutex_lock(&power_domains->lock); > > @@ -3087,6 +3237,14 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv) > enum intel_display_power_domain domain; > int domains_count; > bool enabled; > + int well_count, i = 0; > + unsigned long flags = 0; > + > + power_well_lock(power_well, flags); > + well_count = power_well->count; > + enabled = power_well->ops->is_enabled(dev_priv, power_well); > + power_well_unlock(power_well, flags); > + power_well_use[i++] = well_count; > > /* > * Power wells not belonging to any domain (like the MISC_IO > @@ -3096,20 +3254,19 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv) > if (!power_well->domains) > continue; > > - enabled = power_well->ops->is_enabled(dev_priv, power_well); > - if ((power_well->count || power_well->always_on) != enabled) > + > + if ((well_count || power_well->always_on) != enabled) > DRM_ERROR("power well %s state mismatch (refcount %d/enabled %d)", > - power_well->name, power_well->count, enabled); > + power_well->name, well_count, enabled); > > domains_count = 0; > for_each_power_domain(domain, power_well->domains) > - domains_count += power_domains->domain_use_count[domain]; > + domains_count += atomic_read(&power_domains->domain_use_count[domain]); > > - if (power_well->count != domains_count) { > + if (well_count != domains_count) { > DRM_ERROR("power well %s refcount/domain refcount mismatch " > "(refcount %d/domains refcount %d)\n", > - power_well->name, power_well->count, > - domains_count); > + 
power_well->name, well_count, domains_count); > dump_domain_info = true; > } > } > @@ -3118,7 +3275,7 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv) > static bool dumped; > > if (!dumped) { > - intel_power_domains_dump_info(dev_priv); > + intel_power_domains_dump_info(dev_priv, power_well_use); > dumped = true; > } > } > -- > 2.11.0 > I will probably have more comments later, but I am just doing a brain dump now since I ended up forgetting to write yesterday... The approach here is good in general and much better than the pre/post hooks. But I believe we can take a more generic approach here rather than deviating from the initial power well and domain design. Thanks, Rodrigo.
* Re: [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts 2018-01-05 2:08 ` Rodrigo Vivi @ 2018-01-08 19:59 ` Pandiyan, Dhinakaran 2018-01-09 9:38 ` Maarten Lankhorst 0 siblings, 1 reply; 16+ messages in thread From: Pandiyan, Dhinakaran @ 2018-01-08 19:59 UTC (permalink / raw) To: Vivi, Rodrigo; +Cc: daniel.vetter, intel-gfx, Zanoni, Paulo R On Thu, 2018-01-04 at 18:08 -0800, Rodrigo Vivi wrote: > On Wed, Jan 03, 2018 at 08:40:00PM +0000, Dhinakaran Pandiyan wrote: > > When DC states are enabled and PSR is active, the hardware enters DC5/DC6 > > states, resetting the frame counter. The frame counter reset > > messes up the vblank counting logic, as the diff between the new frame > > counter value and the previous one is negative, and this negative diff gets > > applied as an unsigned value to the vblank count. We cannot reject negative > > diffs altogether because they can arise from legitimate frame counter > > overflows when there is a long period with vblank disabled. So, this > > approach allows the driver to notice a DC state toggle between a vblank > > disable and enable and fill in the missed vblanks. > > > > But, in order to disable DC states when vblank interrupts are required, > > the DC_OFF power well has to be enabled in an atomic context. So, > > introduce a new VBLANK power domain that can be acquired and released in > > atomic contexts with these changes - > > 1) _vblank_get() and _vblank_put() methods skip the power_domain mutex > > and use a spin lock for the DC power well. > > 2) power_domains->domain_use_count is converted to an atomic_t array so > > that it can be updated outside of the power domain mutex. > > > > v3: Squash domain_use_count atomic_t conversion (Maarten) > > v2: Fix deadlock by switching to an irqsave spinlock. > > Implement atomic version of get_if_enabled. > > Modify power_domain_verify_state to check power well use count and > > enabled status atomically.
> > Rewrite of intel_power_well_{get,put} > > > > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> > > Cc: Daniel Vetter <daniel.vetter@ffwll.ch> > > Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> > > Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> > > Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com> > > --- > > drivers/gpu/drm/i915/i915_debugfs.c | 2 +- > > drivers/gpu/drm/i915/i915_drv.h | 19 +++- > > drivers/gpu/drm/i915/intel_display.h | 1 + > > drivers/gpu/drm/i915/intel_drv.h | 3 + > > drivers/gpu/drm/i915/intel_runtime_pm.c | 195 ++++++++++++++++++++++++++++---- > > 5 files changed, 199 insertions(+), 21 deletions(-) > > > > diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c > > index d81cb2513069..5a7ce734de02 100644 > > --- a/drivers/gpu/drm/i915/i915_debugfs.c > > +++ b/drivers/gpu/drm/i915/i915_debugfs.c > > @@ -2746,7 +2746,7 @@ static int i915_power_domain_info(struct seq_file *m, void *unused) > > for_each_power_domain(power_domain, power_well->domains) > > seq_printf(m, " %-23s %d\n", > > intel_display_power_domain_str(power_domain), > > - power_domains->domain_use_count[power_domain]); > > + atomic_read(&power_domains->domain_use_count[power_domain])); > > } > > > > mutex_unlock(&power_domains->lock); > > diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h > > index caebd5825279..61a635f03af7 100644 > > --- a/drivers/gpu/drm/i915/i915_drv.h > > +++ b/drivers/gpu/drm/i915/i915_drv.h > > @@ -1032,6 +1032,23 @@ struct i915_power_well { > > bool has_fuses:1; > > } hsw; > > }; > > + > > + /* Lock to serialize access to count, hw_enabled and ops; used for > > + * power wells that have supports_atomic_ctx set to true. > > + */ > > + spinlock_t lock; > > + > > + /* Indicates that the get/put methods for this power well can be called > > + * in atomic contexts; requires .ops to not sleep. This is valid > > + * only for the DC_OFF power well currently.
> > + */ > > + bool supports_atomic_ctx; > > + > > + /* DC_OFF power well was disabled since the last time vblanks were > > + * disabled. > > + */ > > + bool dc_off_disabled; > > + > > const struct i915_power_well_ops *ops; > > }; > > > > @@ -1045,7 +1062,7 @@ struct i915_power_domains { > > int power_well_count; > > > > struct mutex lock; > > - int domain_use_count[POWER_DOMAIN_NUM]; > > + atomic_t domain_use_count[POWER_DOMAIN_NUM]; > > struct i915_power_well *power_wells; > > }; > > > > diff --git a/drivers/gpu/drm/i915/intel_display.h b/drivers/gpu/drm/i915/intel_display.h > > index a0d2b6169361..3e9671ff6f79 100644 > > --- a/drivers/gpu/drm/i915/intel_display.h > > +++ b/drivers/gpu/drm/i915/intel_display.h > > @@ -172,6 +172,7 @@ enum intel_display_power_domain { > > POWER_DOMAIN_AUX_C, > > POWER_DOMAIN_AUX_D, > > POWER_DOMAIN_GMBUS, > > + POWER_DOMAIN_VBLANK, > > Maybe we could start a new category of atomic domains and, on iterations where it makes sense, go over both, > or make the domains an array with an is_atomic bool in case we can transform the atomic get/put to > be really generic or into .enable/.disable function pointers. The generic interfaces we've discussed until now to get/put the vblank power domain atomically are still not generic enough. We might as well accept DC_OFF as a special case and deal with it as such - there is no other power well that we need to enable/disable atomically. We can always rewrite the code in the future if a better idea comes up.
> > > POWER_DOMAIN_MODESET, > > POWER_DOMAIN_GT_IRQ, > > POWER_DOMAIN_INIT, > > diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h > > index 30f791f89d64..164e62cb047b 100644 > > --- a/drivers/gpu/drm/i915/intel_drv.h > > +++ b/drivers/gpu/drm/i915/intel_drv.h > > @@ -1797,6 +1797,9 @@ bool intel_display_power_get_if_enabled(struct drm_i915_private *dev_priv, > > enum intel_display_power_domain domain); > > void intel_display_power_put(struct drm_i915_private *dev_priv, > > enum intel_display_power_domain domain); > > +void intel_display_power_vblank_get(struct drm_i915_private *dev_priv, > > + bool *needs_restore); > > can we dump the needs_restore and always restore the counter? I checked this, seems possible. > if so, we could use regular intel_power_domain_{get,put} functions and built the generic atomic inside it. > > > +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv); > > > > static inline void > > assert_rpm_device_not_suspended(struct drm_i915_private *dev_priv) > > diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c > > index d758da6156a8..93b324dcc55d 100644 > > --- a/drivers/gpu/drm/i915/intel_runtime_pm.c > > +++ b/drivers/gpu/drm/i915/intel_runtime_pm.c > > @@ -56,6 +56,19 @@ static struct i915_power_well * > > lookup_power_well(struct drm_i915_private *dev_priv, > > enum i915_power_well_id power_well_id); > > > > +/* Optimize for the case when this is called from atomic contexts, > > + * although the case is unlikely. 
> > + */ > > +#define power_well_lock(power_well, flags) do { \ > > + if (likely(power_well->supports_atomic_ctx)) \ > > + spin_lock_irqsave(&power_well->lock, flags); \ > > + } while (0) > > + > > +#define power_well_unlock(power_well, flags) do { \ > > + if (likely(power_well->supports_atomic_ctx)) \ > > + spin_unlock_irqrestore(&power_well->lock, flags); \ > > + } while (0) > > + > > const char * > > intel_display_power_domain_str(enum intel_display_power_domain domain) > > { > > @@ -126,6 +139,8 @@ intel_display_power_domain_str(enum intel_display_power_domain domain) > > return "AUX_D"; > > case POWER_DOMAIN_GMBUS: > > return "GMBUS"; > > + case POWER_DOMAIN_VBLANK: > > + return "VBLANK"; > > case POWER_DOMAIN_INIT: > > return "INIT"; > > case POWER_DOMAIN_MODESET: > > @@ -141,6 +156,9 @@ intel_display_power_domain_str(enum intel_display_power_domain domain) > > static void intel_power_well_enable(struct drm_i915_private *dev_priv, > > struct i915_power_well *power_well) > > { > > + if (power_well->supports_atomic_ctx) > > + assert_spin_locked(&power_well->lock); > > + > > DRM_DEBUG_KMS("enabling %s\n", power_well->name); > > power_well->ops->enable(dev_priv, power_well); > > power_well->hw_enabled = true; > > @@ -149,26 +167,43 @@ static void intel_power_well_enable(struct drm_i915_private *dev_priv, > > static void intel_power_well_disable(struct drm_i915_private *dev_priv, > > struct i915_power_well *power_well) > > { > > + if (power_well->supports_atomic_ctx) > > + assert_spin_locked(&power_well->lock); > > + > > DRM_DEBUG_KMS("disabling %s\n", power_well->name); > > power_well->hw_enabled = false; > > power_well->ops->disable(dev_priv, power_well); > > } > > > > -static void intel_power_well_get(struct drm_i915_private *dev_priv, > > +static void __intel_power_well_get(struct drm_i915_private *dev_priv, > > struct i915_power_well *power_well) > > { > > if (!power_well->count++) > > intel_power_well_enable(dev_priv, power_well); > > } > > > > +static void 
intel_power_well_get(struct drm_i915_private *dev_priv, > > + struct i915_power_well *power_well) > > +{ > > + unsigned long flags = 0; > > + > > + power_well_lock(power_well, flags); > > + __intel_power_well_get(dev_priv, power_well); > > + power_well_unlock(power_well, flags); > > +} > > + > > static void intel_power_well_put(struct drm_i915_private *dev_priv, > > struct i915_power_well *power_well) > > { > > + unsigned long flags = 0; > > + > > + power_well_lock(power_well, flags); > > WARN(!power_well->count, "Use count on power well %s is already zero", > > power_well->name); > > > > if (!--power_well->count) > > intel_power_well_disable(dev_priv, power_well); > > + power_well_unlock(power_well, flags); > > } > > > > /** > > @@ -736,6 +771,7 @@ static void gen9_dc_off_power_well_disable(struct drm_i915_private *dev_priv, > > skl_enable_dc6(dev_priv); > > else if (dev_priv->csr.allowed_dc_mask & DC_STATE_EN_UPTO_DC5) > > gen9_enable_dc5(dev_priv); > > + power_well->dc_off_disabled = true; > > } > > > > static void i9xx_power_well_sync_hw_noop(struct drm_i915_private *dev_priv, > > @@ -1453,6 +1489,58 @@ static void chv_pipe_power_well_disable(struct drm_i915_private *dev_priv, > > chv_set_pipe_power_well(dev_priv, power_well, false); > > } > > > > +void intel_display_power_vblank_get(struct drm_i915_private *dev_priv, > > + bool *needs_restore) > > +{ > > + struct i915_power_domains *power_domains = &dev_priv->power_domains; > > + struct i915_power_well *power_well; > > + enum intel_display_power_domain domain = POWER_DOMAIN_VBLANK; > > + > > + *needs_restore = false; > > + > > + if (!HAS_CSR(dev_priv)) > > + return; > > + > > + if (!(HAS_PSR(dev_priv) && dev_priv->psr.sink_support)) > > + return; > > + > > + intel_runtime_pm_get_noresume(dev_priv); > > + > > + for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain)) { > > + unsigned long flags = 0; > > + > > + power_well_lock(power_well, flags); > > + __intel_power_well_get(dev_priv, power_well); > > 
+ *needs_restore = power_well->dc_off_disabled; > > + power_well->dc_off_disabled = false; > > It seems that by always restoring we don't need to add this specific dc_off_disabled > variable inside the generic power well struct. > > > + power_well_unlock(power_well, flags); > > + } > > + > > + atomic_inc(&power_domains->domain_use_count[domain]); > > +} > > + > > +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv) > > +{ > > + struct i915_power_domains *power_domains = &dev_priv->power_domains; > > + struct i915_power_well *power_well; > > + enum intel_display_power_domain domain = POWER_DOMAIN_VBLANK; > > + > > + if (!HAS_CSR(dev_priv)) > > + return; > > + > > + if (!(HAS_PSR(dev_priv) && dev_priv->psr.sink_support)) > > + return; > > Can we remove these checks and do this for any vblank domain? > I believe this kind of association belongs only in the domain<->well association > and should not be a feature check here. Do you recommend moving it up to the caller? I want to avoid acquiring locks, updating ref counts, etc., when they are not needed. Related to your question, if my understanding is correct, the hardware does not really enter DC5/6 without PSR when a pipe is enabled. So instead of allowing DC5/6, we might as well disable them when there is no PSR. > > > + > > + WARN(atomic_dec_return(&power_domains->domain_use_count[domain]) < 0, > > + "Use count on domain %s was already zero\n", > > + intel_display_power_domain_str(domain)); > > Is the atomic safe enough here, or do we need a spinlock? I believe it is safe; domain_use_count[x] does not affect any subsequent operation. We can get away with modifying it outside the power well spin lock.
> > > + > > + for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain)) > > + intel_power_well_put(dev_priv, power_well); > > + > > + intel_runtime_pm_put(dev_priv); > > +} > > + > > static void > > __intel_display_power_get_domain(struct drm_i915_private *dev_priv, > > enum intel_display_power_domain domain) > > @@ -1463,7 +1551,7 @@ __intel_display_power_get_domain(struct drm_i915_private *dev_priv, > > for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain)) > > intel_power_well_get(dev_priv, power_well); > > > > - power_domains->domain_use_count[domain]++; > > + atomic_inc(&power_domains->domain_use_count[domain]); > > } > > > > /** > > @@ -1492,6 +1580,38 @@ void intel_display_power_get(struct drm_i915_private *dev_priv, > > mutex_unlock(&power_domains->lock); > > } > > > > +static bool dc_off_get_if_enabled(struct drm_i915_private *dev_priv, > > + enum intel_display_power_domain domain) > > +{ > > + struct i915_power_well *power_well; > > + bool is_enabled; > > + unsigned long flags = 0; > > + > > + power_well = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF); > > + if (!power_well || !(power_well->domains & domain)) > > + return true; > > + > > + power_well_lock(power_well, flags); > > + is_enabled = power_well->hw_enabled; > > + if (is_enabled) > > + __intel_power_well_get(dev_priv, power_well); > > + power_well_unlock(power_well, flags); > > + > > + return is_enabled; > > +} > > + > > +static void dc_off_put(struct drm_i915_private *dev_priv, > > + enum intel_display_power_domain domain) > > +{ > > + struct i915_power_well *power_well; > > + > > + power_well = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF); > > + if (!power_well || !(power_well->domains & domain)) > > + return; > > + > > + intel_power_well_put(dev_priv, power_well); > > +} > > + > > /** > > * intel_display_power_get_if_enabled - grab a reference for an enabled display power domain > > * @dev_priv: i915 device instance > > @@ -1508,20 +1628,24 @@ bool 
intel_display_power_get_if_enabled(struct drm_i915_private *dev_priv, > > enum intel_display_power_domain domain) > > { > > struct i915_power_domains *power_domains = &dev_priv->power_domains; > > - bool is_enabled; > > + bool is_enabled = false; > > > > if (!intel_runtime_pm_get_if_in_use(dev_priv)) > > return false; > > > > mutex_lock(&power_domains->lock); > > > > + if (!dc_off_get_if_enabled(dev_priv, domain)) > > + goto out; > > + > > if (__intel_display_power_is_enabled(dev_priv, domain)) { > > __intel_display_power_get_domain(dev_priv, domain); > > is_enabled = true; > > - } else { > > - is_enabled = false; > > } > > > > + dc_off_put(dev_priv, domain); > > + > > +out: > > mutex_unlock(&power_domains->lock); > > > > if (!is_enabled) > > @@ -1549,10 +1673,9 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, > > > > mutex_lock(&power_domains->lock); > > > > - WARN(!power_domains->domain_use_count[domain], > > - "Use count on domain %s is already zero\n", > > + WARN(atomic_dec_return(&power_domains->domain_use_count[domain]) < 0, > > + "Use count on domain %s was already zero\n", > > intel_display_power_domain_str(domain)); > > - power_domains->domain_use_count[domain]--; > > > > for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain)) > > intel_power_well_put(dev_priv, power_well); > > @@ -1720,6 +1843,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, > > BIT_ULL(POWER_DOMAIN_GT_IRQ) | \ > > BIT_ULL(POWER_DOMAIN_MODESET) | \ > > BIT_ULL(POWER_DOMAIN_AUX_A) | \ > > + BIT_ULL(POWER_DOMAIN_VBLANK) | \ > > BIT_ULL(POWER_DOMAIN_INIT)) > > > > #define BXT_DISPLAY_POWERWELL_2_POWER_DOMAINS ( \ > > @@ -1743,6 +1867,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, > > BIT_ULL(POWER_DOMAIN_MODESET) | \ > > BIT_ULL(POWER_DOMAIN_AUX_A) | \ > > BIT_ULL(POWER_DOMAIN_GMBUS) | \ > > + BIT_ULL(POWER_DOMAIN_VBLANK) | \ > > BIT_ULL(POWER_DOMAIN_INIT)) > > #define BXT_DPIO_CMN_A_POWER_DOMAINS ( \ > > 
BIT_ULL(POWER_DOMAIN_PORT_DDI_A_LANES) | \ > > @@ -1803,6 +1928,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, > > BIT_ULL(POWER_DOMAIN_MODESET) | \ > > BIT_ULL(POWER_DOMAIN_AUX_A) | \ > > BIT_ULL(POWER_DOMAIN_GMBUS) | \ > > + BIT_ULL(POWER_DOMAIN_VBLANK) | \ > > BIT_ULL(POWER_DOMAIN_INIT)) > > > > #define CNL_DISPLAY_POWERWELL_2_POWER_DOMAINS ( \ > > @@ -1850,6 +1976,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, > > CNL_DISPLAY_POWERWELL_2_POWER_DOMAINS | \ > > BIT_ULL(POWER_DOMAIN_MODESET) | \ > > BIT_ULL(POWER_DOMAIN_AUX_A) | \ > > + BIT_ULL(POWER_DOMAIN_VBLANK) | \ > > BIT_ULL(POWER_DOMAIN_INIT)) > > > > static const struct i915_power_well_ops i9xx_always_on_power_well_ops = { > > @@ -2083,9 +2210,12 @@ bool intel_display_power_well_is_enabled(struct drm_i915_private *dev_priv, > > { > > struct i915_power_well *power_well; > > bool ret; > > + unsigned long flags = 0; > > > > power_well = lookup_power_well(dev_priv, power_well_id); > > + power_well_lock(power_well, flags); > > ret = power_well->ops->is_enabled(dev_priv, power_well); > > + power_well_unlock(power_well, flags); > > > > return ret; > > } > > @@ -2120,6 +2250,8 @@ static struct i915_power_well skl_power_wells[] = { > > .domains = SKL_DISPLAY_DC_OFF_POWER_DOMAINS, > > .ops = &gen9_dc_off_power_well_ops, > > .id = SKL_DISP_PW_DC_OFF, > > + .supports_atomic_ctx = true, > > + .dc_off_disabled = false, > > }, > > { > > .name = "power well 2", > > @@ -2180,6 +2312,8 @@ static struct i915_power_well bxt_power_wells[] = { > > .domains = BXT_DISPLAY_DC_OFF_POWER_DOMAINS, > > .ops = &gen9_dc_off_power_well_ops, > > .id = SKL_DISP_PW_DC_OFF, > > + .supports_atomic_ctx = true, > > + .dc_off_disabled = false, > > }, > > { > > .name = "power well 2", > > @@ -2235,6 +2369,8 @@ static struct i915_power_well glk_power_wells[] = { > > .domains = GLK_DISPLAY_DC_OFF_POWER_DOMAINS, > > .ops = &gen9_dc_off_power_well_ops, > > .id = SKL_DISP_PW_DC_OFF, > > + 
.supports_atomic_ctx = true, > > + .dc_off_disabled = false, > > }, > > { > > .name = "power well 2", > > @@ -2359,6 +2495,8 @@ static struct i915_power_well cnl_power_wells[] = { > > .domains = CNL_DISPLAY_DC_OFF_POWER_DOMAINS, > > .ops = &gen9_dc_off_power_well_ops, > > .id = SKL_DISP_PW_DC_OFF, > > + .supports_atomic_ctx = true, > > + .dc_off_disabled = false, > > }, > > { > > .name = "power well 2", > > @@ -2487,6 +2625,7 @@ static void assert_power_well_ids_unique(struct drm_i915_private *dev_priv) > > int intel_power_domains_init(struct drm_i915_private *dev_priv) > > { > > struct i915_power_domains *power_domains = &dev_priv->power_domains; > > + struct i915_power_well *power_well; > > > > i915_modparams.disable_power_well = > > sanitize_disable_power_well_option(dev_priv, > > @@ -2524,6 +2663,10 @@ int intel_power_domains_init(struct drm_i915_private *dev_priv) > > set_power_wells(power_domains, i9xx_always_on_power_well); > > } > > > > + for_each_power_well(dev_priv, power_well) > > + if (power_well->supports_atomic_ctx) > > + spin_lock_init(&power_well->lock); > > + > > assert_power_well_ids_unique(dev_priv); > > > > return 0; > > @@ -2571,9 +2714,13 @@ static void intel_power_domains_sync_hw(struct drm_i915_private *dev_priv) > > > > mutex_lock(&power_domains->lock); > > for_each_power_well(dev_priv, power_well) { > > + unsigned long flags = 0; > > + > > + power_well_lock(power_well, flags); > > power_well->ops->sync_hw(dev_priv, power_well); > > power_well->hw_enabled = power_well->ops->is_enabled(dev_priv, > > power_well); > > + power_well_unlock(power_well, flags); > > } > > mutex_unlock(&power_domains->lock); > > } > > @@ -3046,21 +3193,23 @@ void intel_power_domains_suspend(struct drm_i915_private *dev_priv) > > bxt_display_core_uninit(dev_priv); > > } > > > > -static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv) > > +static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv, > > + const int *power_well_use) 
> > { > > struct i915_power_domains *power_domains = &dev_priv->power_domains; > > struct i915_power_well *power_well; > > + int i = 0; > > > > for_each_power_well(dev_priv, power_well) { > > enum intel_display_power_domain domain; > > > > DRM_DEBUG_DRIVER("%-25s %d\n", > > - power_well->name, power_well->count); > > + power_well->name, power_well_use[i++]); > > > > for_each_power_domain(domain, power_well->domains) > > DRM_DEBUG_DRIVER(" %-23s %d\n", > > intel_display_power_domain_str(domain), > > - power_domains->domain_use_count[domain]); > > + atomic_read(&power_domains->domain_use_count[domain])); > > } > > } > > > > @@ -3079,6 +3228,7 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv) > > struct i915_power_domains *power_domains = &dev_priv->power_domains; > > struct i915_power_well *power_well; > > bool dump_domain_info; > > + int power_well_use[dev_priv->power_domains.power_well_count]; > > > > mutex_lock(&power_domains->lock); > > > > @@ -3087,6 +3237,14 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv) > > enum intel_display_power_domain domain; > > int domains_count; > > bool enabled; > > + int well_count, i = 0; > > + unsigned long flags = 0; > > + > > + power_well_lock(power_well, flags); > > + well_count = power_well->count; > > + enabled = power_well->ops->is_enabled(dev_priv, power_well); > > + power_well_unlock(power_well, flags); > > + power_well_use[i++] = well_count; > > > > /* > > * Power wells not belonging to any domain (like the MISC_IO > > @@ -3096,20 +3254,19 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv) > > if (!power_well->domains) > > continue; > > > > - enabled = power_well->ops->is_enabled(dev_priv, power_well); > > - if ((power_well->count || power_well->always_on) != enabled) > > + > > + if ((well_count || power_well->always_on) != enabled) > > DRM_ERROR("power well %s state mismatch (refcount %d/enabled %d)", > > - power_well->name, power_well->count, 
enabled); > > + power_well->name, well_count, enabled); > > > > domains_count = 0; > > for_each_power_domain(domain, power_well->domains) > > - domains_count += power_domains->domain_use_count[domain]; > > + domains_count += atomic_read(&power_domains->domain_use_count[domain]); > > > > - if (power_well->count != domains_count) { > > + if (well_count != domains_count) { > > DRM_ERROR("power well %s refcount/domain refcount mismatch " > > "(refcount %d/domains refcount %d)\n", > > - power_well->name, power_well->count, > > - domains_count); > > + power_well->name, well_count, domains_count); > > dump_domain_info = true; > > } > > } > > @@ -3118,7 +3275,7 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv) > > static bool dumped; > > > > if (!dumped) { > > - intel_power_domains_dump_info(dev_priv); > > + intel_power_domains_dump_info(dev_priv, power_well_use); > > dumped = true; > > } > > } > > -- > > 2.11.0 > > > > I will probably have more comments later, but just doing a brain dump now > since I end up forgetting to write yesterday... > > The approach here in general is good and much better than that pre,post hooks. > But I just believe we can do this here in a more generic approach than deviating > the initial power well and domains. I would have liked a generic approach (for display_power_{get,put}), but I think this case is special enough that making it stand out is better. > > Thanks, > Rodrigo. > _______________________________________________ > Intel-gfx mailing list > Intel-gfx@lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/intel-gfx _______________________________________________ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/intel-gfx ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts 2018-01-08 19:59 ` Pandiyan, Dhinakaran @ 2018-01-09 9:38 ` Maarten Lankhorst 2018-01-16 19:56 ` Pandiyan, Dhinakaran 0 siblings, 1 reply; 16+ messages in thread From: Maarten Lankhorst @ 2018-01-09 9:38 UTC (permalink / raw) To: Pandiyan, Dhinakaran, Vivi, Rodrigo Cc: daniel.vetter, intel-gfx, Zanoni, Paulo R Op 08-01-18 om 20:59 schreef Pandiyan, Dhinakaran: > On Thu, 2018-01-04 at 18:08 -0800, Rodrigo Vivi wrote: >> On Wed, Jan 03, 2018 at 08:40:00PM +0000, Dhinakaran Pandiyan wrote: >>> When DC states are enabled and PSR is active, the hardware enters DC5/DC6 >>> states resulting in the frame counter resetting. The frame counter reset >>> mess up the vblank counting logic as the diff between the new frame >>> counter value and the previous is negative, and this negative diff gets >>> applied as an unsigned value to the vblank count. We cannot reject negative >>> diffs altogether because they can arise from legitimate frame counter >>> overflows when there is a long period with vblank disabled. So, this >>> approach allows for the driver to notice a DC state toggle between a vblank >>> disable and enable and fill in the missed vblanks. >>> >>> But, in order to disable DC states when vblank interrupts are required, >>> the DC_OFF power well has to be disabled in an atomic context. So, >>> introduce a new VBLANK power domain that can be acquired and released in >>> atomic contexts with these changes - >>> 1) _vblank_get() and _vblank_put() methods skip the power_domain mutex >>> and use a spin lock for the DC power well. >>> 2) power_domains->domain_use_count is converted to an atomic_t array so >>> that it can be updated outside of the power domain mutex. >>> >>> v3: Squash domain_use_count atomic_t conversion (Maarten) >>> v2: Fix deadlock by switching irqsave spinlock. >>> Implement atomic version of get_if_enabled. 
>>> Modify power_domain_verify_state to check power well use count and >>> enabled status atomically. >>> Rewrite of intel_power_well_{get,put} >>> >>> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> >>> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> >>> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> >>> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> >>> Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com> >>> --- >>> drivers/gpu/drm/i915/i915_debugfs.c | 2 +- >>> drivers/gpu/drm/i915/i915_drv.h | 19 +++- >>> drivers/gpu/drm/i915/intel_display.h | 1 + >>> drivers/gpu/drm/i915/intel_drv.h | 3 + >>> drivers/gpu/drm/i915/intel_runtime_pm.c | 195 ++++++++++++++++++++++++++++---- >>> 5 files changed, 199 insertions(+), 21 deletions(-) >>> >>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c >>> index d81cb2513069..5a7ce734de02 100644 >>> --- a/drivers/gpu/drm/i915/i915_debugfs.c >>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c >>> @@ -2746,7 +2746,7 @@ static int i915_power_domain_info(struct seq_file *m, void *unused) >>> for_each_power_domain(power_domain, power_well->domains) >>> seq_printf(m, " %-23s %d\n", >>> intel_display_power_domain_str(power_domain), >>> - power_domains->domain_use_count[power_domain]); >>> + atomic_read(&power_domains->domain_use_count[power_domain])); >>> } >>> >>> mutex_unlock(&power_domains->lock); >>> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h >>> index caebd5825279..61a635f03af7 100644 >>> --- a/drivers/gpu/drm/i915/i915_drv.h >>> +++ b/drivers/gpu/drm/i915/i915_drv.h >>> @@ -1032,6 +1032,23 @@ struct i915_power_well { >>> bool has_fuses:1; >>> } hsw; >>> }; >>> + >>> + /* Lock to serialize access to count, hw_enabled and ops, used for >>> + * power wells that have supports_atomix_ctx set to True. 
>>> + */ >>> + spinlock_t lock; >>> + >>> + /* Indicates that the get/put methods for this power well can be called >>> + * in atomic contexts, requires .ops to not sleep. This is valid >>> + * only for the DC_OFF power well currently. >>> + */ >>> + bool supports_atomic_ctx; >>> + >>> + /* DC_OFF power well was disabled since the last time vblanks were >>> + * disabled. >>> + */ >>> + bool dc_off_disabled; >>> + >>> const struct i915_power_well_ops *ops; >>> }; >>> >>> @@ -1045,7 +1062,7 @@ struct i915_power_domains { >>> int power_well_count; >>> >>> struct mutex lock; >>> - int domain_use_count[POWER_DOMAIN_NUM]; >>> + atomic_t domain_use_count[POWER_DOMAIN_NUM]; >>> struct i915_power_well *power_wells; >>> }; >>> >>> diff --git a/drivers/gpu/drm/i915/intel_display.h b/drivers/gpu/drm/i915/intel_display.h >>> index a0d2b6169361..3e9671ff6f79 100644 >>> --- a/drivers/gpu/drm/i915/intel_display.h >>> +++ b/drivers/gpu/drm/i915/intel_display.h >>> @@ -172,6 +172,7 @@ enum intel_display_power_domain { >>> POWER_DOMAIN_AUX_C, >>> POWER_DOMAIN_AUX_D, >>> POWER_DOMAIN_GMBUS, >>> + POWER_DOMAIN_VBLANK, >> Maybe we could start a new category of atomic domains and on interations that makes sense go over both >> or making the domains in an array with is_atomic bool in case we can transform the atomic get,put to >> be really generic or into .enable .disable function pointers. > > The generic interfaces we've discussed until now to get/put vblank power > domain atomically are still not generic enough. We might as well accept > DC_OFF to be a special case and deal with it as such - there is no other > power well that we need to enable/disable atomically. We can always > rewrite the code in the future if a better idea comes up. 
> >>> POWER_DOMAIN_MODESET, >>> POWER_DOMAIN_GT_IRQ, >>> POWER_DOMAIN_INIT, >>> diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h >>> index 30f791f89d64..164e62cb047b 100644 >>> --- a/drivers/gpu/drm/i915/intel_drv.h >>> +++ b/drivers/gpu/drm/i915/intel_drv.h >>> @@ -1797,6 +1797,9 @@ bool intel_display_power_get_if_enabled(struct drm_i915_private *dev_priv, >>> enum intel_display_power_domain domain); >>> void intel_display_power_put(struct drm_i915_private *dev_priv, >>> enum intel_display_power_domain domain); >>> +void intel_display_power_vblank_get(struct drm_i915_private *dev_priv, >>> + bool *needs_restore); >> can we dump the needs_restore and always restore the counter? > I checked this, seems possible. > >> if so, we could use regular intel_power_domain_{get,put} functions and built the generic atomic inside it. >> >>> +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv); >>> >>> static inline void >>> assert_rpm_device_not_suspended(struct drm_i915_private *dev_priv) >>> diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c >>> index d758da6156a8..93b324dcc55d 100644 >>> --- a/drivers/gpu/drm/i915/intel_runtime_pm.c >>> +++ b/drivers/gpu/drm/i915/intel_runtime_pm.c >>> @@ -56,6 +56,19 @@ static struct i915_power_well * >>> lookup_power_well(struct drm_i915_private *dev_priv, >>> enum i915_power_well_id power_well_id); >>> >>> +/* Optimize for the case when this is called from atomic contexts, >>> + * although the case is unlikely. 
>>> + */ >>> +#define power_well_lock(power_well, flags) do { \ >>> + if (likely(power_well->supports_atomic_ctx)) \ >>> + spin_lock_irqsave(&power_well->lock, flags); \ >>> + } while (0) >>> + >>> +#define power_well_unlock(power_well, flags) do { \ >>> + if (likely(power_well->supports_atomic_ctx)) \ >>> + spin_unlock_irqrestore(&power_well->lock, flags); \ >>> + } while (0) >>> + >>> const char * >>> intel_display_power_domain_str(enum intel_display_power_domain domain) >>> { >>> @@ -126,6 +139,8 @@ intel_display_power_domain_str(enum intel_display_power_domain domain) >>> return "AUX_D"; >>> case POWER_DOMAIN_GMBUS: >>> return "GMBUS"; >>> + case POWER_DOMAIN_VBLANK: >>> + return "VBLANK"; >>> case POWER_DOMAIN_INIT: >>> return "INIT"; >>> case POWER_DOMAIN_MODESET: >>> @@ -141,6 +156,9 @@ intel_display_power_domain_str(enum intel_display_power_domain domain) >>> static void intel_power_well_enable(struct drm_i915_private *dev_priv, >>> struct i915_power_well *power_well) >>> { >>> + if (power_well->supports_atomic_ctx) >>> + assert_spin_locked(&power_well->lock); >>> + >>> DRM_DEBUG_KMS("enabling %s\n", power_well->name); >>> power_well->ops->enable(dev_priv, power_well); >>> power_well->hw_enabled = true; >>> @@ -149,26 +167,43 @@ static void intel_power_well_enable(struct drm_i915_private *dev_priv, >>> static void intel_power_well_disable(struct drm_i915_private *dev_priv, >>> struct i915_power_well *power_well) >>> { >>> + if (power_well->supports_atomic_ctx) >>> + assert_spin_locked(&power_well->lock); >>> + >>> DRM_DEBUG_KMS("disabling %s\n", power_well->name); >>> power_well->hw_enabled = false; >>> power_well->ops->disable(dev_priv, power_well); >>> } >>> >>> -static void intel_power_well_get(struct drm_i915_private *dev_priv, >>> +static void __intel_power_well_get(struct drm_i915_private *dev_priv, >>> struct i915_power_well *power_well) >>> { >>> if (!power_well->count++) >>> intel_power_well_enable(dev_priv, power_well); >>> } >>> >>> +static void 
intel_power_well_get(struct drm_i915_private *dev_priv, >>> + struct i915_power_well *power_well) >>> +{ >>> + unsigned long flags = 0; >>> + >>> + power_well_lock(power_well, flags); >>> + __intel_power_well_get(dev_priv, power_well); >>> + power_well_unlock(power_well, flags); >>> +} >>> + >>> static void intel_power_well_put(struct drm_i915_private *dev_priv, >>> struct i915_power_well *power_well) >>> { >>> + unsigned long flags = 0; >>> + >>> + power_well_lock(power_well, flags); >>> WARN(!power_well->count, "Use count on power well %s is already zero", >>> power_well->name); >>> >>> if (!--power_well->count) >>> intel_power_well_disable(dev_priv, power_well); >>> + power_well_unlock(power_well, flags); >>> } >>> >>> /** >>> @@ -736,6 +771,7 @@ static void gen9_dc_off_power_well_disable(struct drm_i915_private *dev_priv, >>> skl_enable_dc6(dev_priv); >>> else if (dev_priv->csr.allowed_dc_mask & DC_STATE_EN_UPTO_DC5) >>> gen9_enable_dc5(dev_priv); >>> + power_well->dc_off_disabled = true; >>> } >>> >>> static void i9xx_power_well_sync_hw_noop(struct drm_i915_private *dev_priv, >>> @@ -1453,6 +1489,58 @@ static void chv_pipe_power_well_disable(struct drm_i915_private *dev_priv, >>> chv_set_pipe_power_well(dev_priv, power_well, false); >>> } >>> >>> +void intel_display_power_vblank_get(struct drm_i915_private *dev_priv, >>> + bool *needs_restore) >>> +{ >>> + struct i915_power_domains *power_domains = &dev_priv->power_domains; >>> + struct i915_power_well *power_well; >>> + enum intel_display_power_domain domain = POWER_DOMAIN_VBLANK; >>> + >>> + *needs_restore = false; >>> + >>> + if (!HAS_CSR(dev_priv)) >>> + return; >>> + >>> + if (!(HAS_PSR(dev_priv) && dev_priv->psr.sink_support)) >>> + return; >>> + >>> + intel_runtime_pm_get_noresume(dev_priv); >>> + >>> + for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain)) { >>> + unsigned long flags = 0; >>> + >>> + power_well_lock(power_well, flags); >>> + __intel_power_well_get(dev_priv, power_well); >>> 
+ *needs_restore = power_well->dc_off_disabled; >>> + power_well->dc_off_disabled = false; >> it seem also that by always restoring we don't need to add this specific dc_off_disabled >> variable inside the generic pw struct. >> >>> + power_well_unlock(power_well, flags); >>> + } >>> + >>> + atomic_inc(&power_domains->domain_use_count[domain]); >>> +} >>> + >>> +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv) >>> +{ >>> + struct i915_power_domains *power_domains = &dev_priv->power_domains; >>> + struct i915_power_well *power_well; >>> + enum intel_display_power_domain domain = POWER_DOMAIN_VBLANK; >>> + >>> + if (!HAS_CSR(dev_priv)) >>> + return; >>> + >>> + if (!(HAS_PSR(dev_priv) && dev_priv->psr.sink_support)) >>> + return; >> Can we remove these checks and do this for any vblank domain? >> I believe this kind of association should be only on the domain<->well association >> and not check for feature here. > Do you recommend moving it up to the caller? I want to avoid acquiring > locks, updating ref counts etc, when it is not needed. > > Related to your question, if my understanding is correct, the hardware > does not really enter DC5/6 without PSR when a pipe is enabled. So > instead of allowing DC5/6, we might as well disable it when there is no > PSR. > > >>> + >>> + WARN(atomic_dec_return(&power_domains->domain_use_count[domain]) < 0, >>> + "Use count on domain %s was already zero\n", >>> + intel_display_power_domain_str(domain)); >> is the atomic safe enough here? or we need a spinlock? > I believe it is safe, domain_use_count[x] does not affect any subsequent > operation. We can get away with modifying it outside the power well spin > lock. 
>>> + >>> + for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain)) >>> + intel_power_well_put(dev_priv, power_well); >>> + >>> + intel_runtime_pm_put(dev_priv); >>> +} >>> + >>> static void >>> __intel_display_power_get_domain(struct drm_i915_private *dev_priv, >>> enum intel_display_power_domain domain) >>> @@ -1463,7 +1551,7 @@ __intel_display_power_get_domain(struct drm_i915_private *dev_priv, >>> for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain)) >>> intel_power_well_get(dev_priv, power_well); >>> >>> - power_domains->domain_use_count[domain]++; >>> + atomic_inc(&power_domains->domain_use_count[domain]); >>> } >>> >>> /** >>> @@ -1492,6 +1580,38 @@ void intel_display_power_get(struct drm_i915_private *dev_priv, >>> mutex_unlock(&power_domains->lock); >>> } >>> >>> +static bool dc_off_get_if_enabled(struct drm_i915_private *dev_priv, >>> + enum intel_display_power_domain domain) >>> +{ >>> + struct i915_power_well *power_well; >>> + bool is_enabled; >>> + unsigned long flags = 0; >>> + >>> + power_well = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF); >>> + if (!power_well || !(power_well->domains & domain)) >>> + return true; >>> + >>> + power_well_lock(power_well, flags); >>> + is_enabled = power_well->hw_enabled; >>> + if (is_enabled) >>> + __intel_power_well_get(dev_priv, power_well); >>> + power_well_unlock(power_well, flags); >>> + >>> + return is_enabled; >>> +} >>> + >>> +static void dc_off_put(struct drm_i915_private *dev_priv, >>> + enum intel_display_power_domain domain) >>> +{ >>> + struct i915_power_well *power_well; >>> + >>> + power_well = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF); >>> + if (!power_well || !(power_well->domains & domain)) >>> + return; >>> + >>> + intel_power_well_put(dev_priv, power_well); >>> +} >>> + >>> /** >>> * intel_display_power_get_if_enabled - grab a reference for an enabled display power domain >>> * @dev_priv: i915 device instance >>> @@ -1508,20 +1628,24 @@ bool 
intel_display_power_get_if_enabled(struct drm_i915_private *dev_priv, >>> enum intel_display_power_domain domain) >>> { >>> struct i915_power_domains *power_domains = &dev_priv->power_domains; >>> - bool is_enabled; >>> + bool is_enabled = false; >>> >>> if (!intel_runtime_pm_get_if_in_use(dev_priv)) >>> return false; >>> >>> mutex_lock(&power_domains->lock); >>> >>> + if (!dc_off_get_if_enabled(dev_priv, domain)) >>> + goto out; >>> + >>> if (__intel_display_power_is_enabled(dev_priv, domain)) { >>> __intel_display_power_get_domain(dev_priv, domain); >>> is_enabled = true; >>> - } else { >>> - is_enabled = false; >>> } >>> >>> + dc_off_put(dev_priv, domain); >>> + >>> +out: >>> mutex_unlock(&power_domains->lock); >>> >>> if (!is_enabled) >>> @@ -1549,10 +1673,9 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, >>> >>> mutex_lock(&power_domains->lock); >>> >>> - WARN(!power_domains->domain_use_count[domain], >>> - "Use count on domain %s is already zero\n", >>> + WARN(atomic_dec_return(&power_domains->domain_use_count[domain]) < 0, >>> + "Use count on domain %s was already zero\n", >>> intel_display_power_domain_str(domain)); >>> - power_domains->domain_use_count[domain]--; >>> >>> for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain)) >>> intel_power_well_put(dev_priv, power_well); >>> @@ -1720,6 +1843,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, >>> BIT_ULL(POWER_DOMAIN_GT_IRQ) | \ >>> BIT_ULL(POWER_DOMAIN_MODESET) | \ >>> BIT_ULL(POWER_DOMAIN_AUX_A) | \ >>> + BIT_ULL(POWER_DOMAIN_VBLANK) | \ >>> BIT_ULL(POWER_DOMAIN_INIT)) >>> >>> #define BXT_DISPLAY_POWERWELL_2_POWER_DOMAINS ( \ >>> @@ -1743,6 +1867,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, >>> BIT_ULL(POWER_DOMAIN_MODESET) | \ >>> BIT_ULL(POWER_DOMAIN_AUX_A) | \ >>> BIT_ULL(POWER_DOMAIN_GMBUS) | \ >>> + BIT_ULL(POWER_DOMAIN_VBLANK) | \ >>> BIT_ULL(POWER_DOMAIN_INIT)) >>> #define BXT_DPIO_CMN_A_POWER_DOMAINS ( \ >>> 
BIT_ULL(POWER_DOMAIN_PORT_DDI_A_LANES) | \ >>> @@ -1803,6 +1928,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, >>> BIT_ULL(POWER_DOMAIN_MODESET) | \ >>> BIT_ULL(POWER_DOMAIN_AUX_A) | \ >>> BIT_ULL(POWER_DOMAIN_GMBUS) | \ >>> + BIT_ULL(POWER_DOMAIN_VBLANK) | \ >>> BIT_ULL(POWER_DOMAIN_INIT)) >>> >>> #define CNL_DISPLAY_POWERWELL_2_POWER_DOMAINS ( \ >>> @@ -1850,6 +1976,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv, >>> CNL_DISPLAY_POWERWELL_2_POWER_DOMAINS | \ >>> BIT_ULL(POWER_DOMAIN_MODESET) | \ >>> BIT_ULL(POWER_DOMAIN_AUX_A) | \ >>> + BIT_ULL(POWER_DOMAIN_VBLANK) | \ >>> BIT_ULL(POWER_DOMAIN_INIT)) >>> >>> static const struct i915_power_well_ops i9xx_always_on_power_well_ops = { >>> @@ -2083,9 +2210,12 @@ bool intel_display_power_well_is_enabled(struct drm_i915_private *dev_priv, >>> { >>> struct i915_power_well *power_well; >>> bool ret; >>> + unsigned long flags = 0; >>> >>> power_well = lookup_power_well(dev_priv, power_well_id); >>> + power_well_lock(power_well, flags); >>> ret = power_well->ops->is_enabled(dev_priv, power_well); >>> + power_well_unlock(power_well, flags); >>> >>> return ret; >>> } >>> @@ -2120,6 +2250,8 @@ static struct i915_power_well skl_power_wells[] = { >>> .domains = SKL_DISPLAY_DC_OFF_POWER_DOMAINS, >>> .ops = &gen9_dc_off_power_well_ops, >>> .id = SKL_DISP_PW_DC_OFF, >>> + .supports_atomic_ctx = true, >>> + .dc_off_disabled = false, >>> }, >>> { >>> .name = "power well 2", >>> @@ -2180,6 +2312,8 @@ static struct i915_power_well bxt_power_wells[] = { >>> .domains = BXT_DISPLAY_DC_OFF_POWER_DOMAINS, >>> .ops = &gen9_dc_off_power_well_ops, >>> .id = SKL_DISP_PW_DC_OFF, >>> + .supports_atomic_ctx = true, >>> + .dc_off_disabled = false, >>> }, >>> { >>> .name = "power well 2", >>> @@ -2235,6 +2369,8 @@ static struct i915_power_well glk_power_wells[] = { >>> .domains = GLK_DISPLAY_DC_OFF_POWER_DOMAINS, >>> .ops = &gen9_dc_off_power_well_ops, >>> .id = SKL_DISP_PW_DC_OFF, >>> + 
.supports_atomic_ctx = true, >>> + .dc_off_disabled = false, >>> }, >>> { >>> .name = "power well 2", >>> @@ -2359,6 +2495,8 @@ static struct i915_power_well cnl_power_wells[] = { >>> .domains = CNL_DISPLAY_DC_OFF_POWER_DOMAINS, >>> .ops = &gen9_dc_off_power_well_ops, >>> .id = SKL_DISP_PW_DC_OFF, >>> + .supports_atomic_ctx = true, >>> + .dc_off_disabled = false, >>> }, >>> { >>> .name = "power well 2", >>> @@ -2487,6 +2625,7 @@ static void assert_power_well_ids_unique(struct drm_i915_private *dev_priv) >>> int intel_power_domains_init(struct drm_i915_private *dev_priv) >>> { >>> struct i915_power_domains *power_domains = &dev_priv->power_domains; >>> + struct i915_power_well *power_well; >>> >>> i915_modparams.disable_power_well = >>> sanitize_disable_power_well_option(dev_priv, >>> @@ -2524,6 +2663,10 @@ int intel_power_domains_init(struct drm_i915_private *dev_priv) >>> set_power_wells(power_domains, i9xx_always_on_power_well); >>> } >>> >>> + for_each_power_well(dev_priv, power_well) >>> + if (power_well->supports_atomic_ctx) >>> + spin_lock_init(&power_well->lock); >>> + >>> assert_power_well_ids_unique(dev_priv); >>> >>> return 0; >>> @@ -2571,9 +2714,13 @@ static void intel_power_domains_sync_hw(struct drm_i915_private *dev_priv) >>> >>> mutex_lock(&power_domains->lock); >>> for_each_power_well(dev_priv, power_well) { >>> + unsigned long flags = 0; >>> + >>> + power_well_lock(power_well, flags); >>> power_well->ops->sync_hw(dev_priv, power_well); >>> power_well->hw_enabled = power_well->ops->is_enabled(dev_priv, >>> power_well); >>> + power_well_unlock(power_well, flags); >>> } >>> mutex_unlock(&power_domains->lock); >>> } >>> @@ -3046,21 +3193,23 @@ void intel_power_domains_suspend(struct drm_i915_private *dev_priv) >>> bxt_display_core_uninit(dev_priv); >>> } >>> >>> -static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv) >>> +static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv, >>> + const int *power_well_use) 
>>> { >>> struct i915_power_domains *power_domains = &dev_priv->power_domains; >>> struct i915_power_well *power_well; >>> + int i = 0; >>> >>> for_each_power_well(dev_priv, power_well) { >>> enum intel_display_power_domain domain; >>> >>> DRM_DEBUG_DRIVER("%-25s %d\n", >>> - power_well->name, power_well->count); >>> + power_well->name, power_well_use[i++]); >>> >>> for_each_power_domain(domain, power_well->domains) >>> DRM_DEBUG_DRIVER(" %-23s %d\n", >>> intel_display_power_domain_str(domain), >>> - power_domains->domain_use_count[domain]); >>> + atomic_read(&power_domains->domain_use_count[domain])); >>> } >>> } >>> >>> @@ -3079,6 +3228,7 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv) >>> struct i915_power_domains *power_domains = &dev_priv->power_domains; >>> struct i915_power_well *power_well; >>> bool dump_domain_info; >>> + int power_well_use[dev_priv->power_domains.power_well_count]; >>> >>> mutex_lock(&power_domains->lock); >>> >>> @@ -3087,6 +3237,14 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv) >>> enum intel_display_power_domain domain; >>> int domains_count; >>> bool enabled; >>> + int well_count, i = 0; >>> + unsigned long flags = 0; >>> + >>> + power_well_lock(power_well, flags); >>> + well_count = power_well->count; >>> + enabled = power_well->ops->is_enabled(dev_priv, power_well); >>> + power_well_unlock(power_well, flags); >>> + power_well_use[i++] = well_count; >>> >>> /* >>> * Power wells not belonging to any domain (like the MISC_IO >>> @@ -3096,20 +3254,19 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv) >>> if (!power_well->domains) >>> continue; >>> >>> - enabled = power_well->ops->is_enabled(dev_priv, power_well); >>> - if ((power_well->count || power_well->always_on) != enabled) >>> + >>> + if ((well_count || power_well->always_on) != enabled) >>> DRM_ERROR("power well %s state mismatch (refcount %d/enabled %d)", >>> - power_well->name, power_well->count, 
enabled); >>> + power_well->name, well_count, enabled); >>> >>> domains_count = 0; >>> for_each_power_domain(domain, power_well->domains) >>> - domains_count += power_domains->domain_use_count[domain]; >>> + domains_count += atomic_read(&power_domains->domain_use_count[domain]); >>> >>> - if (power_well->count != domains_count) { >>> + if (well_count != domains_count) { >>> DRM_ERROR("power well %s refcount/domain refcount mismatch " >>> "(refcount %d/domains refcount %d)\n", >>> - power_well->name, power_well->count, >>> - domains_count); >>> + power_well->name, well_count, domains_count); >>> dump_domain_info = true; >>> } >>> } >>> @@ -3118,7 +3275,7 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv) >>> static bool dumped; >>> >>> if (!dumped) { >>> - intel_power_domains_dump_info(dev_priv); >>> + intel_power_domains_dump_info(dev_priv, power_well_use); >>> dumped = true; >>> } >>> } >>> -- >>> 2.11.0 >>> >> I will probably have more comments later, but just doing a brain dump now >> since I end up forgetting to write yesterday... >> >> The approach here in general is good and much better than that pre,post hooks. >> But I just believe we can do this here in a more generic approach than deviating >> the initial power well and domains. > I would have liked a generic approach (for display_power_{get,put}), but > I think this case is special enough that making it stand out is better. Agreed, I can think of no other way myself without making the generic case too complicated. The whole runtime power management became way too complex for a special case. ~Maarten _______________________________________________ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/intel-gfx ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts 2018-01-09 9:38 ` Maarten Lankhorst @ 2018-01-16 19:56 ` Pandiyan, Dhinakaran 0 siblings, 0 replies; 16+ messages in thread From: Pandiyan, Dhinakaran @ 2018-01-16 19:56 UTC (permalink / raw) To: maarten.lankhorst Cc: daniel.vetter, intel-gfx, Zanoni, Paulo R, Vivi, Rodrigo On Tue, 2018-01-09 at 10:38 +0100, Maarten Lankhorst wrote: > Op 08-01-18 om 20:59 schreef Pandiyan, Dhinakaran: > > On Thu, 2018-01-04 at 18:08 -0800, Rodrigo Vivi wrote: <snip> > >> I will probably have more comments later, but just doing a brain dump now > >> since I end up forgetting to write yesterday... > >> > >> The approach here in general is good and much better than that pre,post hooks. > >> But I just believe we can do this here in a more generic approach than deviating > >> the initial power well and domains. > > I would have liked a generic approach (for display_power_{get,put}), but > > I think this case is special enough that making it stand out is better. > Agreed, I can think of no other way myself without making the generic case too complicated. The whole runtime power management became way too complex for a special case. > I found out that DMC keeps the hardware out of DC5/6 when vblank interrupts are enabled. This simplified the solution a lot (https://patchwork.freedesktop.org/series/36435/) Thanks for your review on this series, would appreciate any feedback on the new one too :) -DK > ~Maarten > _______________________________________________ > Intel-gfx mailing list > Intel-gfx@lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/intel-gfx _______________________________________________ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/intel-gfx ^ permalink raw reply [flat|nested] 16+ messages in thread
end of thread, other threads:[~2018-01-16 19:56 UTC | newest] Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2017-12-06 22:47 [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan 2017-12-06 22:47 ` [PATCH 2/5] drm/vblank: Restoring vblank counts after device runtime PM events Dhinakaran Pandiyan 2017-12-06 22:47 ` [PATCH 3/5] drm/i915: Use an atomic_t array to track power domain use count Dhinakaran Pandiyan 2017-12-06 22:47 ` [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts Dhinakaran Pandiyan 2017-12-06 23:54 ` Rodrigo Vivi 2017-12-07 19:01 ` Pandiyan, Dhinakaran 2017-12-06 22:47 ` [PATCH 5/5] drm/i915: Use the vblank power domain disallow or disable DC states Dhinakaran Pandiyan 2017-12-06 23:16 ` ✗ Fi.CI.BAT: failure for series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Patchwork 2017-12-07 18:46 ` Pandiyan, Dhinakaran 2017-12-07 18:46 ` Pandiyan, Dhinakaran 2017-12-07 18:47 ` Pandiyan, Dhinakaran 2018-01-03 20:39 [PATCH 1/5] " Dhinakaran Pandiyan 2018-01-03 20:40 ` [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts Dhinakaran Pandiyan 2018-01-05 2:08 ` Rodrigo Vivi 2018-01-08 19:59 ` Pandiyan, Dhinakaran 2018-01-09 9:38 ` Maarten Lankhorst 2018-01-16 19:56 ` Pandiyan, Dhinakaran