From: "Ville Syrjälä" <ville.syrjala@linux.intel.com>
To: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: intel-gfx@lists.freedesktop.org
Subject: Re: [Intel-gfx] [PATCH] drm/i915: Kick rcu harder to free objects
Date: Thu, 8 Sep 2022 16:30:07 +0300	[thread overview]
Message-ID: <YxnuX805XuuSGPUY@intel.com>
In-Reply-To: <c21535d3-8f71-b385-4ef6-1b10a783c347@linux.intel.com>

On Thu, Sep 08, 2022 at 01:23:50PM +0100, Tvrtko Ursulin wrote:
> 
> On 06/09/2022 18:46, Ville Syrjala wrote:
> > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > 
> > On gen3 the selftests are pretty much always tripping this:
> > <4> [383.822424] pci 0000:00:02.0: drm_WARN_ON(dev_priv->mm.shrink_count)
> > <4> [383.822546] WARNING: CPU: 2 PID: 3560 at drivers/gpu/drm/i915/i915_gem.c:1223 i915_gem_cleanup_early+0x96/0xb0 [i915]
> > 
> > Looks to be due to the status page object lingering on the
> > purge_list. Call synchronize_rcu() ahead of it to better ensure
> > that all objects have been freed.
> > 
> > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > ---
> >   drivers/gpu/drm/i915/i915_gem.c | 1 +
> >   1 file changed, 1 insertion(+)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index 0f49ec9d494a..5b61f7ad6473 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -1098,6 +1098,7 @@ void i915_gem_drain_freed_objects(struct drm_i915_private *i915)
> >   		flush_delayed_work(&i915->bdev.wq);
> >   		rcu_barrier();
> >   	}
> > +	synchronize_rcu();
> 
> Looks a bit suspicious that the loop would not free everything, but one
> last rcu grace period would. Does it definitely fix the issue in your testing?

Definite is a bit hard to say with fuzzy stuff like this. But yes, so far
I haven't seen the warning trigger anymore. CI results show the same.

> 
> Perhaps it is the fact that there is a cond_resched in __i915_gem_free_objects,
> but then again the free count should reflect the state and keep it looping in
> here...
> 
> Regards,
> 
> Tvrtko

-- 
Ville Syrjälä
Intel
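
For context, below is a minimal sketch of how i915_gem_drain_freed_objects()
would read with the patch applied. Only flush_delayed_work(&i915->bdev.wq),
rcu_barrier() and the added synchronize_rcu() come from the quoted hunk; the
loop condition on i915->mm.free_count and the flush_work(&i915->mm.free_work)
call are assumptions inferred from Tvrtko's remark about the free count, not
copied verbatim from the tree.

void i915_gem_drain_freed_objects(struct drm_i915_private *i915)
{
	/*
	 * Keep draining as long as deferred frees are outstanding.
	 * The loop condition and the free_work flush are assumed, not verbatim.
	 */
	while (atomic_read(&i915->mm.free_count)) {
		flush_work(&i915->mm.free_work);	/* assumed worker name */
		flush_delayed_work(&i915->bdev.wq);	/* from the quoted hunk */
		rcu_barrier();				/* wait for pending RCU callbacks */
	}
	/*
	 * Added by this patch: one extra RCU grace period to make it more
	 * likely that objects still lingering on the purge_list (the gen3
	 * status page object from the commit message) are gone before
	 * i915_gem_cleanup_early() checks mm.shrink_count.
	 */
	synchronize_rcu();
}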


Thread overview: 15+ messages
2022-09-06 17:46 [Intel-gfx] [PATCH] drm/i915: Kick rcu harder to free objects Ville Syrjala
2022-09-06 18:40 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
2022-09-06 18:54 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2022-09-06 22:10 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
2022-09-08 12:23 ` [Intel-gfx] [PATCH] " Tvrtko Ursulin
2022-09-08 13:30   ` Ville Syrjälä [this message]
2022-09-08 14:32 ` Das, Nirmoy
2022-09-08 14:55   ` Tvrtko Ursulin
2022-09-08 19:22     ` Das, Nirmoy
2022-09-08 15:11   ` Ville Syrjälä
2022-09-08 19:34     ` Das, Nirmoy
2022-09-09  7:29       ` Ville Syrjälä
2022-09-21  7:56         ` Ville Syrjälä
2022-09-22  8:39           ` Das, Nirmoy
2022-09-08 15:15 ` [Intel-gfx] ✗ Fi.CI.BUILD: failure for drm/i915: Kick rcu harder to free objects (rev2) Patchwork
