intel-gfx.lists.freedesktop.org archive mirror
From: Mika Kuoppala <mika.kuoppala@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Subject: Re: [Intel-gfx] [PATCH] drm/i915/gt: Defend against concurrent updates to execlists->active
Date: Mon, 09 Mar 2020 18:38:49 +0200	[thread overview]
Message-ID: <878sk932li.fsf@gaia.fi.intel.com> (raw)
In-Reply-To: <158377077310.4769.2840055823228121182@build.alporthouse.com>

Chris Wilson <chris@chris-wilson.co.uk> writes:

> Quoting Mika Kuoppala (2020-03-09 15:34:40)
>> Chris Wilson <chris@chris-wilson.co.uk> writes:
>> 
>> > [  206.875637] BUG: KCSAN: data-race in __i915_schedule+0x7fc/0x930 [i915]
>> > [  206.875654]
>> > [  206.875666] race at unknown origin, with read to 0xffff8881f7644480 of 8 bytes by task 703 on cpu 3:
>> > [  206.875901]  __i915_schedule+0x7fc/0x930 [i915]
>> > [  206.876130]  __bump_priority+0x63/0x80 [i915]
>> > [  206.876361]  __i915_sched_node_add_dependency+0x258/0x300 [i915]
>> > [  206.876593]  i915_sched_node_add_dependency+0x50/0xa0 [i915]
>> > [  206.876824]  i915_request_await_dma_fence+0x1da/0x530 [i915]
>> > [  206.877057]  i915_request_await_object+0x2fe/0x470 [i915]
>> > [  206.877287]  i915_gem_do_execbuffer+0x45dc/0x4c20 [i915]
>> > [  206.877517]  i915_gem_execbuffer2_ioctl+0x2c3/0x580 [i915]
>> > [  206.877535]  drm_ioctl_kernel+0xe4/0x120
>> > [  206.877549]  drm_ioctl+0x297/0x4c7
>> > [  206.877563]  ksys_ioctl+0x89/0xb0
>> > [  206.877577]  __x64_sys_ioctl+0x42/0x60
>> > [  206.877591]  do_syscall_64+0x6e/0x2c0
>> > [  206.877606]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
>> >
>> > References: https://gitlab.freedesktop.org/drm/intel/issues/1318
>> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
>> > ---
>> >  drivers/gpu/drm/i915/gt/intel_engine.h | 12 +++++++++++-
>> >  1 file changed, 11 insertions(+), 1 deletion(-)
>> >
>> > diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h
>> > index 29c8c03c5caa..f267f51c457c 100644
>> > --- a/drivers/gpu/drm/i915/gt/intel_engine.h
>> > +++ b/drivers/gpu/drm/i915/gt/intel_engine.h
>> > @@ -107,7 +107,17 @@ execlists_num_ports(const struct intel_engine_execlists * const execlists)
>> >  static inline struct i915_request *
>> >  execlists_active(const struct intel_engine_execlists *execlists)
>> >  {
>> > -     return *READ_ONCE(execlists->active);
>> > +     struct i915_request * const *cur = READ_ONCE(execlists->active);
>> > +     struct i915_request * const *old;
>> > +     struct i915_request *active;
>> > +
>> > +     do {
>> > +             old = cur;
>> > +             active = READ_ONCE(*cur);
>> > +             cur = READ_ONCE(execlists->active);
>> > +     } while (cur != old);
>> > +
>> > +     return active;
>> 
>> The update side is scary. We are updating execlists->active
>> in two phases, with the array copy in between.
>> 
>> As WRITE_ONCE() only guarantees ordering within one context,
>> because it constrains the compiler only, it makes me very
>> suspicious about how the memcpy of pending->inflight might
>> unravel between two CPUs.
>> 
>> smp_store_mb(execlists->active, execlists->pending);
>> memcpy(inflight, pending);
>> smp_wmb();
>> smp_store_mb(execlists->active, execlists->inflight);
>> smp_store_mb(execlists->pending[0], NULL);
>
> Not quite. The barriers there are overkill.
>
> If you want to be pedantic,
>
> WRITE_ONCE(active, pending);
> smp_wmb();
>
> memcpy(inflight, pending);
> smp_wmb();
> WRITE_ONCE(active, inflight);

This is the crux of it, needing the smp_rmb() counterpart.
-Mika
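
The two-phase publish Chris spells out above can be sketched in
userspace C11 (a sketch only, not the i915 source: the struct layout is
illustrative, plain assignments stand in for WRITE_ONCE(), and
atomic_thread_fence() stands in for smp_wmb()):

```c
#include <stdatomic.h>
#include <string.h>

struct request { int id; };

struct execlists {
	struct request *inflight[2];
	struct request *pending[2];
	struct request **active;	/* points into inflight[] or pending[] */
};

/*
 * Two-phase publish: a reader following 'active' either sees the
 * still-immutable pending[] (sampled early) or the fully-copied
 * inflight[] (sampled late); the fences order each pointer flip
 * against the copy.
 */
static void execlists_publish(struct execlists *el)
{
	el->active = el->pending;			/* WRITE_ONCE(active, pending) */
	atomic_thread_fence(memory_order_release);	/* smp_wmb() */

	memcpy(el->inflight, el->pending, sizeof(el->inflight));
	atomic_thread_fence(memory_order_release);	/* smp_wmb() */

	el->active = el->inflight;			/* WRITE_ONCE(active, inflight) */
}

static int demo(void)
{
	struct request rq = { .id = 7 };
	struct execlists el = { 0 };

	el.pending[0] = &rq;
	el.active = el.pending;
	execlists_publish(&el);

	/* 'active' now names inflight[], which holds the copied request */
	return el.active == el.inflight && el.active[0] == &rq;
}
```

As Chris notes below, clearing pending[0] is deliberately left out of
this sequence; it is a separate step on the submission side.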

>
> The update of pending is not part of this sequence.
>
> But do we need that? I still think we do not.
>
>> This in paired with:
>> 
>> active = READ_ONCE(*cur);
>> smp_rmb();
>> cur = READ_ONCE(execlists->active);
>> 
>> With this, it should not matter at which point execlists->active
>> is sampled, as pending would be guaranteed to be immutable if it
>> is sampled early, and inflight immutable if it is sampled late?
>
> Simply because we don't care about the sampling, just that the read
> dependency gives us a valid pointer. (We are not looking at a snapshot
> of several reads, but a _single_ read and the data dependency from
> that.)
> -Chris
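
The read side being discussed -- one dependent load, retried if the
pointer moved underneath it -- can be sketched the same way (again a
userspace C11 illustration, not the i915 source; the acquire fence
stands in for the smp_rmb() Mika proposes):

```c
#include <stdatomic.h>
#include <stddef.h>

struct request { int id; };

struct execlists {
	struct request *inflight[2];
	struct request **active;	/* published pointer into a port array */
};

/*
 * Seqlock-flavoured read: sample the 'active' pointer, dereference it,
 * then re-sample. If the pointer moved, the dereference may have raced
 * with the writer's copy, so retry; a stable pointer means the array it
 * names was immutable for the duration of the read.
 */
static struct request *execlists_active(const struct execlists *el)
{
	struct request **cur, **old;
	struct request *active;

	cur = el->active;			/* READ_ONCE(execlists->active) */
	do {
		old = cur;
		active = *cur;			/* READ_ONCE(*cur) */
		atomic_thread_fence(memory_order_acquire);	/* smp_rmb() */
		cur = el->active;		/* re-sample and compare */
	} while (cur != old);

	return active;
}

static int demo(void)
{
	struct request rq = { .id = 42 };
	struct execlists el = { .inflight = { &rq, NULL } };

	el.active = el.inflight;
	return execlists_active(&el)->id;
}
```

Chris's point is that the fence is not needed for correctness here: the
value returned comes from a single read and its data dependency, not
from a consistent snapshot of several reads.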
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Thread overview: 10+ messages
2020-03-09 11:24 [Intel-gfx] [PATCH] drm/i915/gt: Defend against concurrent updates to execlists->active Chris Wilson
2020-03-09 13:14 ` [Intel-gfx] ✓ Fi.CI.BAT: success for " Patchwork
2020-03-09 15:34 ` [Intel-gfx] [PATCH] " Mika Kuoppala
2020-03-09 16:19   ` Chris Wilson
2020-03-09 16:38     ` Mika Kuoppala [this message]
2020-03-09 17:01       ` Chris Wilson
2020-03-09 17:05 ` [Intel-gfx] [PATCH v2] " Chris Wilson
2020-03-09 17:41   ` Mika Kuoppala
2020-03-09 19:34 ` [Intel-gfx] ✓ Fi.CI.IGT: success for " Patchwork
2020-03-10 13:57 ` [Intel-gfx] ✗ Fi.CI.BUILD: failure for drm/i915/gt: Defend against concurrent updates to execlists->active (rev2) Patchwork
