From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	Michael Cheng <michael.cheng@intel.com>,
	intel-gfx@lists.freedesktop.org
Cc: wayne.boyer@intel.com, daniel.vetter@ffwll.ch,
	casey.g.bowman@intel.com, lucas.demarchi@intel.com,
	dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk,
	Matthew Auld <matthew.auld@intel.com>
Subject: Re: [PATCH 0/4] Drop wbinvd_on_all_cpus usage
Date: Tue, 22 Mar 2022 11:26:58 +0100	[thread overview]
Message-ID: <5931be1a37dbb9ccdce127f6173d42fa4dbee593.camel@linux.intel.com> (raw)
In-Reply-To: <31310790-4bc5-b9a7-8d35-c0f542b4d658@linux.intel.com>

On Tue, 2022-03-22 at 10:13 +0000, Tvrtko Ursulin wrote:
> 
> On 21/03/2022 15:15, Thomas Hellström wrote:
> > On Mon, 2022-03-21 at 14:43 +0000, Tvrtko Ursulin wrote:
> > > 
> > > On 21/03/2022 13:40, Thomas Hellström wrote:
> > > > Hi,
> > > > 
> > > > On Mon, 2022-03-21 at 13:12 +0000, Tvrtko Ursulin wrote:
> > > > > 
> > > > > On 21/03/2022 12:33, Thomas Hellström wrote:
> > > > > > On Mon, 2022-03-21 at 12:22 +0000, Tvrtko Ursulin wrote:
> > > > > > > 
> > > > > > > On 21/03/2022 11:03, Thomas Hellström wrote:
> > > > > > > > Hi, Tvrtko.
> > > > > > > > 
> > > > > > > > On 3/21/22 11:27, Tvrtko Ursulin wrote:
> > > > > > > > > 
> > > > > > > > > On 19/03/2022 19:42, Michael Cheng wrote:
> > > > > > > > > > To align with the discussion in [1][2], this patch
> > > > > > > > > > series drops all usage of wbinvd_on_all_cpus within
> > > > > > > > > > i915 by either replacing the call with certain drm
> > > > > > > > > > clflush helpers, or reverting to a previous logic.
> > > > > > > > > 
> > > > > > > > > AFAIU, the complaint from [1] was that it is wrong to
> > > > > > > > > provide non-x86 implementations under the
> > > > > > > > > wbinvd_on_all_cpus name. Instead an arch-agnostic
> > > > > > > > > helper which achieves the same effect could be
> > > > > > > > > created. Does Arm have such a concept?
> > > > > > > > 
> > > > > > > > I also understand Linus' email as saying we shouldn't
> > > > > > > > leak incoherent IO to other architectures, meaning any
> > > > > > > > remaining wbinvd()s should be x86 only.
> > > > > > > 
> > > > > > > The last part is completely obvious since it is an x86
> > > > > > > instruction name.
> > > > > > 
> > > > > > Yeah, I meant the function implementing wbinvd() semantics.
> > > > > > 
> > > > > > > 
> > > > > > > But I think we can't pick a solution until we know how the
> > > > > > > concept maps to Arm, and that will also include seeing how
> > > > > > > drm_clflush_sg for Arm would look. Is there a range-based
> > > > > > > solution, or just a big hammer there? If the latter, then
> > > > > > > it is no good to churn all these reverts; instead an
> > > > > > > arch-agnostic wrapper, with a generic name, would be the
> > > > > > > way to go.
> > > > > > 
> > > > > > But my impression was that ARM would not need the
> > > > > > range-based interface either, because ARM is only for
> > > > > > discrete and with discrete we're always coherent.
> > > > > 
> > > > > Not sure what you mean here - what about flushing system
> > > > > memory objects on discrete? Those still need flushing on
> > > > > paths like suspend which this series touches. Am I missing
> > > > > something?
> > > > 
> > > > System bos on discrete should always have
> > > > 
> > > > I915_BO_CACHE_COHERENT_FOR_READ |
> > > > I915_BO_CACHE_COHERENT_FOR_WRITE
> > > > 
> > > > either by the gpu being fully cache coherent, or by us mapping
> > > > system memory write-combined. Hence no need for cache clflushes
> > > > or wbinvd() for incoherent IO.
> > > 
> > > Hmm so you are talking about the shmem ttm backend. It ends up
> > > depending on the result of i915_ttm_cache_level, yes? It cannot
> > > end up with I915_CACHE_NONE from that function?
> > 
> > If the object is allocated with allowable placement in either LMEM
> > or SYSTEM, and it ends up in system, it gets allocated with
> > I915_CACHE_NONE, but then the shmem ttm backend isn't used; TTM's
> > wc pools are, and the object should *always* be mapped wc. Even in
> > system.
> 
> I am not familiar with either the TTM backend or the wc pools, so
> maybe this is a misplaced question - if obj->cache_level can be set
> to none, and obj->cache_coherency to zero, then during the object's
> lifetime the helpers which consult those fields (like
> i915_gem_cpu_write_needs_clflush, __start_cpu_write, etc.) are giving
> out incorrect answers? That is, it is irrelevant that they would say
> flushes are required, since in actuality those objects can never,
> from anywhere, be mapped other than WC, so flushes aren't actually
> required?

If we map anything other than WC somewhere in these situations, that is
a bug needing a fix. Some of the helpers you mention might still flag
that a clflush is needed; in that case that's an oversight that also
needs fixing.
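
For illustration, the kind of check being discussed follows roughly the
pattern below. This is a simplified sketch only - the bo_sketch struct
and the BO_CACHE_COHERENT_* names are stand-ins for the real i915
object and flags, not the actual implementation:

#include <linux/types.h>	/* bool */

#define BO_CACHE_COHERENT_FOR_READ	(1u << 0)
#define BO_CACHE_COHERENT_FOR_WRITE	(1u << 1)

struct bo_sketch {
	unsigned int cache_coherency;	/* bitmask of the flags above */
	bool always_mapped_wc;		/* only ever mapped write-combined */
};

/* Decide whether a CPU write must be followed by a cache flush. */
static bool cpu_write_needs_flush(const struct bo_sketch *bo)
{
	/* WC mappings bypass the CPU cache: nothing to flush. */
	if (bo->always_mapped_wc)
		return false;

	/* The GPU snoops CPU writes: no manual maintenance needed. */
	if (bo->cache_coherency & BO_CACHE_COHERENT_FOR_WRITE)
		return false;

	/*
	 * Otherwise flag a flush. As noted above, for objects that in
	 * practice can only be mapped WC this answer is overly
	 * conservative.
	 */
	return true;
}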

> 
> > > I also found in i915_drm.h:
> > > 
> > >           * As caching mode when specifying
> > >           * `I915_MMAP_OFFSET_FIXED`, WC or WB will be used,
> > >           * depending on the object placement on creation. WB will
> > >           * be used when the object can only exist in system
> > >           * memory, WC otherwise.
> > > 
> > > If what you say is true, that on discrete it is _always_ WC, then
> > > that needs updating as well.
> > 
> > If an object is allocated as system only, then it is mapped WB, and
> > we're relying on the gpu being cache coherent to avoid clflushes.
> > Same is actually currently true if the object happens to be
> > accessed by the cpu while evicted. Might need an update for that.
> 
> Hmm okay, I think I actually misunderstood something here. The reason
> for the difference between a smem+lmem object which happens to be in
> smem and a smem-only object is eluding me.
> 
> > > > 
> > > > That's adhering to Linus'
> > > > 
> > > > "And I sincerely hope to the gods that no cache-incoherent i915
> > > > mess ever makes it out of the x86 world. Incoherent IO was
> > > > always a historical mistake and should never ever happen again,
> > > > so we should not spread that horrific pattern around."
> > > 
> > > Sure, but I was not talking about IO - just the CPU side access
> > > to CPU side objects.
> > 
> > OK, I was under the impression that clflushes() and wbinvd()s in
> > i915 were only ever used to make data visible to non-snooping GPUs.
> > 
> > Do you mean that there are other uses as well? Agreed, the wb cache
> > flush on suspend only if the gpu is !I915_BO_CACHE_COHERENT_FOR_READ
> > looks to not fit this pattern completely.
> 
> Don't know, I was first trying to understand the handling of
> obj->cache_coherent as discussed in the first quote block: are the
> flags consistently set, and how will the Arm low-level code look?
> 
> > Otherwise, for architectures where memory isn't always fully
> > coherent with the cpu cache, I'd expect them to use the apis in
> > asm/cacheflush.h, like flush_cache_range() and similar, which are
> > nops on x86.
> 
> Hm, do you know why those are no-ops? Like, why wouldn't they map to
> clflush?

I think it mostly boils down to the PIPT caches on x86. Everything is
assumed to be coherent, whereas some architectures keep different cache
entries for different virtual addresses even if the physical page is
the same...

clflushes and wbinvds on x86 are for odd arch-specific situations
where, for example, we change the caching attributes of the linear
kernel map.
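
To make the contrast concrete, here is a hedged sketch; the wrapper
itself is hypothetical, while clflush_cache_range() is the existing x86
helper and flush_cache_vmap() is one of the asm/cacheflush.h hooks that
compile away on x86:

#include <asm/cacheflush.h>

static void sketch_flush_cpu_cache_range(void *addr, unsigned int len)
{
#ifdef CONFIG_X86
	/*
	 * x86: explicit clflush-based maintenance, needed only for the
	 * odd cases above, e.g. after changing the caching attributes
	 * of the kernel's linear map.
	 */
	clflush_cache_range(addr, len);
#else
	/*
	 * Elsewhere: defer to the architecture's cache-maintenance
	 * hooks. On x86 these are no-ops because the caches are
	 * coherent and physically tagged.
	 */
	flush_cache_vmap((unsigned long)addr, (unsigned long)addr + len);
#endif
}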

/Thomas


> 
> Regards,
> 
> Tvrtko




