* [PATCH] drm/i915: Combine cleanup_status_page()
From: Chris Wilson @ 2018-02-01 8:36 UTC (permalink / raw)
To: intel-gfx
Pull the physical status page cleanup into a common
cleanup_status_page() for caller simplicity.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/intel_engine_cs.c | 23 +++++++----------------
1 file changed, 7 insertions(+), 16 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
index 7eebfbb95e89..a3ad6925abaa 100644
--- a/drivers/gpu/drm/i915/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/intel_engine_cs.c
@@ -488,21 +488,15 @@ static void intel_engine_cleanup_scratch(struct intel_engine_cs *engine)
i915_vma_unpin_and_release(&engine->scratch);
}
-static void cleanup_phys_status_page(struct intel_engine_cs *engine)
-{
- struct drm_i915_private *dev_priv = engine->i915;
-
- if (!dev_priv->status_page_dmah)
- return;
-
- drm_pci_free(&dev_priv->drm, dev_priv->status_page_dmah);
- engine->status_page.page_addr = NULL;
-}
-
static void cleanup_status_page(struct intel_engine_cs *engine)
{
- struct i915_vma *vma;
struct drm_i915_gem_object *obj;
+ struct drm_dma_handle *dmah;
+ struct i915_vma *vma;
+
+ dmah = fetch_and_zero(&engine->i915->status_page_dmah);
+ if (dmah)
+ drm_pci_free(&engine->i915->drm, dmah);
vma = fetch_and_zero(&engine->status_page.vma);
if (!vma)
@@ -674,10 +668,7 @@ void intel_engine_cleanup_common(struct intel_engine_cs *engine)
{
intel_engine_cleanup_scratch(engine);
- if (HWS_NEEDS_PHYSICAL(engine->i915))
- cleanup_phys_status_page(engine);
- else
- cleanup_status_page(engine);
+ cleanup_status_page(engine);
intel_engine_fini_breadcrumbs(engine);
intel_engine_cleanup_cmd_parser(engine);
--
2.15.1
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
* Re: [PATCH] drm/i915: Combine cleanup_status_page()
From: Chris Wilson @ 2018-02-01 8:40 UTC (permalink / raw)
To: intel-gfx
Quoting Chris Wilson (2018-02-01 08:36:34)
> Pull the physical status page cleanup into a common
> cleanup_status_page() for caller simplicity.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
> drivers/gpu/drm/i915/intel_engine_cs.c | 23 +++++++----------------
> 1 file changed, 7 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
> index 7eebfbb95e89..a3ad6925abaa 100644
> --- a/drivers/gpu/drm/i915/intel_engine_cs.c
> +++ b/drivers/gpu/drm/i915/intel_engine_cs.c
> @@ -488,21 +488,15 @@ static void intel_engine_cleanup_scratch(struct intel_engine_cs *engine)
> i915_vma_unpin_and_release(&engine->scratch);
> }
>
> -static void cleanup_phys_status_page(struct intel_engine_cs *engine)
> -{
> - struct drm_i915_private *dev_priv = engine->i915;
> -
> - if (!dev_priv->status_page_dmah)
> - return;
> -
> - drm_pci_free(&dev_priv->drm, dev_priv->status_page_dmah);
> - engine->status_page.page_addr = NULL;
> -}
> -
> static void cleanup_status_page(struct intel_engine_cs *engine)
> {
> - struct i915_vma *vma;
> struct drm_i915_gem_object *obj;
> + struct drm_dma_handle *dmah;
> + struct i915_vma *vma;
> +
> + dmah = fetch_and_zero(&engine->i915->status_page_dmah);
> + if (dmah)
> + drm_pci_free(&engine->i915->drm, dmah);
>
> vma = fetch_and_zero(&engine->status_page.vma);
> if (!vma)
> @@ -674,10 +668,7 @@ void intel_engine_cleanup_common(struct intel_engine_cs *engine)
> {
> intel_engine_cleanup_scratch(engine);
>
> - if (HWS_NEEDS_PHYSICAL(engine->i915))
> - cleanup_phys_status_page(engine);
> - else
> - cleanup_status_page(engine);
> + cleanup_status_page(engine);
Should do the corresponding side for alloc as well.
-Chris
* ✓ Fi.CI.BAT: success for drm/i915: Combine cleanup_status_page()
From: Patchwork @ 2018-02-01 9:21 UTC (permalink / raw)
To: Chris Wilson; +Cc: intel-gfx
== Series Details ==
Series: drm/i915: Combine cleanup_status_page()
URL : https://patchwork.freedesktop.org/series/37468/
State : success
== Summary ==
Series 37468v1 drm/i915: Combine cleanup_status_page()
https://patchwork.freedesktop.org/api/1.0/series/37468/revisions/1/mbox/
Test debugfs_test:
Subgroup read_all_entries:
dmesg-fail -> DMESG-WARN (fi-elk-e7500) fdo#103989
Test gem_mmap_gtt:
Subgroup basic-small-bo-tiledx:
fail -> PASS (fi-gdg-551) fdo#102575
fdo#103989 https://bugs.freedesktop.org/show_bug.cgi?id=103989
fdo#102575 https://bugs.freedesktop.org/show_bug.cgi?id=102575
fi-bdw-5557u total:288 pass:267 dwarn:0 dfail:0 fail:0 skip:21 time:419s
fi-bdw-gvtdvm total:288 pass:264 dwarn:0 dfail:0 fail:0 skip:24 time:422s
fi-blb-e6850 total:288 pass:223 dwarn:1 dfail:0 fail:0 skip:64 time:370s
fi-bsw-n3050 total:288 pass:242 dwarn:0 dfail:0 fail:0 skip:46 time:482s
fi-bwr-2160 total:288 pass:183 dwarn:0 dfail:0 fail:0 skip:105 time:281s
fi-bxt-dsi total:288 pass:258 dwarn:0 dfail:0 fail:0 skip:30 time:484s
fi-bxt-j4205 total:288 pass:259 dwarn:0 dfail:0 fail:0 skip:29 time:480s
fi-byt-j1900 total:288 pass:253 dwarn:0 dfail:0 fail:0 skip:35 time:462s
fi-byt-n2820 total:288 pass:249 dwarn:0 dfail:0 fail:0 skip:39 time:453s
fi-cfl-s2 total:288 pass:262 dwarn:0 dfail:0 fail:0 skip:26 time:572s
fi-elk-e7500 total:224 pass:168 dwarn:10 dfail:0 fail:0 skip:45
fi-gdg-551 total:288 pass:180 dwarn:0 dfail:0 fail:0 skip:108 time:278s
fi-glk-1 total:288 pass:260 dwarn:0 dfail:0 fail:0 skip:28 time:511s
fi-hsw-4770 total:288 pass:261 dwarn:0 dfail:0 fail:0 skip:27 time:388s
fi-hsw-4770r total:288 pass:261 dwarn:0 dfail:0 fail:0 skip:27 time:402s
fi-ilk-650 total:288 pass:228 dwarn:0 dfail:0 fail:0 skip:60 time:417s
fi-ivb-3520m total:288 pass:259 dwarn:0 dfail:0 fail:0 skip:29 time:452s
fi-ivb-3770 total:288 pass:255 dwarn:0 dfail:0 fail:0 skip:33 time:411s
fi-kbl-7500u total:288 pass:263 dwarn:1 dfail:0 fail:0 skip:24 time:456s
fi-kbl-7560u total:288 pass:269 dwarn:0 dfail:0 fail:0 skip:19 time:495s
fi-kbl-7567u total:288 pass:268 dwarn:0 dfail:0 fail:0 skip:20 time:455s
fi-kbl-r total:288 pass:261 dwarn:0 dfail:0 fail:0 skip:27 time:499s
fi-pnv-d510 total:288 pass:222 dwarn:1 dfail:0 fail:0 skip:65 time:575s
fi-skl-6260u total:288 pass:268 dwarn:0 dfail:0 fail:0 skip:20 time:427s
fi-skl-6600u total:288 pass:261 dwarn:0 dfail:0 fail:0 skip:27 time:503s
fi-skl-6700hq total:288 pass:262 dwarn:0 dfail:0 fail:0 skip:26 time:523s
fi-skl-6700k2 total:288 pass:264 dwarn:0 dfail:0 fail:0 skip:24 time:483s
fi-skl-6770hq total:288 pass:268 dwarn:0 dfail:0 fail:0 skip:20 time:482s
fi-skl-guc total:288 pass:260 dwarn:0 dfail:0 fail:0 skip:28 time:418s
fi-skl-gvtdvm total:288 pass:265 dwarn:0 dfail:0 fail:0 skip:23 time:430s
fi-snb-2520m total:288 pass:248 dwarn:0 dfail:0 fail:0 skip:40 time:523s
fi-snb-2600 total:288 pass:248 dwarn:0 dfail:0 fail:0 skip:40 time:393s
Blacklisted hosts:
fi-glk-dsi total:106 pass:93 dwarn:0 dfail:0 fail:0 skip:12
efb4e2e6223beec49c6e4086a5115f3690358314 drm-tip: 2018y-02m-01d-07h-23m-44s UTC integration manifest
bd1086d49106 drm/i915: Combine cleanup_status_page()
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_7847/issues.html
* Re: [PATCH] drm/i915: Combine cleanup_status_page()
From: Tvrtko Ursulin @ 2018-02-01 9:42 UTC (permalink / raw)
To: Chris Wilson, intel-gfx
On 01/02/2018 08:36, Chris Wilson wrote:
> Pull the physical status page cleanup into a common
> cleanup_status_page() for caller simplicity.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
> drivers/gpu/drm/i915/intel_engine_cs.c | 23 +++++++----------------
> 1 file changed, 7 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
> index 7eebfbb95e89..a3ad6925abaa 100644
> --- a/drivers/gpu/drm/i915/intel_engine_cs.c
> +++ b/drivers/gpu/drm/i915/intel_engine_cs.c
> @@ -488,21 +488,15 @@ static void intel_engine_cleanup_scratch(struct intel_engine_cs *engine)
> i915_vma_unpin_and_release(&engine->scratch);
> }
>
> -static void cleanup_phys_status_page(struct intel_engine_cs *engine)
> -{
> - struct drm_i915_private *dev_priv = engine->i915;
> -
> - if (!dev_priv->status_page_dmah)
> - return;
> -
> - drm_pci_free(&dev_priv->drm, dev_priv->status_page_dmah);
> - engine->status_page.page_addr = NULL;
> -}
> -
> static void cleanup_status_page(struct intel_engine_cs *engine)
> {
> - struct i915_vma *vma;
> struct drm_i915_gem_object *obj;
> + struct drm_dma_handle *dmah;
> + struct i915_vma *vma;
> +
> + dmah = fetch_and_zero(&engine->i915->status_page_dmah);
> + if (dmah)
> + drm_pci_free(&engine->i915->drm, dmah);
>
> vma = fetch_and_zero(&engine->status_page.vma);
> if (!vma)
> @@ -674,10 +668,7 @@ void intel_engine_cleanup_common(struct intel_engine_cs *engine)
> {
> intel_engine_cleanup_scratch(engine);
>
> - if (HWS_NEEDS_PHYSICAL(engine->i915))
> - cleanup_phys_status_page(engine);
> - else
> - cleanup_status_page(engine);
> + cleanup_status_page(engine);
>
> intel_engine_fini_breadcrumbs(engine);
> intel_engine_cleanup_cmd_parser(engine);
>
The immediate question that arises is why the init_status_page paths don't
get the same treatment; but on a deeper look, why not move the phys
paths out of common code and into intel_ringbuffer.c?
Although, while looking at the same file, I spotted that
intel_engine_init_common() is documented as "Initializes @engine@
structure members shared between legacy and execlists..", and yet
contains "if (HAS_LOGICAL_RING_PREEMPTION(..".
So I'd say in general we deviated, again, from the spirit of the earlier
cleanups. :(
Regards,
Tvrtko
* Re: [PATCH] drm/i915: Combine cleanup_status_page()
From: Chris Wilson @ 2018-02-01 9:49 UTC (permalink / raw)
To: Tvrtko Ursulin, intel-gfx
Quoting Tvrtko Ursulin (2018-02-01 09:42:02)
>
> On 01/02/2018 08:36, Chris Wilson wrote:
> > Pull the physical status page cleanup into a common
> > cleanup_status_page() for caller simplicity.
> >
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > ---
> > drivers/gpu/drm/i915/intel_engine_cs.c | 23 +++++++----------------
> > 1 file changed, 7 insertions(+), 16 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
> > index 7eebfbb95e89..a3ad6925abaa 100644
> > --- a/drivers/gpu/drm/i915/intel_engine_cs.c
> > +++ b/drivers/gpu/drm/i915/intel_engine_cs.c
> > @@ -488,21 +488,15 @@ static void intel_engine_cleanup_scratch(struct intel_engine_cs *engine)
> > i915_vma_unpin_and_release(&engine->scratch);
> > }
> >
> > -static void cleanup_phys_status_page(struct intel_engine_cs *engine)
> > -{
> > - struct drm_i915_private *dev_priv = engine->i915;
> > -
> > - if (!dev_priv->status_page_dmah)
> > - return;
> > -
> > - drm_pci_free(&dev_priv->drm, dev_priv->status_page_dmah);
> > - engine->status_page.page_addr = NULL;
> > -}
> > -
> > static void cleanup_status_page(struct intel_engine_cs *engine)
> > {
> > - struct i915_vma *vma;
> > struct drm_i915_gem_object *obj;
> > + struct drm_dma_handle *dmah;
> > + struct i915_vma *vma;
> > +
> > + dmah = fetch_and_zero(&engine->i915->status_page_dmah);
> > + if (dmah)
> > + drm_pci_free(&engine->i915->drm, dmah);
> >
> > vma = fetch_and_zero(&engine->status_page.vma);
> > if (!vma)
> > @@ -674,10 +668,7 @@ void intel_engine_cleanup_common(struct intel_engine_cs *engine)
> > {
> > intel_engine_cleanup_scratch(engine);
> >
> > - if (HWS_NEEDS_PHYSICAL(engine->i915))
> > - cleanup_phys_status_page(engine);
> > - else
> > - cleanup_status_page(engine);
> > + cleanup_status_page(engine);
> >
> > intel_engine_fini_breadcrumbs(engine);
> > intel_engine_cleanup_cmd_parser(engine);
> >
>
> Immediate question arises why not the same treatment for
> init_status_page paths, but on a deeper look, why not move the phys
> paths out of common and into intel_ringbuffer.c?
My feeling is that it's part of the engine setup, especially as we keep
a dedicated HWS for execlist. Overall I'm not sold on this, so if you
have something to make next year easier, or that can trim down
yesteryears, go for it.
I just happened to be looking at whether I could easily move the
init_workarounds around to skip the "rc0 setup X workarounds" spam.
Then decided that's better solved by poking Oscar into reviving his
workaround cleanup.
> Although I spotted on the same page when looking at it,
> intel_engine_init_common "Initializes @engine@ structure members shared
> between legacy and execlists..", and in it, "if
> (HAS_LOGICAL_RING_PREEMPTION(..".
>
> So I'd say in general we deviated, again, from the spirit of the earlier
> cleanups. :(
Yeah, the biggest mess at the moment is the multilayered engine cleanup.
-Chris
* ✓ Fi.CI.IGT: success for drm/i915: Combine cleanup_status_page()
From: Patchwork @ 2018-02-01 11:39 UTC (permalink / raw)
To: Chris Wilson; +Cc: intel-gfx
== Series Details ==
Series: drm/i915: Combine cleanup_status_page()
URL : https://patchwork.freedesktop.org/series/37468/
State : success
== Summary ==
Test perf:
Subgroup oa-exponents:
pass -> FAIL (shard-apl) fdo#102254
Test drv_selftest:
Subgroup live_gtt:
incomplete -> PASS (shard-apl) fdo#103927
Test gem_eio:
Subgroup in-flight-contexts:
fail -> PASS (shard-hsw) fdo#104676
Test kms_flip:
Subgroup 2x-plain-flip-ts-check-interruptible:
pass -> FAIL (shard-hsw) fdo#100368
fdo#102254 https://bugs.freedesktop.org/show_bug.cgi?id=102254
fdo#103927 https://bugs.freedesktop.org/show_bug.cgi?id=103927
fdo#104676 https://bugs.freedesktop.org/show_bug.cgi?id=104676
fdo#100368 https://bugs.freedesktop.org/show_bug.cgi?id=100368
shard-apl total:2838 pass:1751 dwarn:1 dfail:0 fail:22 skip:1064 time:12516s
shard-hsw total:2838 pass:1734 dwarn:1 dfail:0 fail:12 skip:1090 time:11856s
shard-snb total:2838 pass:1330 dwarn:1 dfail:0 fail:10 skip:1497 time:6604s
Blacklisted hosts:
shard-kbl total:2820 pass:1853 dwarn:1 dfail:0 fail:22 skip:943 time:9306s
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_7847/shards.html