intel-gfx.lists.freedesktop.org archive mirror
* [Intel-gfx] [PATCH 1/3] drm/i915: Fix the sgt.pfn sanity check
@ 2021-01-18 14:17 Matthew Auld
  2021-01-18 14:17 ` [Intel-gfx] [PATCH 2/3] drm/i915: prefer FORCE_WC for the blitter routines Matthew Auld
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Matthew Auld @ 2021-01-18 14:17 UTC (permalink / raw)
  To: intel-gfx; +Cc: Kui Wen

From: Kui Wen <kui.wen@intel.com>

For the device local-memory case, sgt.pfn will always be zero, since we
use sgt.dma instead. Also, for device local-memory it is perfectly valid
for the DMA address to start at zero anyway, so there is no need to add
a new check for that either.

Signed-off-by: Kui Wen <kui.wen@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_mm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_mm.c b/drivers/gpu/drm/i915/i915_mm.c
index 43039dc8c607..dcf6b3e5bfdf 100644
--- a/drivers/gpu/drm/i915/i915_mm.c
+++ b/drivers/gpu/drm/i915/i915_mm.c
@@ -62,7 +62,7 @@ static int remap_sg(pte_t *pte, unsigned long addr, void *data)
 {
 	struct remap_pfn *r = data;
 
-	if (GEM_WARN_ON(!r->sgt.pfn))
+	if (GEM_WARN_ON(!use_dma(r->iobase) && !r->sgt.pfn))
 		return -EINVAL;
 
 	/* Special PTE are not associated with any struct page */
-- 
2.26.2

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


* [Intel-gfx] [PATCH 2/3] drm/i915: prefer FORCE_WC for the blitter routines
  2021-01-18 14:17 [Intel-gfx] [PATCH 1/3] drm/i915: Fix the sgt.pfn sanity check Matthew Auld
@ 2021-01-18 14:17 ` Matthew Auld
  2021-01-18 14:44   ` Chris Wilson
  2021-01-18 14:54   ` Chris Wilson
  2021-01-18 14:17 ` [Intel-gfx] [PATCH 3/3] drm/i915/error: Fix object page offset within a region Matthew Auld
  2021-01-18 18:22 ` [Intel-gfx] ✗ Fi.CI.BUILD: failure for series starting with [1/3] drm/i915: Fix the sgt.pfn sanity check (rev2) Patchwork
  2 siblings, 2 replies; 9+ messages in thread
From: Matthew Auld @ 2021-01-18 14:17 UTC (permalink / raw)
  To: intel-gfx

From: CQ Tang <cq.tang@intel.com>

The pool is shared, so we might find a pool object that already has a
mapping, but one mapped with a different underlying type, which will
result in -EBUSY.

Signed-off-by: CQ Tang <cq.tang@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object_blt.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
index 10cac9fac79b..c6db745900b3 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
@@ -55,7 +55,7 @@ struct i915_vma *intel_emit_vma_fill_blt(struct intel_context *ce,
 	if (unlikely(err))
 		goto out_put;
 
-	cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_WC);
+	cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_FORCE_WC);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
 		goto out_unpin;
@@ -277,7 +277,7 @@ struct i915_vma *intel_emit_vma_copy_blt(struct intel_context *ce,
 	if (unlikely(err))
 		goto out_put;
 
-	cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_WC);
+	cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_FORCE_WC);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
 		goto out_unpin;
-- 
2.26.2


* [Intel-gfx] [PATCH 3/3] drm/i915/error: Fix object page offset within a region
  2021-01-18 14:17 [Intel-gfx] [PATCH 1/3] drm/i915: Fix the sgt.pfn sanity check Matthew Auld
  2021-01-18 14:17 ` [Intel-gfx] [PATCH 2/3] drm/i915: prefer FORCE_WC for the blitter routines Matthew Auld
@ 2021-01-18 14:17 ` Matthew Auld
  2021-01-18 14:56   ` Chris Wilson
  2021-01-18 18:22 ` [Intel-gfx] ✗ Fi.CI.BUILD: failure for series starting with [1/3] drm/i915: Fix the sgt.pfn sanity check (rev2) Patchwork
  2 siblings, 1 reply; 9+ messages in thread
From: Matthew Auld @ 2021-01-18 14:17 UTC (permalink / raw)
  To: intel-gfx

From: CQ Tang <cq.tang@intel.com>

io_mapping_map_wc() expects the offset to be relative to the iomapping
base address. Currently we just pass in the physical address of the
page, which only works if region.start is zero.

Signed-off-by: CQ Tang <cq.tang@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_gpu_error.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 8b163ee1b86d..f962693404b7 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1051,7 +1051,9 @@ i915_vma_coredump_create(const struct intel_gt *gt,
 		for_each_sgt_daddr(dma, iter, vma->pages) {
 			void __iomem *s;
 
-			s = io_mapping_map_wc(&mem->iomap, dma, PAGE_SIZE);
+			s = io_mapping_map_wc(&mem->iomap,
+					      dma - mem->region.start,
+					      PAGE_SIZE);
 			ret = compress_page(compress,
 					    (void __force *)s, dst,
 					    true);
-- 
2.26.2


* Re: [Intel-gfx] [PATCH 2/3] drm/i915: prefer FORCE_WC for the blitter routines
  2021-01-18 14:17 ` [Intel-gfx] [PATCH 2/3] drm/i915: prefer FORCE_WC for the blitter routines Matthew Auld
@ 2021-01-18 14:44   ` Chris Wilson
  2021-01-18 14:54   ` Chris Wilson
  1 sibling, 0 replies; 9+ messages in thread
From: Chris Wilson @ 2021-01-18 14:44 UTC (permalink / raw)
  To: Matthew Auld, intel-gfx

Quoting Matthew Auld (2021-01-18 14:17:31)
> From: CQ Tang <cq.tang@intel.com>

First patch hasn't arrived, so excuse this misplaced reply.

-	if (GEM_WARN_ON(!r->sgt.pfn))
+	if (GEM_WARN_ON(!use_dma(r->iobase) && !r->sgt.pfn))
 		return -EINVAL;

The better check would be:

	if (GEM_WARN_ON(!r->sgt.sgp))
		return -EINVAL;
-Chris

* Re: [Intel-gfx] [PATCH 2/3] drm/i915: prefer FORCE_WC for the blitter routines
  2021-01-18 14:17 ` [Intel-gfx] [PATCH 2/3] drm/i915: prefer FORCE_WC for the blitter routines Matthew Auld
  2021-01-18 14:44   ` Chris Wilson
@ 2021-01-18 14:54   ` Chris Wilson
  2021-01-18 15:55     ` Matthew Auld
  1 sibling, 1 reply; 9+ messages in thread
From: Chris Wilson @ 2021-01-18 14:54 UTC (permalink / raw)
  To: Matthew Auld, intel-gfx

Quoting Matthew Auld (2021-01-18 14:17:31)
> From: CQ Tang <cq.tang@intel.com>
> 
> The pool is shared, so we might find a pool object that already has a
> mapping, but one mapped with a different underlying type, which will
> result in -EBUSY.
> 
> Signed-off-by: CQ Tang <cq.tang@intel.com>
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> ---
>  drivers/gpu/drm/i915/gem/i915_gem_object_blt.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
> index 10cac9fac79b..c6db745900b3 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
> @@ -55,7 +55,7 @@ struct i915_vma *intel_emit_vma_fill_blt(struct intel_context *ce,
>         if (unlikely(err))
>                 goto out_put;
>  
> -       cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_WC);
> +       cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_FORCE_WC);
>         if (IS_ERR(cmd)) {
>                 err = PTR_ERR(cmd);
>                 goto out_unpin;
> @@ -277,7 +277,7 @@ struct i915_vma *intel_emit_vma_copy_blt(struct intel_context *ce,
>         if (unlikely(err))
>                 goto out_put;
>  
> -       cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_WC);
> +       cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_FORCE_WC);
>         if (IS_ERR(cmd)) {
>                 err = PTR_ERR(cmd);
>                 goto out_unpin;

FORCE is becoming meaningless.

In this case we pin the pages upon acquiring from the pool, which then
prevents us from changing the mapping type. The purpose of which was so
that we could cache the mapping between users, and here we are saying
that cache is made useless. The danger is that we are now thrashing the
cache, hurting ourselves with the vmap overhead.

Maybe we should move the mapping-type into the buffer-pool cache itself?
-Chris

* Re: [Intel-gfx] [PATCH 3/3] drm/i915/error: Fix object page offset within a region
  2021-01-18 14:17 ` [Intel-gfx] [PATCH 3/3] drm/i915/error: Fix object page offset within a region Matthew Auld
@ 2021-01-18 14:56   ` Chris Wilson
  0 siblings, 0 replies; 9+ messages in thread
From: Chris Wilson @ 2021-01-18 14:56 UTC (permalink / raw)
  To: Matthew Auld, intel-gfx

Quoting Matthew Auld (2021-01-18 14:17:32)
> From: CQ Tang <cq.tang@intel.com>
> 
> io_mapping_map_wc() expects the offset to be relative to the iomapping
> base address. Currently we just pass in the physical address of the
> page, which only works if region.start is zero.
> 
> Signed-off-by: CQ Tang <cq.tang@intel.com>
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> ---
>  drivers/gpu/drm/i915/i915_gpu_error.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
> index 8b163ee1b86d..f962693404b7 100644
> --- a/drivers/gpu/drm/i915/i915_gpu_error.c
> +++ b/drivers/gpu/drm/i915/i915_gpu_error.c
> @@ -1051,7 +1051,9 @@ i915_vma_coredump_create(const struct intel_gt *gt,
>                 for_each_sgt_daddr(dma, iter, vma->pages) {
>                         void __iomem *s;
>  
> -                       s = io_mapping_map_wc(&mem->iomap, dma, PAGE_SIZE);
> +                       s = io_mapping_map_wc(&mem->iomap,
> +                                             dma - mem->region.start,
> +                                             PAGE_SIZE);

From i915_gem_object_get_pages_buddy:

 	sg_dma_address(sg) = mem->region.start + offset;

Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-Chris

* Re: [Intel-gfx] [PATCH 2/3] drm/i915: prefer FORCE_WC for the blitter routines
  2021-01-18 14:54   ` Chris Wilson
@ 2021-01-18 15:55     ` Matthew Auld
  2021-01-18 16:02       ` Chris Wilson
  0 siblings, 1 reply; 9+ messages in thread
From: Matthew Auld @ 2021-01-18 15:55 UTC (permalink / raw)
  To: Chris Wilson; +Cc: Intel Graphics Development, Matthew Auld

On Mon, 18 Jan 2021 at 14:54, Chris Wilson <chris@chris-wilson.co.uk> wrote:
>
> Quoting Matthew Auld (2021-01-18 14:17:31)
> > From: CQ Tang <cq.tang@intel.com>
> >
> > The pool is shared, so we might find a pool object that already has a
> > mapping, but one mapped with a different underlying type, which will
> > result in -EBUSY.
> >
> > Signed-off-by: CQ Tang <cq.tang@intel.com>
> > Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> > ---
> >  drivers/gpu/drm/i915/gem/i915_gem_object_blt.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
> > index 10cac9fac79b..c6db745900b3 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
> > @@ -55,7 +55,7 @@ struct i915_vma *intel_emit_vma_fill_blt(struct intel_context *ce,
> >         if (unlikely(err))
> >                 goto out_put;
> >
> > -       cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_WC);
> > +       cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_FORCE_WC);
> >         if (IS_ERR(cmd)) {
> >                 err = PTR_ERR(cmd);
> >                 goto out_unpin;
> > @@ -277,7 +277,7 @@ struct i915_vma *intel_emit_vma_copy_blt(struct intel_context *ce,
> >         if (unlikely(err))
> >                 goto out_put;
> >
> > -       cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_WC);
> > +       cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_FORCE_WC);
> >         if (IS_ERR(cmd)) {
> >                 err = PTR_ERR(cmd);
> >                 goto out_unpin;
>
> FORCE is becoming meaningless.
>
> In this case we pin the pages upon acquiring from the pool, which then
> prevents us from changing the mapping type. The purpose of which was so
> that we could cache the mapping between users, and here we are saying
> that cache is made useless. The danger is that we are now thrashing the
> cache, hurting ourselves with the vmap overhead.
>
> Maybe we should move the mapping-type into the buffer-pool cache itself?

Yeah, makes sense I think. Maybe something simple like:

--- a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.c
@@ -145,7 +145,8 @@ static void pool_retire(struct i915_active *ref)
 }

 static struct intel_gt_buffer_pool_node *
-node_create(struct intel_gt_buffer_pool *pool, size_t sz)
+node_create(struct intel_gt_buffer_pool *pool, size_t sz,
+           enum i915_map_type type)
 {
        struct intel_gt *gt = to_gt(pool);
        struct intel_gt_buffer_pool_node *node;
@@ -169,12 +170,14 @@ node_create(struct intel_gt_buffer_pool *pool, size_t sz)

        i915_gem_object_set_readonly(obj);

+       node->type = type;
        node->obj = obj;
        return node;
 }

 struct intel_gt_buffer_pool_node *
-intel_gt_get_buffer_pool(struct intel_gt *gt, size_t size)
+intel_gt_get_buffer_pool(struct intel_gt *gt, size_t size,
+                        enum i915_map_type type)
 {
        struct intel_gt_buffer_pool *pool = &gt->buffer_pool;
        struct intel_gt_buffer_pool_node *node;
@@ -191,6 +194,9 @@ intel_gt_get_buffer_pool(struct intel_gt *gt, size_t size)
                if (node->obj->base.size < size)
                        continue;

+               if (node->type != type)
+                       continue;
+
                age = READ_ONCE(node->age);
                if (!age)
                        continue;
@@ -205,7 +211,7 @@ intel_gt_get_buffer_pool(struct intel_gt *gt, size_t size)
        rcu_read_unlock();

        if (&node->link == list) {
-               node = node_create(pool, size);
+               node = node_create(pool, size, type);
                if (IS_ERR(node))
                        return node;
        }
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h
b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h
index 42cbac003e8a..6068f8f1762e 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h
@@ -15,7 +15,8 @@ struct intel_gt;
 struct i915_request;

 struct intel_gt_buffer_pool_node *
-intel_gt_get_buffer_pool(struct intel_gt *gt, size_t size);
+intel_gt_get_buffer_pool(struct intel_gt *gt, size_t size,
+                        enum i915_map_type type);

 static inline int
 intel_gt_buffer_pool_mark_active(struct intel_gt_buffer_pool_node *node,
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h
b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h
index bcf1658c9633..e8f7dba36b76 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h
@@ -31,6 +31,7 @@ struct intel_gt_buffer_pool_node {
                struct rcu_head rcu;
        };
        unsigned long age;
+       enum i915_map_type type;
 };

Or maybe it should be split over multiple lists or something, one for each type?

> -Chris

* Re: [Intel-gfx] [PATCH 2/3] drm/i915: prefer FORCE_WC for the blitter routines
  2021-01-18 15:55     ` Matthew Auld
@ 2021-01-18 16:02       ` Chris Wilson
  0 siblings, 0 replies; 9+ messages in thread
From: Chris Wilson @ 2021-01-18 16:02 UTC (permalink / raw)
  To: Matthew Auld; +Cc: Intel Graphics Development, Matthew Auld

Quoting Matthew Auld (2021-01-18 15:55:31)
> On Mon, 18 Jan 2021 at 14:54, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> >
> > Quoting Matthew Auld (2021-01-18 14:17:31)
> > > From: CQ Tang <cq.tang@intel.com>
> > >
> > > The pool is shared, so we might find a pool object that already has a
> > > mapping, but one mapped with a different underlying type, which will
> > > result in -EBUSY.
> > >
> > > Signed-off-by: CQ Tang <cq.tang@intel.com>
> > > Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> > > ---
> > >  drivers/gpu/drm/i915/gem/i915_gem_object_blt.c | 4 ++--
> > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
> > > index 10cac9fac79b..c6db745900b3 100644
> > > --- a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
> > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
> > > @@ -55,7 +55,7 @@ struct i915_vma *intel_emit_vma_fill_blt(struct intel_context *ce,
> > >         if (unlikely(err))
> > >                 goto out_put;
> > >
> > > -       cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_WC);
> > > +       cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_FORCE_WC);
> > >         if (IS_ERR(cmd)) {
> > >                 err = PTR_ERR(cmd);
> > >                 goto out_unpin;
> > > @@ -277,7 +277,7 @@ struct i915_vma *intel_emit_vma_copy_blt(struct intel_context *ce,
> > >         if (unlikely(err))
> > >                 goto out_put;
> > >
> > > -       cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_WC);
> > > +       cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_FORCE_WC);
> > >         if (IS_ERR(cmd)) {
> > >                 err = PTR_ERR(cmd);
> > >                 goto out_unpin;
> >
> > FORCE is becoming meaningless.
> >
> > In this case we pin the pages upon acquiring from the pool, which then
> > prevents us from changing the mapping type. The purpose of which was so
> > that we could cache the mapping between users, and here we are saying
> > that cache is made useless. The danger is that we are now thrashing the
> > cache, hurting ourselves with the vmap overhead.
> >
> > Maybe we should move the mapping-type into the buffer-pool cache itself?
> 
> Yeah, makes sense I think. Maybe something simple like:
> 
> --- a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.c
> +++ b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.c
> @@ -145,7 +145,8 @@ static void pool_retire(struct i915_active *ref)
>  }
> 
>  static struct intel_gt_buffer_pool_node *
> -node_create(struct intel_gt_buffer_pool *pool, size_t sz)
> +node_create(struct intel_gt_buffer_pool *pool, size_t sz,
> +           enum i915_map_type type)
>  {
>         struct intel_gt *gt = to_gt(pool);
>         struct intel_gt_buffer_pool_node *node;
> @@ -169,12 +170,14 @@ node_create(struct intel_gt_buffer_pool *pool, size_t sz)
> 
>         i915_gem_object_set_readonly(obj);
> 
> +       node->type = type;
>         node->obj = obj;
>         return node;
>  }
> 
>  struct intel_gt_buffer_pool_node *
> -intel_gt_get_buffer_pool(struct intel_gt *gt, size_t size)
> +intel_gt_get_buffer_pool(struct intel_gt *gt, size_t size,
> +                        enum i915_map_type type)
>  {
>         struct intel_gt_buffer_pool *pool = &gt->buffer_pool;
>         struct intel_gt_buffer_pool_node *node;
> @@ -191,6 +194,9 @@ intel_gt_get_buffer_pool(struct intel_gt *gt, size_t size)
>                 if (node->obj->base.size < size)
>                         continue;
> 
> +               if (node->type != type)
> +                       continue;
> +
>                 age = READ_ONCE(node->age);
>                 if (!age)
>                         continue;
> @@ -205,7 +211,7 @@ intel_gt_get_buffer_pool(struct intel_gt *gt, size_t size)
>         rcu_read_unlock();
> 
>         if (&node->link == list) {
> -               node = node_create(pool, size);
> +               node = node_create(pool, size, type);
>                 if (IS_ERR(node))
>                         return node;
>         }
> diff --git a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h
> b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h
> index 42cbac003e8a..6068f8f1762e 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h
> +++ b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h
> @@ -15,7 +15,8 @@ struct intel_gt;
>  struct i915_request;
> 
>  struct intel_gt_buffer_pool_node *
> -intel_gt_get_buffer_pool(struct intel_gt *gt, size_t size);
> +intel_gt_get_buffer_pool(struct intel_gt *gt, size_t size,
> +                        enum i915_map_type type);
> 
>  static inline int
>  intel_gt_buffer_pool_mark_active(struct intel_gt_buffer_pool_node *node,
> diff --git a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h
> b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h
> index bcf1658c9633..e8f7dba36b76 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h
> +++ b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h
> @@ -31,6 +31,7 @@ struct intel_gt_buffer_pool_node {
>                 struct rcu_head rcu;
>         };
>         unsigned long age;
> +       enum i915_map_type type;
>  };
> 
> Or maybe it should be split over multiple lists or something, one for each type?

This looks good for a first pass. We can split the buckets by type later
if we feel so inclined. At the moment, I hope our lists are short enough
that we only have to skip one or two before finding a match.
-Chris

* [Intel-gfx] ✗ Fi.CI.BUILD: failure for series starting with [1/3] drm/i915: Fix the sgt.pfn sanity check (rev2)
  2021-01-18 14:17 [Intel-gfx] [PATCH 1/3] drm/i915: Fix the sgt.pfn sanity check Matthew Auld
  2021-01-18 14:17 ` [Intel-gfx] [PATCH 2/3] drm/i915: prefer FORCE_WC for the blitter routines Matthew Auld
  2021-01-18 14:17 ` [Intel-gfx] [PATCH 3/3] drm/i915/error: Fix object page offset within a region Matthew Auld
@ 2021-01-18 18:22 ` Patchwork
  2 siblings, 0 replies; 9+ messages in thread
From: Patchwork @ 2021-01-18 18:22 UTC (permalink / raw)
  To: Matthew Auld; +Cc: intel-gfx

== Series Details ==

Series: series starting with [1/3] drm/i915: Fix the sgt.pfn sanity check (rev2)
URL   : https://patchwork.freedesktop.org/series/85997/
State : failure

== Summary ==

Applying: drm/i915: Fix the sgt.pfn sanity check
Applying: drm/i915: prefer FORCE_WC for the blitter routines
error: git diff header lacks filename information when removing 1 leading pathname component (line 49)
error: could not build fake ancestor
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0002 drm/i915: prefer FORCE_WC for the blitter routines
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".


