* [Intel-gfx] [PATCH v3 0/4] Apply Wa_16018031267 / Wa_16018063123
@ 2023-10-23 7:41 Andrzej Hajda
2023-10-23 7:41 ` [Intel-gfx] [PATCH v3 1/4] drm/i915: Enable NULL PTE support for vm scratch Andrzej Hajda
` (4 more replies)
0 siblings, 5 replies; 17+ messages in thread
From: Andrzej Hajda @ 2023-10-23 7:41 UTC (permalink / raw)
To: intel-gfx; +Cc: Jonathan Cavitt, Andrzej Hajda, Chris Wilson, Nirmoy Das
Hi all,
This is the series from Jonathan:
[PATCH v12 0/4] Apply Wa_16018031267 / Wa_16018063123
taken over by me.
Changes in this version are described in the patches, in short:
v2:
- use real memory as WABB destination,
- address CI complaints - do not decrease vm.total,
- minor reordering.
v3:
- fixed typos,
- removed spare defs,
- added tags
Regards
Andrzej
Andrzej Hajda (1):
drm/i915: Reserve some kernel space per vm
Jonathan Cavitt (3):
drm/i915: Enable NULL PTE support for vm scratch
drm/i915: Add WABB blit for Wa_16018031267 / Wa_16018063123
drm/i915: Set copy engine arbitration for Wa_16018031267 /
Wa_16018063123
.../drm/i915/gem/selftests/i915_gem_context.c | 6 ++
drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 41 +++++++
drivers/gpu/drm/i915/gt/intel_engine_regs.h | 6 ++
drivers/gpu/drm/i915/gt/intel_gt.h | 4 +
drivers/gpu/drm/i915/gt/intel_gt_types.h | 2 +
drivers/gpu/drm/i915/gt/intel_gtt.h | 2 +
drivers/gpu/drm/i915/gt/intel_lrc.c | 100 +++++++++++++++++-
drivers/gpu/drm/i915/gt/intel_workarounds.c | 5 +
drivers/gpu/drm/i915/gt/selftest_lrc.c | 65 ++++++++----
drivers/gpu/drm/i915/i915_drv.h | 2 +
drivers/gpu/drm/i915/i915_pci.c | 2 +
drivers/gpu/drm/i915/intel_device_info.h | 1 +
12 files changed, 215 insertions(+), 21 deletions(-)
---
Andrzej Hajda (1):
drm/i915: Reserve some kernel space per vm
Jonathan Cavitt (3):
drm/i915: Enable NULL PTE support for vm scratch
drm/i915: Add WABB blit for Wa_16018031267 / Wa_16018063123
drm/i915: Set copy engine arbitration for Wa_16018031267 / Wa_16018063123
.../gpu/drm/i915/gem/selftests/i915_gem_context.c | 6 ++
drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 41 +++++++++
drivers/gpu/drm/i915/gt/intel_engine_regs.h | 6 ++
drivers/gpu/drm/i915/gt/intel_gt.h | 4 +
drivers/gpu/drm/i915/gt/intel_gtt.h | 2 +
drivers/gpu/drm/i915/gt/intel_lrc.c | 100 ++++++++++++++++++++-
drivers/gpu/drm/i915/gt/intel_workarounds.c | 5 ++
drivers/gpu/drm/i915/gt/selftest_lrc.c | 65 ++++++++++----
drivers/gpu/drm/i915/i915_drv.h | 2 +
drivers/gpu/drm/i915/i915_pci.c | 2 +
drivers/gpu/drm/i915/intel_device_info.h | 1 +
11 files changed, 213 insertions(+), 21 deletions(-)
---
base-commit: 201c8a7bd1f3f415920a2df4b8a8817e973f42fe
change-id: 20231020-wabb-bbe9324a69a8
Best regards,
--
Andrzej Hajda <andrzej.hajda@intel.com>
^ permalink raw reply [flat|nested] 17+ messages in thread
* [Intel-gfx] [PATCH v3 1/4] drm/i915: Enable NULL PTE support for vm scratch
2023-10-23 7:41 [Intel-gfx] [PATCH v3 0/4] Apply Wa_16018031267 / Wa_16018063123 Andrzej Hajda
@ 2023-10-23 7:41 ` Andrzej Hajda
2023-10-23 12:23 ` Nirmoy Das
2023-10-23 7:41 ` [Intel-gfx] [PATCH v3 2/4] drm/i915: Reserve some kernel space per vm Andrzej Hajda
` (3 subsequent siblings)
4 siblings, 1 reply; 17+ messages in thread
From: Andrzej Hajda @ 2023-10-23 7:41 UTC (permalink / raw)
To: intel-gfx; +Cc: Jonathan Cavitt, Andrzej Hajda, Chris Wilson
From: Jonathan Cavitt <jonathan.cavitt@intel.com>
Enable NULL PTE support for vm scratch pages.
The use of NULL PTEs in vm scratch pages requires us to change how
the i915 gem_contexts live selftest performs vm_isolation: instead of
checking that the scratch pages are isolated and don't affect each other, we
check that all changes to the scratch pages are voided.
v2: fixed order of definitions
v3: fixed typo
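As a side note for readers, the PTE encoding this patch changes can be
sketched stand-alone. The helper below is hypothetical (not driver code);
only the bit positions are copied from the intel_gtt.h hunk in this patch:

```c
#include <stdint.h>

#define GEN8_PAGE_PRESENT (1ULL << 0)
#define GEN8_PAGE_RW      (1ULL << 1)
#define PTE_NULL_PAGE     (1ULL << 9)	/* new in this patch */

/* Hypothetical helper mirroring the gen8_init_scratch() change: build a
 * scratch PTE and, on platforms with NULL-page support, tag it as a NULL
 * page so writes through it are dropped and reads return zero. */
static uint64_t encode_scratch_pte(uint64_t addr, int has_null_page)
{
	uint64_t pte = addr | GEN8_PAGE_PRESENT | GEN8_PAGE_RW;

	if (has_null_page)
		pte |= PTE_NULL_PAGE;
	return pte;
}
```

This is also why the selftest change above is needed: with PTE_NULL_PAGE set,
scratch writes are voided rather than landing in a real page.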
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Suggested-by: Chris Wilson <chris.p.wilson@linux.intel.com>
Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
---
drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c | 6 ++++++
drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 3 +++
drivers/gpu/drm/i915/gt/intel_gtt.h | 1 +
drivers/gpu/drm/i915/i915_drv.h | 2 ++
drivers/gpu/drm/i915/i915_pci.c | 2 ++
drivers/gpu/drm/i915/intel_device_info.h | 1 +
6 files changed, 15 insertions(+)
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
index 7021b6e9b219ef..48fc5990343bc7 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
@@ -1751,6 +1751,12 @@ static int check_scratch_page(struct i915_gem_context *ctx, u32 *out)
if (!vm)
return -ENODEV;
+ if (HAS_NULL_PAGE(vm->i915)) {
+ if (out)
+ *out = 0;
+ return 0;
+ }
+
if (!vm->scratch[0]) {
pr_err("No scratch page!\n");
return -EINVAL;
diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
index 9895e18df0435a..84aa29715e0aca 100644
--- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
@@ -855,6 +855,9 @@ static int gen8_init_scratch(struct i915_address_space *vm)
I915_CACHE_NONE),
pte_flags);
+ if (HAS_NULL_PAGE(vm->i915))
+ vm->scratch[0]->encode |= PTE_NULL_PAGE;
+
for (i = 1; i <= vm->top; i++) {
struct drm_i915_gem_object *obj;
diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h
index b471edac269920..15c71da14d1d27 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.h
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.h
@@ -151,6 +151,7 @@ typedef u64 gen8_pte_t;
#define GEN8_PAGE_PRESENT BIT_ULL(0)
#define GEN8_PAGE_RW BIT_ULL(1)
+#define PTE_NULL_PAGE BIT_ULL(9)
#define GEN8_PDE_IPS_64K BIT(11)
#define GEN8_PDE_PS_2M BIT(7)
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index cb60fc9cf87373..8f61137deb6cef 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -776,6 +776,8 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
*/
#define HAS_FLAT_CCS(i915) (INTEL_INFO(i915)->has_flat_ccs)
+#define HAS_NULL_PAGE(dev_priv) (INTEL_INFO(dev_priv)->has_null_page)
+
#define HAS_GT_UC(i915) (INTEL_INFO(i915)->has_gt_uc)
#define HAS_POOLED_EU(i915) (RUNTIME_INFO(i915)->has_pooled_eu)
diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
index 454467cfa52b9d..aa6e4559b0f0c7 100644
--- a/drivers/gpu/drm/i915/i915_pci.c
+++ b/drivers/gpu/drm/i915/i915_pci.c
@@ -642,6 +642,7 @@ static const struct intel_device_info jsl_info = {
GEN(12), \
TGL_CACHELEVEL, \
.has_global_mocs = 1, \
+ .has_null_page = 1, \
.has_pxp = 1, \
.max_pat_index = 3
@@ -719,6 +720,7 @@ static const struct intel_device_info adl_p_info = {
.has_logical_ring_contexts = 1, \
.has_logical_ring_elsq = 1, \
.has_mslice_steering = 1, \
+ .has_null_page = 1, \
.has_oa_bpc_reporting = 1, \
.has_oa_slice_contrib_limits = 1, \
.has_oam = 1, \
diff --git a/drivers/gpu/drm/i915/intel_device_info.h b/drivers/gpu/drm/i915/intel_device_info.h
index 39817490b13fd4..36e169695cd61b 100644
--- a/drivers/gpu/drm/i915/intel_device_info.h
+++ b/drivers/gpu/drm/i915/intel_device_info.h
@@ -160,6 +160,7 @@ enum intel_ppgtt_type {
func(has_logical_ring_elsq); \
func(has_media_ratio_mode); \
func(has_mslice_steering); \
+ func(has_null_page); \
func(has_oa_bpc_reporting); \
func(has_oa_slice_contrib_limits); \
func(has_oam); \
--
2.34.1
* [Intel-gfx] [PATCH v3 2/4] drm/i915: Reserve some kernel space per vm
2023-10-23 7:41 [Intel-gfx] [PATCH v3 0/4] Apply Wa_16018031267 / Wa_16018063123 Andrzej Hajda
2023-10-23 7:41 ` [Intel-gfx] [PATCH v3 1/4] drm/i915: Enable NULL PTE support for vm scratch Andrzej Hajda
@ 2023-10-23 7:41 ` Andrzej Hajda
2023-10-23 8:49 ` Nirmoy Das
2023-10-23 7:41 ` [Intel-gfx] [PATCH v3 3/4] drm/i915: Add WABB blit for Wa_16018031267 / Wa_16018063123 Andrzej Hajda
` (2 subsequent siblings)
4 siblings, 1 reply; 17+ messages in thread
From: Andrzej Hajda @ 2023-10-23 7:41 UTC (permalink / raw)
To: intel-gfx; +Cc: Jonathan Cavitt, Andrzej Hajda, Chris Wilson
Reserve two pages in each vm for kernel space to use for things
such as workarounds.
v2: use real memory, do not decrease vm.total
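For illustration, the allocation strategy can be modeled outside the driver.
Everything below is a hypothetical stand-in for the lmem-first /
internal-fallback pattern used by gen8_init_rsvd(), not real i915 code:

```c
#include <stddef.h>

/* Hypothetical model: prefer device-local memory (lmem), fall back to
 * internal (system) memory when the device has none. */
typedef struct { int from_lmem; size_t size; } fake_obj;

static fake_obj slot; /* stand-in for a GEM object */

static fake_obj *create_lmem(size_t size, int have_lmem)
{
	if (!have_lmem)
		return NULL; /* models i915_gem_object_create_lmem() -> ERR_PTR */
	slot.from_lmem = 1;
	slot.size = size;
	return &slot;
}

static fake_obj *create_internal(size_t size)
{
	slot.from_lmem = 0;
	slot.size = size;
	return &slot;
}

/* Mirrors the fallback shape of gen8_init_rsvd() for a 2-page reservation. */
static fake_obj *alloc_rsvd(size_t size, int have_lmem)
{
	fake_obj *obj = create_lmem(size, have_lmem);

	if (!obj)
		obj = create_internal(size);
	return obj;
}
```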
Suggested-by: Chris Wilson <chris.p.wilson@linux.intel.com>
Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
---
drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 38 ++++++++++++++++++++++++++++++++++++
drivers/gpu/drm/i915/gt/intel_gtt.h | 1 +
2 files changed, 39 insertions(+)
diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
index 84aa29715e0aca..c25e1d4cceeb17 100644
--- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
@@ -5,6 +5,7 @@
#include <linux/log2.h>
+#include "gem/i915_gem_internal.h"
#include "gem/i915_gem_lmem.h"
#include "gen8_ppgtt.h"
@@ -953,6 +954,39 @@ gen8_alloc_top_pd(struct i915_address_space *vm)
return ERR_PTR(err);
}
+static int gen8_init_rsvd(struct i915_address_space *vm)
+{
+ const resource_size_t size = 2 * PAGE_SIZE;
+ struct drm_i915_private *i915 = vm->i915;
+ struct drm_i915_gem_object *obj;
+ struct i915_vma *vma;
+ int ret;
+
+ obj = i915_gem_object_create_lmem(i915, size,
+ I915_BO_ALLOC_VOLATILE |
+ I915_BO_ALLOC_GPU_ONLY);
+ if (IS_ERR(obj))
+ obj = i915_gem_object_create_internal(i915, size);
+ if (IS_ERR(obj))
+ return PTR_ERR(obj);
+
+ vma = i915_vma_instance(obj, vm, NULL);
+ if (IS_ERR(vma)) {
+ ret = PTR_ERR(vma);
+ goto unref;
+ }
+
+ ret = i915_vma_pin(vma, 0, 0, PIN_USER | PIN_HIGH);
+ if (ret)
+ goto unref;
+
+ vm->rsvd = i915_vma_make_unshrinkable(vma);
+
+unref:
+ i915_gem_object_put(obj);
+ return ret;
+}
+
/*
* GEN8 legacy ppgtt programming is accomplished through a max 4 PDP registers
* with a net effect resembling a 2-level page table in normal x86 terms. Each
@@ -1034,6 +1068,10 @@ struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt,
if (intel_vgpu_active(gt->i915))
gen8_ppgtt_notify_vgt(ppgtt, true);
+ err = gen8_init_rsvd(&ppgtt->vm);
+ if (err)
+ goto err_put;
+
return ppgtt;
err_put:
diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h
index 15c71da14d1d27..4a35ef24501b5f 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.h
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.h
@@ -250,6 +250,7 @@ struct i915_address_space {
struct work_struct release_work;
struct drm_mm mm;
+ struct i915_vma *rsvd;
struct intel_gt *gt;
struct drm_i915_private *i915;
struct device *dma;
--
2.34.1
* [Intel-gfx] [PATCH v3 3/4] drm/i915: Add WABB blit for Wa_16018031267 / Wa_16018063123
2023-10-23 7:41 [Intel-gfx] [PATCH v3 0/4] Apply Wa_16018031267 / Wa_16018063123 Andrzej Hajda
2023-10-23 7:41 ` [Intel-gfx] [PATCH v3 1/4] drm/i915: Enable NULL PTE support for vm scratch Andrzej Hajda
2023-10-23 7:41 ` [Intel-gfx] [PATCH v3 2/4] drm/i915: Reserve some kernel space per vm Andrzej Hajda
@ 2023-10-23 7:41 ` Andrzej Hajda
2023-10-23 9:05 ` Nirmoy Das
2023-10-23 7:41 ` [Intel-gfx] [PATCH v3 4/4] drm/i915: Set copy engine arbitration " Andrzej Hajda
2023-10-23 8:38 ` [Intel-gfx] [PATCH v3 0/4] Apply " Nirmoy Das
4 siblings, 1 reply; 17+ messages in thread
From: Andrzej Hajda @ 2023-10-23 7:41 UTC (permalink / raw)
To: intel-gfx; +Cc: Jonathan Cavitt, Andrzej Hajda, Nirmoy Das
From: Jonathan Cavitt <jonathan.cavitt@intel.com>
Apply WABB blit for Wa_16018031267 / Wa_16018063123.
Additionally, update the lrc selftest to exercise the new
WABB changes.
v3: drop unused enum definition
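For reference, the register value written by the new lrc_setup_bb_per_ctx()
can be computed in isolation. The helper below is a hypothetical sketch; the
flag bits are copied from the intel_engine_regs.h hunk in this patch:

```c
#include <stdint.h>

#define PER_CTX_BB_FORCE (1u << 2)
#define PER_CTX_BB_VALID (1u << 0)

/* Hypothetical sketch of the BB_PER_CTX_PTR encoding: the page-aligned
 * GGTT address of the batch with the valid/force flags packed into the
 * low bits, as done by lrc_setup_bb_per_ctx() below. */
static uint32_t per_ctx_bb_reg(uint32_t ggtt_addr)
{
	return ggtt_addr | PER_CTX_BB_FORCE | PER_CTX_BB_VALID;
}
```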
Co-developed-by: Nirmoy Das <nirmoy.das@intel.com>
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
---
drivers/gpu/drm/i915/gt/intel_engine_regs.h | 3 +
drivers/gpu/drm/i915/gt/intel_gt.h | 4 ++
drivers/gpu/drm/i915/gt/intel_lrc.c | 100 +++++++++++++++++++++++++++-
drivers/gpu/drm/i915/gt/selftest_lrc.c | 65 +++++++++++++-----
4 files changed, 151 insertions(+), 21 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_regs.h b/drivers/gpu/drm/i915/gt/intel_engine_regs.h
index fdd4ddd3a978a2..b8618ee3e3041a 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_regs.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_regs.h
@@ -118,6 +118,9 @@
#define CCID_EXTENDED_STATE_RESTORE BIT(2)
#define CCID_EXTENDED_STATE_SAVE BIT(3)
#define RING_BB_PER_CTX_PTR(base) _MMIO((base) + 0x1c0) /* gen8+ */
+#define PER_CTX_BB_FORCE BIT(2)
+#define PER_CTX_BB_VALID BIT(0)
+
#define RING_INDIRECT_CTX(base) _MMIO((base) + 0x1c4) /* gen8+ */
#define RING_INDIRECT_CTX_OFFSET(base) _MMIO((base) + 0x1c8) /* gen8+ */
#define ECOSKPD(base) _MMIO((base) + 0x1d0)
diff --git a/drivers/gpu/drm/i915/gt/intel_gt.h b/drivers/gpu/drm/i915/gt/intel_gt.h
index 970bedf6b78a7b..50989fc2b6debe 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt.h
@@ -82,6 +82,10 @@ struct drm_printer;
##__VA_ARGS__); \
} while (0)
+#define NEEDS_FASTCOLOR_BLT_WABB(engine) ( \
+ IS_GFX_GT_IP_RANGE(engine->gt, IP_VER(12, 55), IP_VER(12, 71)) && \
+ engine->class == COPY_ENGINE_CLASS)
+
static inline bool gt_is_root(struct intel_gt *gt)
{
return !gt->info.id;
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index eaf66d90316655..96ef901113eae9 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -828,6 +828,18 @@ lrc_ring_indirect_offset_default(const struct intel_engine_cs *engine)
return 0;
}
+static void
+lrc_setup_bb_per_ctx(u32 *regs,
+ const struct intel_engine_cs *engine,
+ u32 ctx_bb_ggtt_addr)
+{
+ GEM_BUG_ON(lrc_ring_wa_bb_per_ctx(engine) == -1);
+ regs[lrc_ring_wa_bb_per_ctx(engine) + 1] =
+ ctx_bb_ggtt_addr |
+ PER_CTX_BB_FORCE |
+ PER_CTX_BB_VALID;
+}
+
static void
lrc_setup_indirect_ctx(u32 *regs,
const struct intel_engine_cs *engine,
@@ -1020,7 +1032,13 @@ static u32 context_wa_bb_offset(const struct intel_context *ce)
return PAGE_SIZE * ce->wa_bb_page;
}
-static u32 *context_indirect_bb(const struct intel_context *ce)
+/*
+ * per_ctx below determines which WABB section is used.
+ * When true, the function returns the location of the
+ * PER_CTX_BB. When false, the function returns the
+ * location of the INDIRECT_CTX.
+ */
+static u32 *context_wabb(const struct intel_context *ce, bool per_ctx)
{
void *ptr;
@@ -1029,6 +1047,7 @@ static u32 *context_indirect_bb(const struct intel_context *ce)
ptr = ce->lrc_reg_state;
ptr -= LRC_STATE_OFFSET; /* back to start of context image */
ptr += context_wa_bb_offset(ce);
+ ptr += per_ctx ? PAGE_SIZE : 0;
return ptr;
}
@@ -1105,7 +1124,8 @@ __lrc_alloc_state(struct intel_context *ce, struct intel_engine_cs *engine)
if (GRAPHICS_VER(engine->i915) >= 12) {
ce->wa_bb_page = context_size / PAGE_SIZE;
- context_size += PAGE_SIZE;
+ /* INDIRECT_CTX and PER_CTX_BB need separate pages. */
+ context_size += PAGE_SIZE * 2;
}
if (intel_context_is_parent(ce) && intel_engine_uses_guc(engine)) {
@@ -1407,12 +1427,85 @@ gen12_emit_indirect_ctx_xcs(const struct intel_context *ce, u32 *cs)
return gen12_emit_aux_table_inv(ce->engine, cs);
}
+static u32 *xehp_emit_fastcolor_blt_wabb(const struct intel_context *ce, u32 *cs)
+{
+ struct intel_gt *gt = ce->engine->gt;
+ int mocs = gt->mocs.uc_index << 1;
+
+ /**
+ * Wa_16018031267 / Wa_16018063123 requires that SW forces the
+ * main copy engine arbitration into round robin mode. We
+ * additionally need to submit the following WABB blt command
+ * to produce 4 subblits with each subblit generating 0 byte
+ * write requests as WABB:
+ *
+ * XY_FASTCOLOR_BLT
+ * BG0 -> 5100000E
+ * BG1 -> 0000003F (Dest pitch)
+ * BG2 -> 00000000 (X1, Y1) = (0, 0)
+ * BG3 -> 00040001 (X2, Y2) = (1, 4)
+ * BG4 -> scratch
+ * BG5 -> scratch
+ * BG6-12 -> 00000000
+ * BG13 -> 20004004 (Surf. Width= 2,Surf. Height = 5 )
+ * BG14 -> 00000010 (Qpitch = 4)
+ * BG15 -> 00000000
+ */
+ *cs++ = XY_FAST_COLOR_BLT_CMD | (16 - 2);
+ *cs++ = FIELD_PREP(XY_FAST_COLOR_BLT_MOCS_MASK, mocs) | 0x3f;
+ *cs++ = 0;
+ *cs++ = 4 << 16 | 1;
+ *cs++ = lower_32_bits(i915_vma_offset(ce->vm->rsvd));
+ *cs++ = upper_32_bits(i915_vma_offset(ce->vm->rsvd));
+ *cs++ = 0;
+ *cs++ = 0;
+ *cs++ = 0;
+ *cs++ = 0;
+ *cs++ = 0;
+ *cs++ = 0;
+ *cs++ = 0;
+ *cs++ = 0x20004004;
+ *cs++ = 0x10;
+ *cs++ = 0;
+
+ return cs;
+}
+
+static u32 *
+xehp_emit_per_ctx_bb(const struct intel_context *ce, u32 *cs)
+{
+ /* Wa_16018031267, Wa_16018063123 */
+ if (NEEDS_FASTCOLOR_BLT_WABB(ce->engine))
+ cs = xehp_emit_fastcolor_blt_wabb(ce, cs);
+
+ return cs;
+}
+
+static void
+setup_per_ctx_bb(const struct intel_context *ce,
+ const struct intel_engine_cs *engine,
+ u32 *(*emit)(const struct intel_context *, u32 *))
+{
+ /* Place PER_CTX_BB on next page after INDIRECT_CTX */
+ u32 * const start = context_wabb(ce, true);
+ u32 *cs;
+
+ cs = emit(ce, start);
+
+ /* PER_CTX_BB must manually terminate */
+ *cs++ = MI_BATCH_BUFFER_END;
+
+ GEM_BUG_ON(cs - start > I915_GTT_PAGE_SIZE / sizeof(*cs));
+ lrc_setup_bb_per_ctx(ce->lrc_reg_state, engine,
+ lrc_indirect_bb(ce) + PAGE_SIZE);
+}
+
static void
setup_indirect_ctx_bb(const struct intel_context *ce,
const struct intel_engine_cs *engine,
u32 *(*emit)(const struct intel_context *, u32 *))
{
- u32 * const start = context_indirect_bb(ce);
+ u32 * const start = context_wabb(ce, false);
u32 *cs;
cs = emit(ce, start);
@@ -1511,6 +1604,7 @@ u32 lrc_update_regs(const struct intel_context *ce,
/* Mutually exclusive wrt to global indirect bb */
GEM_BUG_ON(engine->wa_ctx.indirect_ctx.size);
setup_indirect_ctx_bb(ce, engine, fn);
+ setup_per_ctx_bb(ce, engine, xehp_emit_per_ctx_bb);
}
return lrc_descriptor(ce) | CTX_DESC_FORCE_RESTORE;
diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 5f826b6dcf5d6f..e17b8777d21dc9 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -1555,7 +1555,7 @@ static int live_lrc_isolation(void *arg)
return err;
}
-static int indirect_ctx_submit_req(struct intel_context *ce)
+static int wabb_ctx_submit_req(struct intel_context *ce)
{
struct i915_request *rq;
int err = 0;
@@ -1579,7 +1579,8 @@ static int indirect_ctx_submit_req(struct intel_context *ce)
#define CTX_BB_CANARY_INDEX (CTX_BB_CANARY_OFFSET / sizeof(u32))
static u32 *
-emit_indirect_ctx_bb_canary(const struct intel_context *ce, u32 *cs)
+emit_wabb_ctx_canary(const struct intel_context *ce,
+ u32 *cs, bool per_ctx)
{
*cs++ = MI_STORE_REGISTER_MEM_GEN8 |
MI_SRM_LRM_GLOBAL_GTT |
@@ -1587,26 +1588,43 @@ emit_indirect_ctx_bb_canary(const struct intel_context *ce, u32 *cs)
*cs++ = i915_mmio_reg_offset(RING_START(0));
*cs++ = i915_ggtt_offset(ce->state) +
context_wa_bb_offset(ce) +
- CTX_BB_CANARY_OFFSET;
+ CTX_BB_CANARY_OFFSET +
+ (per_ctx ? PAGE_SIZE : 0);
*cs++ = 0;
return cs;
}
+static u32 *
+emit_indirect_ctx_bb_canary(const struct intel_context *ce, u32 *cs)
+{
+ return emit_wabb_ctx_canary(ce, cs, false);
+}
+
+static u32 *
+emit_per_ctx_bb_canary(const struct intel_context *ce, u32 *cs)
+{
+ return emit_wabb_ctx_canary(ce, cs, true);
+}
+
static void
-indirect_ctx_bb_setup(struct intel_context *ce)
+wabb_ctx_setup(struct intel_context *ce, bool per_ctx)
{
- u32 *cs = context_indirect_bb(ce);
+ u32 *cs = context_wabb(ce, per_ctx);
cs[CTX_BB_CANARY_INDEX] = 0xdeadf00d;
- setup_indirect_ctx_bb(ce, ce->engine, emit_indirect_ctx_bb_canary);
+ if (per_ctx)
+ setup_per_ctx_bb(ce, ce->engine, emit_per_ctx_bb_canary);
+ else
+ setup_indirect_ctx_bb(ce, ce->engine, emit_indirect_ctx_bb_canary);
}
-static bool check_ring_start(struct intel_context *ce)
+static bool check_ring_start(struct intel_context *ce, bool per_ctx)
{
const u32 * const ctx_bb = (void *)(ce->lrc_reg_state) -
- LRC_STATE_OFFSET + context_wa_bb_offset(ce);
+ LRC_STATE_OFFSET + context_wa_bb_offset(ce) +
+ (per_ctx ? PAGE_SIZE : 0);
if (ctx_bb[CTX_BB_CANARY_INDEX] == ce->lrc_reg_state[CTX_RING_START])
return true;
@@ -1618,21 +1636,21 @@ static bool check_ring_start(struct intel_context *ce)
return false;
}
-static int indirect_ctx_bb_check(struct intel_context *ce)
+static int wabb_ctx_check(struct intel_context *ce, bool per_ctx)
{
int err;
- err = indirect_ctx_submit_req(ce);
+ err = wabb_ctx_submit_req(ce);
if (err)
return err;
- if (!check_ring_start(ce))
+ if (!check_ring_start(ce, per_ctx))
return -EINVAL;
return 0;
}
-static int __live_lrc_indirect_ctx_bb(struct intel_engine_cs *engine)
+static int __lrc_wabb_ctx(struct intel_engine_cs *engine, bool per_ctx)
{
struct intel_context *a, *b;
int err;
@@ -1667,14 +1685,14 @@ static int __live_lrc_indirect_ctx_bb(struct intel_engine_cs *engine)
* As ring start is restored apriori of starting the indirect ctx bb and
* as it will be different for each context, it fits to this purpose.
*/
- indirect_ctx_bb_setup(a);
- indirect_ctx_bb_setup(b);
+ wabb_ctx_setup(a, per_ctx);
+ wabb_ctx_setup(b, per_ctx);
- err = indirect_ctx_bb_check(a);
+ err = wabb_ctx_check(a, per_ctx);
if (err)
goto unpin_b;
- err = indirect_ctx_bb_check(b);
+ err = wabb_ctx_check(b, per_ctx);
unpin_b:
intel_context_unpin(b);
@@ -1688,7 +1706,7 @@ static int __live_lrc_indirect_ctx_bb(struct intel_engine_cs *engine)
return err;
}
-static int live_lrc_indirect_ctx_bb(void *arg)
+static int lrc_wabb_ctx(void *arg, bool per_ctx)
{
struct intel_gt *gt = arg;
struct intel_engine_cs *engine;
@@ -1697,7 +1715,7 @@ static int live_lrc_indirect_ctx_bb(void *arg)
for_each_engine(engine, gt, id) {
intel_engine_pm_get(engine);
- err = __live_lrc_indirect_ctx_bb(engine);
+ err = __lrc_wabb_ctx(engine, per_ctx);
intel_engine_pm_put(engine);
if (igt_flush_test(gt->i915))
@@ -1710,6 +1728,16 @@ static int live_lrc_indirect_ctx_bb(void *arg)
return err;
}
+static int live_lrc_indirect_ctx_bb(void *arg)
+{
+ return lrc_wabb_ctx(arg, false);
+}
+
+static int live_lrc_per_ctx_bb(void *arg)
+{
+ return lrc_wabb_ctx(arg, true);
+}
+
static void garbage_reset(struct intel_engine_cs *engine,
struct i915_request *rq)
{
@@ -1947,6 +1975,7 @@ int intel_lrc_live_selftests(struct drm_i915_private *i915)
SUBTEST(live_lrc_garbage),
SUBTEST(live_pphwsp_runtime),
SUBTEST(live_lrc_indirect_ctx_bb),
+ SUBTEST(live_lrc_per_ctx_bb),
};
if (!HAS_LOGICAL_RING_CONTEXTS(i915))
--
2.34.1
* [Intel-gfx] [PATCH v3 4/4] drm/i915: Set copy engine arbitration for Wa_16018031267 / Wa_16018063123
2023-10-23 7:41 [Intel-gfx] [PATCH v3 0/4] Apply Wa_16018031267 / Wa_16018063123 Andrzej Hajda
` (2 preceding siblings ...)
2023-10-23 7:41 ` [Intel-gfx] [PATCH v3 3/4] drm/i915: Add WABB blit for Wa_16018031267 / Wa_16018063123 Andrzej Hajda
@ 2023-10-23 7:41 ` Andrzej Hajda
2023-10-23 9:55 ` Nirmoy Das
2023-10-23 8:38 ` [Intel-gfx] [PATCH v3 0/4] Apply " Nirmoy Das
4 siblings, 1 reply; 17+ messages in thread
From: Andrzej Hajda @ 2023-10-23 7:41 UTC (permalink / raw)
To: intel-gfx; +Cc: Jonathan Cavitt, Andrzej Hajda, Nirmoy Das
From: Jonathan Cavitt <jonathan.cavitt@intel.com>
Set copy engine arbitration into round-robin mode
as part of the Wa_16018031267 / Wa_16018063123 mitigation.
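For readers checking the register math, the value programmed into ECOSKPD can
be derived stand-alone. The helpers below are plain-C approximations of the
kernel's REG_GENMASK()/REG_FIELD_PREP() macros, assumed here for illustration:

```c
#include <stdint.h>

/* Approximation of REG_GENMASK(h, l): set bits l..h (h <= 31). */
static uint32_t genmask(unsigned int h, unsigned int l)
{
	return ((~0u) >> (31 - h)) & ((~0u) << l);
}

/* Approximation of REG_FIELD_PREP(mask, val): shift val up to the
 * lowest set bit of mask, then clamp to the mask. */
static uint32_t field_prep(uint32_t mask, uint32_t val)
{
	return (val * (mask & -mask)) & mask;
}
```

So XEHP_BLITTER_SCHEDULING_MODE_MASK covers bits 12:11 (0x1800) and
round-robin mode (field value 1) programs bit 11 (0x800).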
Signed-off-by: Nirmoy Das <nirmoy.das@intel.com>
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com>
---
drivers/gpu/drm/i915/gt/intel_engine_regs.h | 3 +++
drivers/gpu/drm/i915/gt/intel_workarounds.c | 5 +++++
2 files changed, 8 insertions(+)
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_regs.h b/drivers/gpu/drm/i915/gt/intel_engine_regs.h
index b8618ee3e3041a..c0c8c12edea104 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_regs.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_regs.h
@@ -124,6 +124,9 @@
#define RING_INDIRECT_CTX(base) _MMIO((base) + 0x1c4) /* gen8+ */
#define RING_INDIRECT_CTX_OFFSET(base) _MMIO((base) + 0x1c8) /* gen8+ */
#define ECOSKPD(base) _MMIO((base) + 0x1d0)
+#define XEHP_BLITTER_SCHEDULING_MODE_MASK REG_GENMASK(12, 11)
+#define XEHP_BLITTER_ROUND_ROBIN_MODE \
+ REG_FIELD_PREP(XEHP_BLITTER_SCHEDULING_MODE_MASK, 1)
#define ECO_CONSTANT_BUFFER_SR_DISABLE REG_BIT(4)
#define ECO_GATING_CX_ONLY REG_BIT(3)
#define GEN6_BLITTER_FBC_NOTIFY REG_BIT(3)
diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
index 192ac0e59afa13..108d9326735910 100644
--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
+++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
@@ -2782,6 +2782,11 @@ xcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
RING_SEMA_WAIT_POLL(engine->mmio_base),
1);
}
+ /* Wa_16018031267, Wa_16018063123 */
+ if (NEEDS_FASTCOLOR_BLT_WABB(engine))
+ wa_masked_field_set(wal, ECOSKPD(engine->mmio_base),
+ XEHP_BLITTER_SCHEDULING_MODE_MASK,
+ XEHP_BLITTER_ROUND_ROBIN_MODE);
}
static void
--
2.34.1
* Re: [Intel-gfx] [PATCH v3 0/4] Apply Wa_16018031267 / Wa_16018063123
2023-10-23 7:41 [Intel-gfx] [PATCH v3 0/4] Apply Wa_16018031267 / Wa_16018063123 Andrzej Hajda
` (3 preceding siblings ...)
2023-10-23 7:41 ` [Intel-gfx] [PATCH v3 4/4] drm/i915: Set copy engine arbitration " Andrzej Hajda
@ 2023-10-23 8:38 ` Nirmoy Das
2023-10-23 11:35 ` Andrzej Hajda
4 siblings, 1 reply; 17+ messages in thread
From: Nirmoy Das @ 2023-10-23 8:38 UTC (permalink / raw)
To: Andrzej Hajda, intel-gfx; +Cc: Chris Wilson, Jonathan Cavitt, Nirmoy Das
Hi Andrzej
On 10/23/2023 9:41 AM, Andrzej Hajda wrote:
> Hi all,
>
> This is the series from Jonathan:
> [PATCH v12 0/4] Apply Wa_16018031267 / Wa_16018063123
>
> taken over by me.
>
> Changes in this version are described in the patches, in short:
> v2:
> - use real memory as WABB destination,
Do we still need the NULL PTE patch now?
Regards,
Nirmoy
> - address CI complaints - do not decrease vm.total,
> - minor reordering.
> v3:
> - fixed typos,
> - removed spare defs,
> - added tags
>
> Regards
> Andrzej
>
> Andrzej Hajda (1):
> drm/i915: Reserve some kernel space per vm
>
> Jonathan Cavitt (3):
> drm/i915: Enable NULL PTE support for vm scratch
> drm/i915: Add WABB blit for Wa_16018031267 / Wa_16018063123
> drm/i915: Set copy engine arbitration for Wa_16018031267 /
> Wa_16018063123
>
> .../drm/i915/gem/selftests/i915_gem_context.c | 6 ++
> drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 41 +++++++
> drivers/gpu/drm/i915/gt/intel_engine_regs.h | 6 ++
> drivers/gpu/drm/i915/gt/intel_gt.h | 4 +
> drivers/gpu/drm/i915/gt/intel_gt_types.h | 2 +
> drivers/gpu/drm/i915/gt/intel_gtt.h | 2 +
> drivers/gpu/drm/i915/gt/intel_lrc.c | 100 +++++++++++++++++-
> drivers/gpu/drm/i915/gt/intel_workarounds.c | 5 +
> drivers/gpu/drm/i915/gt/selftest_lrc.c | 65 ++++++++----
> drivers/gpu/drm/i915/i915_drv.h | 2 +
> drivers/gpu/drm/i915/i915_pci.c | 2 +
> drivers/gpu/drm/i915/intel_device_info.h | 1 +
> 12 files changed, 215 insertions(+), 21 deletions(-)
>
> ---
> Andrzej Hajda (1):
> drm/i915: Reserve some kernel space per vm
>
> Jonathan Cavitt (3):
> drm/i915: Enable NULL PTE support for vm scratch
> drm/i915: Add WABB blit for Wa_16018031267 / Wa_16018063123
> drm/i915: Set copy engine arbitration for Wa_16018031267 / Wa_16018063123
>
> .../gpu/drm/i915/gem/selftests/i915_gem_context.c | 6 ++
> drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 41 +++++++++
> drivers/gpu/drm/i915/gt/intel_engine_regs.h | 6 ++
> drivers/gpu/drm/i915/gt/intel_gt.h | 4 +
> drivers/gpu/drm/i915/gt/intel_gtt.h | 2 +
> drivers/gpu/drm/i915/gt/intel_lrc.c | 100 ++++++++++++++++++++-
> drivers/gpu/drm/i915/gt/intel_workarounds.c | 5 ++
> drivers/gpu/drm/i915/gt/selftest_lrc.c | 65 ++++++++++----
> drivers/gpu/drm/i915/i915_drv.h | 2 +
> drivers/gpu/drm/i915/i915_pci.c | 2 +
> drivers/gpu/drm/i915/intel_device_info.h | 1 +
> 11 files changed, 213 insertions(+), 21 deletions(-)
> ---
> base-commit: 201c8a7bd1f3f415920a2df4b8a8817e973f42fe
> change-id: 20231020-wabb-bbe9324a69a8
>
> Best regards,
* Re: [Intel-gfx] [PATCH v3 2/4] drm/i915: Reserve some kernel space per vm
2023-10-23 7:41 ` [Intel-gfx] [PATCH v3 2/4] drm/i915: Reserve some kernel space per vm Andrzej Hajda
@ 2023-10-23 8:49 ` Nirmoy Das
2023-10-23 11:40 ` Andrzej Hajda
0 siblings, 1 reply; 17+ messages in thread
From: Nirmoy Das @ 2023-10-23 8:49 UTC (permalink / raw)
To: Andrzej Hajda, intel-gfx; +Cc: Chris Wilson, Jonathan Cavitt
Hi Andrzej,
On 10/23/2023 9:41 AM, Andrzej Hajda wrote:
> Reserve two pages in each vm for kernel space to use for things
> such as workarounds.
>
> v2: use real memory, do not decrease vm.total
>
> Suggested-by: Chris Wilson <chris.p.wilson@linux.intel.com>
> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
> Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
> ---
> drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 38 ++++++++++++++++++++++++++++++++++++
> drivers/gpu/drm/i915/gt/intel_gtt.h | 1 +
> 2 files changed, 39 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
> index 84aa29715e0aca..c25e1d4cceeb17 100644
> --- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
> +++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
> @@ -5,6 +5,7 @@
>
> #include <linux/log2.h>
>
> +#include "gem/i915_gem_internal.h"
> #include "gem/i915_gem_lmem.h"
>
> #include "gen8_ppgtt.h"
> @@ -953,6 +954,39 @@ gen8_alloc_top_pd(struct i915_address_space *vm)
> return ERR_PTR(err);
> }
>
> +static int gen8_init_rsvd(struct i915_address_space *vm)
> +{
> + const resource_size_t size = 2 * PAGE_SIZE;
> + struct drm_i915_private *i915 = vm->i915;
> + struct drm_i915_gem_object *obj;
> + struct i915_vma *vma;
> + int ret;
> +
> + obj = i915_gem_object_create_lmem(i915, size,
> + I915_BO_ALLOC_VOLATILE |
> + I915_BO_ALLOC_GPU_ONLY);
Please add a comment why the GPU_ONLY flag is used. It makes sense to me now,
but it would be good to have a comment for the future. Also, why are two
pages reserved?
Regards,
Nirmoy
> + if (IS_ERR(obj))
> + obj = i915_gem_object_create_internal(i915, size);
> + if (IS_ERR(obj))
> + return PTR_ERR(obj);
> +
> + vma = i915_vma_instance(obj, vm, NULL);
> + if (IS_ERR(vma)) {
> + ret = PTR_ERR(vma);
> + goto unref;
> + }
> +
> + ret = i915_vma_pin(vma, 0, 0, PIN_USER | PIN_HIGH);
> + if (ret)
> + goto unref;
> +
> + vm->rsvd = i915_vma_make_unshrinkable(vma);
> +
> +unref:
> + i915_gem_object_put(obj);
> + return ret;
> +}
> +
> /*
> * GEN8 legacy ppgtt programming is accomplished through a max 4 PDP registers
> * with a net effect resembling a 2-level page table in normal x86 terms. Each
> @@ -1034,6 +1068,10 @@ struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt,
> if (intel_vgpu_active(gt->i915))
> gen8_ppgtt_notify_vgt(ppgtt, true);
>
> + err = gen8_init_rsvd(&ppgtt->vm);
> + if (err)
> + goto err_put;
> +
> return ppgtt;
>
> err_put:
> diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h
> index 15c71da14d1d27..4a35ef24501b5f 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gtt.h
> +++ b/drivers/gpu/drm/i915/gt/intel_gtt.h
> @@ -250,6 +250,7 @@ struct i915_address_space {
> struct work_struct release_work;
>
> struct drm_mm mm;
> + struct i915_vma *rsvd;
> struct intel_gt *gt;
> struct drm_i915_private *i915;
> struct device *dma;
>
* Re: [Intel-gfx] [PATCH v3 3/4] drm/i915: Add WABB blit for Wa_16018031267 / Wa_16018063123
2023-10-23 7:41 ` [Intel-gfx] [PATCH v3 3/4] drm/i915: Add WABB blit for Wa_16018031267 / Wa_16018063123 Andrzej Hajda
@ 2023-10-23 9:05 ` Nirmoy Das
2023-10-23 19:08 ` Andrzej Hajda
0 siblings, 1 reply; 17+ messages in thread
From: Nirmoy Das @ 2023-10-23 9:05 UTC (permalink / raw)
To: Andrzej Hajda, intel-gfx; +Cc: Jonathan Cavitt, Nirmoy Das
On 10/23/2023 9:41 AM, Andrzej Hajda wrote:
> From: Jonathan Cavitt <jonathan.cavitt@intel.com>
>
> Apply WABB blit for Wa_16018031267 / Wa_16018063123.
Should this be split into two patches, one that adds the per_ctx WABB and
another where this WA is applied on top of the per_ctx BB?
> Additionally, update the lrc selftest to exercise the new
> WABB changes.
>
> v3: drop unused enum definition
>
> Co-developed-by: Nirmoy Das <nirmoy.das@intel.com>
> Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
> Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
I don't think the author can also review.
Regards,
Nirmoy
> ---
> drivers/gpu/drm/i915/gt/intel_engine_regs.h | 3 +
> drivers/gpu/drm/i915/gt/intel_gt.h | 4 ++
> drivers/gpu/drm/i915/gt/intel_lrc.c | 100 +++++++++++++++++++++++++++-
> drivers/gpu/drm/i915/gt/selftest_lrc.c | 65 +++++++++++++-----
> 4 files changed, 151 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_regs.h b/drivers/gpu/drm/i915/gt/intel_engine_regs.h
> index fdd4ddd3a978a2..b8618ee3e3041a 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine_regs.h
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_regs.h
> @@ -118,6 +118,9 @@
> #define CCID_EXTENDED_STATE_RESTORE BIT(2)
> #define CCID_EXTENDED_STATE_SAVE BIT(3)
> #define RING_BB_PER_CTX_PTR(base) _MMIO((base) + 0x1c0) /* gen8+ */
> +#define PER_CTX_BB_FORCE BIT(2)
> +#define PER_CTX_BB_VALID BIT(0)
> +
> #define RING_INDIRECT_CTX(base) _MMIO((base) + 0x1c4) /* gen8+ */
> #define RING_INDIRECT_CTX_OFFSET(base) _MMIO((base) + 0x1c8) /* gen8+ */
> #define ECOSKPD(base) _MMIO((base) + 0x1d0)
> diff --git a/drivers/gpu/drm/i915/gt/intel_gt.h b/drivers/gpu/drm/i915/gt/intel_gt.h
> index 970bedf6b78a7b..50989fc2b6debe 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gt.h
> +++ b/drivers/gpu/drm/i915/gt/intel_gt.h
> @@ -82,6 +82,10 @@ struct drm_printer;
> ##__VA_ARGS__); \
> } while (0)
>
> +#define NEEDS_FASTCOLOR_BLT_WABB(engine) ( \
> + IS_GFX_GT_IP_RANGE(engine->gt, IP_VER(12, 55), IP_VER(12, 71)) && \
> + engine->class == COPY_ENGINE_CLASS)
> +
> static inline bool gt_is_root(struct intel_gt *gt)
> {
> return !gt->info.id;
> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
> index eaf66d90316655..96ef901113eae9 100644
> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> @@ -828,6 +828,18 @@ lrc_ring_indirect_offset_default(const struct intel_engine_cs *engine)
> return 0;
> }
>
> +static void
> +lrc_setup_bb_per_ctx(u32 *regs,
> + const struct intel_engine_cs *engine,
> + u32 ctx_bb_ggtt_addr)
> +{
> + GEM_BUG_ON(lrc_ring_wa_bb_per_ctx(engine) == -1);
> + regs[lrc_ring_wa_bb_per_ctx(engine) + 1] =
> + ctx_bb_ggtt_addr |
> + PER_CTX_BB_FORCE |
> + PER_CTX_BB_VALID;
> +}
> +
> static void
> lrc_setup_indirect_ctx(u32 *regs,
> const struct intel_engine_cs *engine,
> @@ -1020,7 +1032,13 @@ static u32 context_wa_bb_offset(const struct intel_context *ce)
> return PAGE_SIZE * ce->wa_bb_page;
> }
>
> -static u32 *context_indirect_bb(const struct intel_context *ce)
> +/*
> + * per_ctx below determines which WABB section is used.
> + * When true, the function returns the location of the
> + * PER_CTX_BB. When false, the function returns the
> + * location of the INDIRECT_CTX.
> + */
> +static u32 *context_wabb(const struct intel_context *ce, bool per_ctx)
> {
> void *ptr;
>
> @@ -1029,6 +1047,7 @@ static u32 *context_indirect_bb(const struct intel_context *ce)
> ptr = ce->lrc_reg_state;
> ptr -= LRC_STATE_OFFSET; /* back to start of context image */
> ptr += context_wa_bb_offset(ce);
> + ptr += per_ctx ? PAGE_SIZE : 0;
>
> return ptr;
> }
> @@ -1105,7 +1124,8 @@ __lrc_alloc_state(struct intel_context *ce, struct intel_engine_cs *engine)
>
> if (GRAPHICS_VER(engine->i915) >= 12) {
> ce->wa_bb_page = context_size / PAGE_SIZE;
> - context_size += PAGE_SIZE;
> + /* INDIRECT_CTX and PER_CTX_BB need separate pages. */
> + context_size += PAGE_SIZE * 2;
> }
>
> if (intel_context_is_parent(ce) && intel_engine_uses_guc(engine)) {
> @@ -1407,12 +1427,85 @@ gen12_emit_indirect_ctx_xcs(const struct intel_context *ce, u32 *cs)
> return gen12_emit_aux_table_inv(ce->engine, cs);
> }
>
> +static u32 *xehp_emit_fastcolor_blt_wabb(const struct intel_context *ce, u32 *cs)
> +{
> + struct intel_gt *gt = ce->engine->gt;
> + int mocs = gt->mocs.uc_index << 1;
> +
> + /**
> + * Wa_16018031267 / Wa_16018063123 requires that SW forces the
> + * main copy engine arbitration into round robin mode. We
> + * additionally need to submit the following WABB blt command
> + * to produce 4 subblits with each subblit generating 0 byte
> + * write requests as WABB:
> + *
> + * XY_FASTCOLOR_BLT
> + * BG0 -> 5100000E
> + * BG1 -> 0000003F (Dest pitch)
> + * BG2 -> 00000000 (X1, Y1) = (0, 0)
> + * BG3 -> 00040001 (X2, Y2) = (1, 4)
> + * BG4 -> scratch
> + * BG5 -> scratch
> + * BG6-12 -> 00000000
> + * BG13 -> 20004004 (Surf. Width= 2,Surf. Height = 5 )
> + * BG14 -> 00000010 (Qpitch = 4)
> + * BG15 -> 00000000
> + */
> + *cs++ = XY_FAST_COLOR_BLT_CMD | (16 - 2);
> + *cs++ = FIELD_PREP(XY_FAST_COLOR_BLT_MOCS_MASK, mocs) | 0x3f;
> + *cs++ = 0;
> + *cs++ = 4 << 16 | 1;
> + *cs++ = lower_32_bits(i915_vma_offset(ce->vm->rsvd));
> + *cs++ = upper_32_bits(i915_vma_offset(ce->vm->rsvd));
> + *cs++ = 0;
> + *cs++ = 0;
> + *cs++ = 0;
> + *cs++ = 0;
> + *cs++ = 0;
> + *cs++ = 0;
> + *cs++ = 0;
> + *cs++ = 0x20004004;
> + *cs++ = 0x10;
> + *cs++ = 0;
> +
> + return cs;
> +}
> +
> +static u32 *
> +xehp_emit_per_ctx_bb(const struct intel_context *ce, u32 *cs)
> +{
> + /* Wa_16018031267, Wa_16018063123 */
> + if (NEEDS_FASTCOLOR_BLT_WABB(ce->engine))
> + cs = xehp_emit_fastcolor_blt_wabb(ce, cs);
> +
> + return cs;
> +}
> +
> +static void
> +setup_per_ctx_bb(const struct intel_context *ce,
> + const struct intel_engine_cs *engine,
> + u32 *(*emit)(const struct intel_context *, u32 *))
> +{
> + /* Place PER_CTX_BB on next page after INDIRECT_CTX */
> + u32 * const start = context_wabb(ce, true);
> + u32 *cs;
> +
> + cs = emit(ce, start);
> +
> + /* PER_CTX_BB must manually terminate */
> + *cs++ = MI_BATCH_BUFFER_END;
> +
> + GEM_BUG_ON(cs - start > I915_GTT_PAGE_SIZE / sizeof(*cs));
> + lrc_setup_bb_per_ctx(ce->lrc_reg_state, engine,
> + lrc_indirect_bb(ce) + PAGE_SIZE);
> +}
> +
> static void
> setup_indirect_ctx_bb(const struct intel_context *ce,
> const struct intel_engine_cs *engine,
> u32 *(*emit)(const struct intel_context *, u32 *))
> {
> - u32 * const start = context_indirect_bb(ce);
> + u32 * const start = context_wabb(ce, false);
> u32 *cs;
>
> cs = emit(ce, start);
> @@ -1511,6 +1604,7 @@ u32 lrc_update_regs(const struct intel_context *ce,
> /* Mutually exclusive wrt to global indirect bb */
> GEM_BUG_ON(engine->wa_ctx.indirect_ctx.size);
> setup_indirect_ctx_bb(ce, engine, fn);
> + setup_per_ctx_bb(ce, engine, xehp_emit_per_ctx_bb);
> }
>
> return lrc_descriptor(ce) | CTX_DESC_FORCE_RESTORE;
> diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> index 5f826b6dcf5d6f..e17b8777d21dc9 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> @@ -1555,7 +1555,7 @@ static int live_lrc_isolation(void *arg)
> return err;
> }
>
> -static int indirect_ctx_submit_req(struct intel_context *ce)
> +static int wabb_ctx_submit_req(struct intel_context *ce)
> {
> struct i915_request *rq;
> int err = 0;
> @@ -1579,7 +1579,8 @@ static int indirect_ctx_submit_req(struct intel_context *ce)
> #define CTX_BB_CANARY_INDEX (CTX_BB_CANARY_OFFSET / sizeof(u32))
>
> static u32 *
> -emit_indirect_ctx_bb_canary(const struct intel_context *ce, u32 *cs)
> +emit_wabb_ctx_canary(const struct intel_context *ce,
> + u32 *cs, bool per_ctx)
> {
> *cs++ = MI_STORE_REGISTER_MEM_GEN8 |
> MI_SRM_LRM_GLOBAL_GTT |
> @@ -1587,26 +1588,43 @@ emit_indirect_ctx_bb_canary(const struct intel_context *ce, u32 *cs)
> *cs++ = i915_mmio_reg_offset(RING_START(0));
> *cs++ = i915_ggtt_offset(ce->state) +
> context_wa_bb_offset(ce) +
> - CTX_BB_CANARY_OFFSET;
> + CTX_BB_CANARY_OFFSET +
> + (per_ctx ? PAGE_SIZE : 0);
> *cs++ = 0;
>
> return cs;
> }
>
> +static u32 *
> +emit_indirect_ctx_bb_canary(const struct intel_context *ce, u32 *cs)
> +{
> + return emit_wabb_ctx_canary(ce, cs, false);
> +}
> +
> +static u32 *
> +emit_per_ctx_bb_canary(const struct intel_context *ce, u32 *cs)
> +{
> + return emit_wabb_ctx_canary(ce, cs, true);
> +}
> +
> static void
> -indirect_ctx_bb_setup(struct intel_context *ce)
> +wabb_ctx_setup(struct intel_context *ce, bool per_ctx)
> {
> - u32 *cs = context_indirect_bb(ce);
> + u32 *cs = context_wabb(ce, per_ctx);
>
> cs[CTX_BB_CANARY_INDEX] = 0xdeadf00d;
>
> - setup_indirect_ctx_bb(ce, ce->engine, emit_indirect_ctx_bb_canary);
> + if (per_ctx)
> + setup_per_ctx_bb(ce, ce->engine, emit_per_ctx_bb_canary);
> + else
> + setup_indirect_ctx_bb(ce, ce->engine, emit_indirect_ctx_bb_canary);
> }
>
> -static bool check_ring_start(struct intel_context *ce)
> +static bool check_ring_start(struct intel_context *ce, bool per_ctx)
> {
> const u32 * const ctx_bb = (void *)(ce->lrc_reg_state) -
> - LRC_STATE_OFFSET + context_wa_bb_offset(ce);
> + LRC_STATE_OFFSET + context_wa_bb_offset(ce) +
> + (per_ctx ? PAGE_SIZE : 0);
>
> if (ctx_bb[CTX_BB_CANARY_INDEX] == ce->lrc_reg_state[CTX_RING_START])
> return true;
> @@ -1618,21 +1636,21 @@ static bool check_ring_start(struct intel_context *ce)
> return false;
> }
>
> -static int indirect_ctx_bb_check(struct intel_context *ce)
> +static int wabb_ctx_check(struct intel_context *ce, bool per_ctx)
> {
> int err;
>
> - err = indirect_ctx_submit_req(ce);
> + err = wabb_ctx_submit_req(ce);
> if (err)
> return err;
>
> - if (!check_ring_start(ce))
> + if (!check_ring_start(ce, per_ctx))
> return -EINVAL;
>
> return 0;
> }
>
> -static int __live_lrc_indirect_ctx_bb(struct intel_engine_cs *engine)
> +static int __lrc_wabb_ctx(struct intel_engine_cs *engine, bool per_ctx)
> {
> struct intel_context *a, *b;
> int err;
> @@ -1667,14 +1685,14 @@ static int __live_lrc_indirect_ctx_bb(struct intel_engine_cs *engine)
> * As ring start is restored apriori of starting the indirect ctx bb and
> * as it will be different for each context, it fits to this purpose.
> */
> - indirect_ctx_bb_setup(a);
> - indirect_ctx_bb_setup(b);
> + wabb_ctx_setup(a, per_ctx);
> + wabb_ctx_setup(b, per_ctx);
>
> - err = indirect_ctx_bb_check(a);
> + err = wabb_ctx_check(a, per_ctx);
> if (err)
> goto unpin_b;
>
> - err = indirect_ctx_bb_check(b);
> + err = wabb_ctx_check(b, per_ctx);
>
> unpin_b:
> intel_context_unpin(b);
> @@ -1688,7 +1706,7 @@ static int __live_lrc_indirect_ctx_bb(struct intel_engine_cs *engine)
> return err;
> }
>
> -static int live_lrc_indirect_ctx_bb(void *arg)
> +static int lrc_wabb_ctx(void *arg, bool per_ctx)
> {
> struct intel_gt *gt = arg;
> struct intel_engine_cs *engine;
> @@ -1697,7 +1715,7 @@ static int live_lrc_indirect_ctx_bb(void *arg)
>
> for_each_engine(engine, gt, id) {
> intel_engine_pm_get(engine);
> - err = __live_lrc_indirect_ctx_bb(engine);
> + err = __lrc_wabb_ctx(engine, per_ctx);
> intel_engine_pm_put(engine);
>
> if (igt_flush_test(gt->i915))
> @@ -1710,6 +1728,16 @@ static int live_lrc_indirect_ctx_bb(void *arg)
> return err;
> }
>
> +static int live_lrc_indirect_ctx_bb(void *arg)
> +{
> + return lrc_wabb_ctx(arg, false);
> +}
> +
> +static int live_lrc_per_ctx_bb(void *arg)
> +{
> + return lrc_wabb_ctx(arg, true);
> +}
> +
> static void garbage_reset(struct intel_engine_cs *engine,
> struct i915_request *rq)
> {
> @@ -1947,6 +1975,7 @@ int intel_lrc_live_selftests(struct drm_i915_private *i915)
> SUBTEST(live_lrc_garbage),
> SUBTEST(live_pphwsp_runtime),
> SUBTEST(live_lrc_indirect_ctx_bb),
> + SUBTEST(live_lrc_per_ctx_bb),
> };
>
> if (!HAS_LOGICAL_RING_CONTEXTS(i915))
>
* Re: [Intel-gfx] [PATCH v3 4/4] drm/i915: Set copy engine arbitration for Wa_16018031267 / Wa_16018063123
2023-10-23 7:41 ` [Intel-gfx] [PATCH v3 4/4] drm/i915: Set copy engine arbitration " Andrzej Hajda
@ 2023-10-23 9:55 ` Nirmoy Das
2023-10-23 15:24 ` Andrzej Hajda
0 siblings, 1 reply; 17+ messages in thread
From: Nirmoy Das @ 2023-10-23 9:55 UTC (permalink / raw)
To: Andrzej Hajda, intel-gfx; +Cc: Jonathan Cavitt, Nirmoy Das
Hi Andrzej,
On 10/23/2023 9:41 AM, Andrzej Hajda wrote:
> From: Jonathan Cavitt <jonathan.cavitt@intel.com>
>
> Set copy engine arbitration into round robin mode
> for part of Wa_16018031267 / Wa_16018063123 mitigation.
>
> Signed-off-by: Nirmoy Das <nirmoy.das@intel.com>
> Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
> Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com>
> ---
> drivers/gpu/drm/i915/gt/intel_engine_regs.h | 3 +++
> drivers/gpu/drm/i915/gt/intel_workarounds.c | 5 +++++
> 2 files changed, 8 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_regs.h b/drivers/gpu/drm/i915/gt/intel_engine_regs.h
> index b8618ee3e3041a..c0c8c12edea104 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine_regs.h
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_regs.h
> @@ -124,6 +124,9 @@
> #define RING_INDIRECT_CTX(base) _MMIO((base) + 0x1c4) /* gen8+ */
> #define RING_INDIRECT_CTX_OFFSET(base) _MMIO((base) + 0x1c8) /* gen8+ */
> #define ECOSKPD(base) _MMIO((base) + 0x1d0)
> +#define XEHP_BLITTER_SCHEDULING_MODE_MASK REG_GENMASK(12, 11)
> +#define XEHP_BLITTER_ROUND_ROBIN_MODE \
> + REG_FIELD_PREP(XEHP_BLITTER_SCHEDULING_MODE_MASK, 1)
> #define ECO_CONSTANT_BUFFER_SR_DISABLE REG_BIT(4)
> #define ECO_GATING_CX_ONLY REG_BIT(3)
> #define GEN6_BLITTER_FBC_NOTIFY REG_BIT(3)
> diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
> index 192ac0e59afa13..108d9326735910 100644
> --- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
> +++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
> @@ -2782,6 +2782,11 @@ xcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
> RING_SEMA_WAIT_POLL(engine->mmio_base),
> 1);
> }
> + /* Wa_16018031267, Wa_16018063123 */
> + if (NEEDS_FASTCOLOR_BLT_WABB(engine))
Not sure if I missed any previous discussion on this, but the WA talks
about applying this to the main copy engine. This needs to be taken into
account in NEEDS_FASTCOLOR_BLT_WABB().
> + wa_masked_field_set(wal, ECOSKPD(engine->mmio_base),
> + XEHP_BLITTER_SCHEDULING_MODE_MASK,
> + XEHP_BLITTER_ROUND_ROBIN_MODE);
> }
This function sets masked_reg = true and will not read the register back,
and I remember MattR asked internally not to use that if it is not
required.
With those two concerns handled, this is Reviewed-by: Nirmoy Das
<nirmoy.das@intel.com>
Regards,
Nirmoy
>
> static void
>
* Re: [Intel-gfx] [PATCH v3 0/4] Apply Wa_16018031267 / Wa_16018063123
2023-10-23 8:38 ` [Intel-gfx] [PATCH v3 0/4] Apply " Nirmoy Das
@ 2023-10-23 11:35 ` Andrzej Hajda
2023-10-23 12:20 ` Nirmoy Das
0 siblings, 1 reply; 17+ messages in thread
From: Andrzej Hajda @ 2023-10-23 11:35 UTC (permalink / raw)
To: Nirmoy Das, intel-gfx; +Cc: Chris Wilson, Jonathan Cavitt, Nirmoy Das
On 23.10.2023 10:38, Nirmoy Das wrote:
> Hi Andrzej
>
> On 10/23/2023 9:41 AM, Andrzej Hajda wrote:
>> Hi all,
>>
>> This is the series from Jonathan:
>> [PATCH v12 0/4] Apply Wa_16018031267 / Wa_16018063123
>>
>> taken over by me.
>>
>> Changes in this version are described in the patches, in short:
>> v2:
>> - use real memory as WABB destination,
>
> Do we still need the NULL PTE patch now?
In fact no, since we are using a real address.
On the other hand it is still valuable, IMO, but it is probably better to
drop it from this patchset.
Regards
Andrzej
>
>
> Regards,
>
> Nirmoy
>
>> - address CI complaints - do not decrease vm.total,
>> - minor reordering.
>> v3:
>> - fixed typos,
>> - removed spare defs,
>> - added tags
>>
>> Regards
>> Andrzej
>>
>> Andrzej Hajda (1):
>> drm/i915: Reserve some kernel space per vm
>>
>> Jonathan Cavitt (3):
>> drm/i915: Enable NULL PTE support for vm scratch
>> drm/i915: Add WABB blit for Wa_16018031267 / Wa_16018063123
>> drm/i915: Set copy engine arbitration for Wa_16018031267 /
>> Wa_16018063123
>>
>> .../drm/i915/gem/selftests/i915_gem_context.c | 6 ++
>> drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 41 +++++++
>> drivers/gpu/drm/i915/gt/intel_engine_regs.h | 6 ++
>> drivers/gpu/drm/i915/gt/intel_gt.h | 4 +
>> drivers/gpu/drm/i915/gt/intel_gt_types.h | 2 +
>> drivers/gpu/drm/i915/gt/intel_gtt.h | 2 +
>> drivers/gpu/drm/i915/gt/intel_lrc.c | 100 +++++++++++++++++-
>> drivers/gpu/drm/i915/gt/intel_workarounds.c | 5 +
>> drivers/gpu/drm/i915/gt/selftest_lrc.c | 65 ++++++++----
>> drivers/gpu/drm/i915/i915_drv.h | 2 +
>> drivers/gpu/drm/i915/i915_pci.c | 2 +
>> drivers/gpu/drm/i915/intel_device_info.h | 1 +
>> 12 files changed, 215 insertions(+), 21 deletions(-)
>>
>> ---
>> Andrzej Hajda (1):
>> drm/i915: Reserve some kernel space per vm
>>
>> Jonathan Cavitt (3):
>> drm/i915: Enable NULL PTE support for vm scratch
>> drm/i915: Add WABB blit for Wa_16018031267 / Wa_16018063123
>> drm/i915: Set copy engine arbitration for Wa_16018031267 /
>> Wa_16018063123
>>
>> .../gpu/drm/i915/gem/selftests/i915_gem_context.c | 6 ++
>> drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 41 +++++++++
>> drivers/gpu/drm/i915/gt/intel_engine_regs.h | 6 ++
>> drivers/gpu/drm/i915/gt/intel_gt.h | 4 +
>> drivers/gpu/drm/i915/gt/intel_gtt.h | 2 +
>> drivers/gpu/drm/i915/gt/intel_lrc.c | 100
>> ++++++++++++++++++++-
>> drivers/gpu/drm/i915/gt/intel_workarounds.c | 5 ++
>> drivers/gpu/drm/i915/gt/selftest_lrc.c | 65
>> ++++++++++----
>> drivers/gpu/drm/i915/i915_drv.h | 2 +
>> drivers/gpu/drm/i915/i915_pci.c | 2 +
>> drivers/gpu/drm/i915/intel_device_info.h | 1 +
>> 11 files changed, 213 insertions(+), 21 deletions(-)
>> ---
>> base-commit: 201c8a7bd1f3f415920a2df4b8a8817e973f42fe
>> change-id: 20231020-wabb-bbe9324a69a8
>>
>> Best regards,
* Re: [Intel-gfx] [PATCH v3 2/4] drm/i915: Reserve some kernel space per vm
2023-10-23 8:49 ` Nirmoy Das
@ 2023-10-23 11:40 ` Andrzej Hajda
0 siblings, 0 replies; 17+ messages in thread
From: Andrzej Hajda @ 2023-10-23 11:40 UTC (permalink / raw)
To: Nirmoy Das, intel-gfx; +Cc: Jonathan Cavitt, Chris Wilson
On 23.10.2023 10:49, Nirmoy Das wrote:
> Hi Andrzej,
>
> On 10/23/2023 9:41 AM, Andrzej Hajda wrote:
>> Reserve two pages in each vm for kernel space to use for things
>> such as workarounds.
>>
>> v2: use real memory, do not decrease vm.total
>>
>> Suggested-by: Chris Wilson <chris.p.wilson@linux.intel.com>
>> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
>> Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
>> ---
>> drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 38
>> ++++++++++++++++++++++++++++++++++++
>> drivers/gpu/drm/i915/gt/intel_gtt.h | 1 +
>> 2 files changed, 39 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
>> b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
>> index 84aa29715e0aca..c25e1d4cceeb17 100644
>> --- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
>> +++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
>> @@ -5,6 +5,7 @@
>> #include <linux/log2.h>
>> +#include "gem/i915_gem_internal.h"
>> #include "gem/i915_gem_lmem.h"
>> #include "gen8_ppgtt.h"
>> @@ -953,6 +954,39 @@ gen8_alloc_top_pd(struct i915_address_space *vm)
>> return ERR_PTR(err);
>> }
>> +static int gen8_init_rsvd(struct i915_address_space *vm)
>> +{
>> + const resource_size_t size = 2 * PAGE_SIZE;
>> + struct drm_i915_private *i915 = vm->i915;
>> + struct drm_i915_gem_object *obj;
>> + struct i915_vma *vma;
>> + int ret;
>> +
>> + obj = i915_gem_object_create_lmem(i915, size,
>> + I915_BO_ALLOC_VOLATILE |
>> + I915_BO_ALLOC_GPU_ONLY);
>
> Please add a comment on why the GPU_ONLY flag is used. It makes sense
> to me now, but it is good to have a comment for the future. Also, why
> are 2 pages reserved?
GPU only because it is just for GPU writes, nothing more.
As for the two pages, that is probably a leftover from previous versions.
Jonathan, if there are no objections I will use one page,
as it should be enough (IIRC, the WA description/discussion
mentioned that one cacheline is enough).
Regards
Andrzej
>
>
> Regards,
>
> Nirmoy
>
>> + if (IS_ERR(obj))
>> + obj = i915_gem_object_create_internal(i915, size);
>> + if (IS_ERR(obj))
>> + return PTR_ERR(obj);
>> +
>> + vma = i915_vma_instance(obj, vm, NULL);
>> + if (IS_ERR(vma)) {
>> + ret = PTR_ERR(vma);
>> + goto unref;
>> + }
>> +
>> + ret = i915_vma_pin(vma, 0, 0, PIN_USER | PIN_HIGH);
>> + if (ret)
>> + goto unref;
>> +
>> + vm->rsvd = i915_vma_make_unshrinkable(vma);
>> +
>> +unref:
>> + i915_gem_object_put(obj);
>> + return ret;
>> +}
>> +
>> /*
>> * GEN8 legacy ppgtt programming is accomplished through a max 4 PDP
>> registers
>> * with a net effect resembling a 2-level page table in normal x86
>> terms. Each
>> @@ -1034,6 +1068,10 @@ struct i915_ppgtt *gen8_ppgtt_create(struct
>> intel_gt *gt,
>> if (intel_vgpu_active(gt->i915))
>> gen8_ppgtt_notify_vgt(ppgtt, true);
>> + err = gen8_init_rsvd(&ppgtt->vm);
>> + if (err)
>> + goto err_put;
>> +
>> return ppgtt;
>> err_put:
>> diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h
>> b/drivers/gpu/drm/i915/gt/intel_gtt.h
>> index 15c71da14d1d27..4a35ef24501b5f 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_gtt.h
>> +++ b/drivers/gpu/drm/i915/gt/intel_gtt.h
>> @@ -250,6 +250,7 @@ struct i915_address_space {
>> struct work_struct release_work;
>> struct drm_mm mm;
>> + struct i915_vma *rsvd;
>> struct intel_gt *gt;
>> struct drm_i915_private *i915;
>> struct device *dma;
>>
* Re: [Intel-gfx] [PATCH v3 0/4] Apply Wa_16018031267 / Wa_16018063123
2023-10-23 11:35 ` Andrzej Hajda
@ 2023-10-23 12:20 ` Nirmoy Das
0 siblings, 0 replies; 17+ messages in thread
From: Nirmoy Das @ 2023-10-23 12:20 UTC (permalink / raw)
To: Andrzej Hajda, intel-gfx; +Cc: Chris Wilson, Jonathan Cavitt, Nirmoy Das
On 10/23/2023 1:35 PM, Andrzej Hajda wrote:
>
>
> On 23.10.2023 10:38, Nirmoy Das wrote:
>> Hi Andrzej
>>
>> On 10/23/2023 9:41 AM, Andrzej Hajda wrote:
>>> Hi all,
>>>
>>> This is the series from Jonathan:
>>> [PATCH v12 0/4] Apply Wa_16018031267 / Wa_16018063123
>>>
>>> taken over by me.
>>>
>>> Changes in this version are described in the patches, in short:
>>> v2:
>>> - use real memory as WABB destination,
>>
>> Do we still need the NULL PTE patch now?
>
> In fact no, since we are using a real address.
> On the other hand it is still valuable, IMO, but it is probably better
> to drop it from this patchset.
Yes, sounds good.
Thanks,
Nirmoy
>
>
> Regards
> Andrzej
>
>>
>>
>> Regards,
>>
>> Nirmoy
>>
>>> - address CI complaints - do not decrease vm.total,
>>> - minor reordering.
>>> v3:
>>> - fixed typos,
>>> - removed spare defs,
>>> - added tags
>>>
>>> Regards
>>> Andrzej
>>>
>>> Andrzej Hajda (1):
>>> drm/i915: Reserve some kernel space per vm
>>>
>>> Jonathan Cavitt (3):
>>> drm/i915: Enable NULL PTE support for vm scratch
>>> drm/i915: Add WABB blit for Wa_16018031267 / Wa_16018063123
>>> drm/i915: Set copy engine arbitration for Wa_16018031267 /
>>> Wa_16018063123
>>>
>>> .../drm/i915/gem/selftests/i915_gem_context.c | 6 ++
>>> drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 41 +++++++
>>> drivers/gpu/drm/i915/gt/intel_engine_regs.h | 6 ++
>>> drivers/gpu/drm/i915/gt/intel_gt.h | 4 +
>>> drivers/gpu/drm/i915/gt/intel_gt_types.h | 2 +
>>> drivers/gpu/drm/i915/gt/intel_gtt.h | 2 +
>>> drivers/gpu/drm/i915/gt/intel_lrc.c | 100
>>> +++++++++++++++++-
>>> drivers/gpu/drm/i915/gt/intel_workarounds.c | 5 +
>>> drivers/gpu/drm/i915/gt/selftest_lrc.c | 65 ++++++++----
>>> drivers/gpu/drm/i915/i915_drv.h | 2 +
>>> drivers/gpu/drm/i915/i915_pci.c | 2 +
>>> drivers/gpu/drm/i915/intel_device_info.h | 1 +
>>> 12 files changed, 215 insertions(+), 21 deletions(-)
>>>
>>> ---
>>> Andrzej Hajda (1):
>>> drm/i915: Reserve some kernel space per vm
>>>
>>> Jonathan Cavitt (3):
>>> drm/i915: Enable NULL PTE support for vm scratch
>>> drm/i915: Add WABB blit for Wa_16018031267 / Wa_16018063123
>>> drm/i915: Set copy engine arbitration for Wa_16018031267 /
>>> Wa_16018063123
>>>
>>> .../gpu/drm/i915/gem/selftests/i915_gem_context.c | 6 ++
>>> drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 41 +++++++++
>>> drivers/gpu/drm/i915/gt/intel_engine_regs.h | 6 ++
>>> drivers/gpu/drm/i915/gt/intel_gt.h | 4 +
>>> drivers/gpu/drm/i915/gt/intel_gtt.h | 2 +
>>> drivers/gpu/drm/i915/gt/intel_lrc.c | 100
>>> ++++++++++++++++++++-
>>> drivers/gpu/drm/i915/gt/intel_workarounds.c | 5 ++
>>> drivers/gpu/drm/i915/gt/selftest_lrc.c | 65
>>> ++++++++++----
>>> drivers/gpu/drm/i915/i915_drv.h | 2 +
>>> drivers/gpu/drm/i915/i915_pci.c | 2 +
>>> drivers/gpu/drm/i915/intel_device_info.h | 1 +
>>> 11 files changed, 213 insertions(+), 21 deletions(-)
>>> ---
>>> base-commit: 201c8a7bd1f3f415920a2df4b8a8817e973f42fe
>>> change-id: 20231020-wabb-bbe9324a69a8
>>>
>>> Best regards,
>
* Re: [Intel-gfx] [PATCH v3 1/4] drm/i915: Enable NULL PTE support for vm scratch
2023-10-23 7:41 ` [Intel-gfx] [PATCH v3 1/4] drm/i915: Enable NULL PTE support for vm scratch Andrzej Hajda
@ 2023-10-23 12:23 ` Nirmoy Das
2023-10-23 14:54 ` Andrzej Hajda
0 siblings, 1 reply; 17+ messages in thread
From: Nirmoy Das @ 2023-10-23 12:23 UTC (permalink / raw)
To: Andrzej Hajda, intel-gfx; +Cc: Chris Wilson, Jonathan Cavitt
On 10/23/2023 9:41 AM, Andrzej Hajda wrote:
> From: Jonathan Cavitt <jonathan.cavitt@intel.com>
>
> Enable NULL PTE support for vm scratch pages.
>
> The use of NULL PTEs in vm scratch pages requires us to change how
> the i915 gem_contexts live selftest perform vm_isolation: instead of
> checking the scratch pages are isolated and don't affect each other, we
> check that all changes to the scratch pages are voided.
>
> v2: fixed order of definitions
> v3: fixed typo
>
> Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
> Suggested-by: Chris Wilson <chris.p.wilson@linux.intel.com>
> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
> Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
> ---
> drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c | 6 ++++++
> drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 3 +++
> drivers/gpu/drm/i915/gt/intel_gtt.h | 1 +
> drivers/gpu/drm/i915/i915_drv.h | 2 ++
> drivers/gpu/drm/i915/i915_pci.c | 2 ++
> drivers/gpu/drm/i915/intel_device_info.h | 1 +
> 6 files changed, 15 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
> index 7021b6e9b219ef..48fc5990343bc7 100644
> --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
> @@ -1751,6 +1751,12 @@ static int check_scratch_page(struct i915_gem_context *ctx, u32 *out)
> if (!vm)
> return -ENODEV;
>
> + if (HAS_NULL_PAGE(vm->i915)) {
> + if (out)
> + *out = 0;
> + return 0;
> + }
> +
> if (!vm->scratch[0]) {
> pr_err("No scratch page!\n");
> return -EINVAL;
> diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
> index 9895e18df0435a..84aa29715e0aca 100644
> --- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
> +++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
> @@ -855,6 +855,9 @@ static int gen8_init_scratch(struct i915_address_space *vm)
> I915_CACHE_NONE),
> pte_flags);
>
> + if (HAS_NULL_PAGE(vm->i915))
> + vm->scratch[0]->encode |= PTE_NULL_PAGE;
> +
> for (i = 1; i <= vm->top; i++) {
> struct drm_i915_gem_object *obj;
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h
> index b471edac269920..15c71da14d1d27 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gtt.h
> +++ b/drivers/gpu/drm/i915/gt/intel_gtt.h
> @@ -151,6 +151,7 @@ typedef u64 gen8_pte_t;
>
> #define GEN8_PAGE_PRESENT BIT_ULL(0)
> #define GEN8_PAGE_RW BIT_ULL(1)
> +#define PTE_NULL_PAGE BIT_ULL(9)
>
> #define GEN8_PDE_IPS_64K BIT(11)
> #define GEN8_PDE_PS_2M BIT(7)
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index cb60fc9cf87373..8f61137deb6cef 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -776,6 +776,8 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
> */
> #define HAS_FLAT_CCS(i915) (INTEL_INFO(i915)->has_flat_ccs)
>
> +#define HAS_NULL_PAGE(dev_priv) (INTEL_INFO(dev_priv)->has_null_page)
> +
> #define HAS_GT_UC(i915) (INTEL_INFO(i915)->has_gt_uc)
>
> #define HAS_POOLED_EU(i915) (RUNTIME_INFO(i915)->has_pooled_eu)
> diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
> index 454467cfa52b9d..aa6e4559b0f0c7 100644
> --- a/drivers/gpu/drm/i915/i915_pci.c
> +++ b/drivers/gpu/drm/i915/i915_pci.c
> @@ -642,6 +642,7 @@ static const struct intel_device_info jsl_info = {
> GEN(12), \
> TGL_CACHELEVEL, \
> .has_global_mocs = 1, \
> + .has_null_page = 1, \
> .has_pxp = 1, \
> .max_pat_index = 3
>
> @@ -719,6 +720,7 @@ static const struct intel_device_info adl_p_info = {
> .has_logical_ring_contexts = 1, \
> .has_logical_ring_elsq = 1, \
> .has_mslice_steering = 1, \
> + .has_null_page = 1, \
> .has_oa_bpc_reporting = 1, \
> .has_oa_slice_contrib_limits = 1, \
> .has_oam = 1, \
It is not clear from the commit message why only the above platforms
are picked.
Regards,
Nirmoy
> diff --git a/drivers/gpu/drm/i915/intel_device_info.h b/drivers/gpu/drm/i915/intel_device_info.h
> index 39817490b13fd4..36e169695cd61b 100644
> --- a/drivers/gpu/drm/i915/intel_device_info.h
> +++ b/drivers/gpu/drm/i915/intel_device_info.h
> @@ -160,6 +160,7 @@ enum intel_ppgtt_type {
> func(has_logical_ring_elsq); \
> func(has_media_ratio_mode); \
> func(has_mslice_steering); \
> + func(has_null_page); \
> func(has_oa_bpc_reporting); \
> func(has_oa_slice_contrib_limits); \
> func(has_oam); \
>
* Re: [Intel-gfx] [PATCH v3 1/4] drm/i915: Enable NULL PTE support for vm scratch
2023-10-23 12:23 ` Nirmoy Das
@ 2023-10-23 14:54 ` Andrzej Hajda
0 siblings, 0 replies; 17+ messages in thread
From: Andrzej Hajda @ 2023-10-23 14:54 UTC (permalink / raw)
To: Nirmoy Das, intel-gfx; +Cc: Jonathan Cavitt, Chris Wilson
On 23.10.2023 14:23, Nirmoy Das wrote:
>
> On 10/23/2023 9:41 AM, Andrzej Hajda wrote:
>> From: Jonathan Cavitt <jonathan.cavitt@intel.com>
>>
>> Enable NULL PTE support for vm scratch pages.
>>
>> The use of NULL PTEs in vm scratch pages requires us to change how
>> the i915 gem_contexts live selftest perform vm_isolation: instead of
>> checking the scratch pages are isolated and don't affect each other, we
>> check that all changes to the scratch pages are voided.
>>
>> v2: fixed order of definitions
>> v3: fixed typo
>>
>> Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
>> Suggested-by: Chris Wilson <chris.p.wilson@linux.intel.com>
>> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
>> Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
>> ---
>> drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c | 6 ++++++
>> drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 3 +++
>> drivers/gpu/drm/i915/gt/intel_gtt.h | 1 +
>> drivers/gpu/drm/i915/i915_drv.h | 2 ++
>> drivers/gpu/drm/i915/i915_pci.c | 2 ++
>> drivers/gpu/drm/i915/intel_device_info.h | 1 +
>> 6 files changed, 15 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
>> b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
>> index 7021b6e9b219ef..48fc5990343bc7 100644
>> --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
>> +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
>> @@ -1751,6 +1751,12 @@ static int check_scratch_page(struct
>> i915_gem_context *ctx, u32 *out)
>> if (!vm)
>> return -ENODEV;
>> + if (HAS_NULL_PAGE(vm->i915)) {
>> + if (out)
>> + *out = 0;
>> + return 0;
>> + }
>> +
>> if (!vm->scratch[0]) {
>> pr_err("No scratch page!\n");
>> return -EINVAL;
>> diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
>> b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
>> index 9895e18df0435a..84aa29715e0aca 100644
>> --- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
>> +++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
>> @@ -855,6 +855,9 @@ static int gen8_init_scratch(struct
>> i915_address_space *vm)
>> I915_CACHE_NONE),
>> pte_flags);
>> + if (HAS_NULL_PAGE(vm->i915))
>> + vm->scratch[0]->encode |= PTE_NULL_PAGE;
>> +
>> for (i = 1; i <= vm->top; i++) {
>> struct drm_i915_gem_object *obj;
>> diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h
>> b/drivers/gpu/drm/i915/gt/intel_gtt.h
>> index b471edac269920..15c71da14d1d27 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_gtt.h
>> +++ b/drivers/gpu/drm/i915/gt/intel_gtt.h
>> @@ -151,6 +151,7 @@ typedef u64 gen8_pte_t;
>> #define GEN8_PAGE_PRESENT BIT_ULL(0)
>> #define GEN8_PAGE_RW BIT_ULL(1)
>> +#define PTE_NULL_PAGE BIT_ULL(9)
>> #define GEN8_PDE_IPS_64K BIT(11)
>> #define GEN8_PDE_PS_2M BIT(7)
>> diff --git a/drivers/gpu/drm/i915/i915_drv.h
>> b/drivers/gpu/drm/i915/i915_drv.h
>> index cb60fc9cf87373..8f61137deb6cef 100644
>> --- a/drivers/gpu/drm/i915/i915_drv.h
>> +++ b/drivers/gpu/drm/i915/i915_drv.h
>> @@ -776,6 +776,8 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
>> */
>> #define HAS_FLAT_CCS(i915) (INTEL_INFO(i915)->has_flat_ccs)
>> +#define HAS_NULL_PAGE(dev_priv) (INTEL_INFO(dev_priv)->has_null_page)
>> +
>> #define HAS_GT_UC(i915) (INTEL_INFO(i915)->has_gt_uc)
>> #define HAS_POOLED_EU(i915) (RUNTIME_INFO(i915)->has_pooled_eu)
>> diff --git a/drivers/gpu/drm/i915/i915_pci.c
>> b/drivers/gpu/drm/i915/i915_pci.c
>> index 454467cfa52b9d..aa6e4559b0f0c7 100644
>> --- a/drivers/gpu/drm/i915/i915_pci.c
>> +++ b/drivers/gpu/drm/i915/i915_pci.c
>> @@ -642,6 +642,7 @@ static const struct intel_device_info jsl_info = {
>> GEN(12), \
>> TGL_CACHELEVEL, \
>> .has_global_mocs = 1, \
>> + .has_null_page = 1, \
>> .has_pxp = 1, \
>> .max_pat_index = 3
>> @@ -719,6 +720,7 @@ static const struct intel_device_info adl_p_info = {
>> .has_logical_ring_contexts = 1, \
>> .has_logical_ring_elsq = 1, \
>> .has_mslice_steering = 1, \
>> + .has_null_page = 1, \
>> .has_oa_bpc_reporting = 1, \
>> .has_oa_slice_contrib_limits = 1, \
>> .has_oam = 1, \
>
> It is not clear from the commit message why only the above platforms
> are picked.
This is a git issue: it fails to parse #define blocks and therefore
provides incorrect hunk-context hints. What we actually have here is:
1st: #define GEN12_FEATURES
2nd: #define XE_HP_FEATURES, which is included in all later gens.
So, IIRC, all gen12+ platforms are covered.
Just for information - this patch will be dropped anyway.
Regards
Andrzej
>
>
> Regards,
>
> Nirmoy
>
>> diff --git a/drivers/gpu/drm/i915/intel_device_info.h
>> b/drivers/gpu/drm/i915/intel_device_info.h
>> index 39817490b13fd4..36e169695cd61b 100644
>> --- a/drivers/gpu/drm/i915/intel_device_info.h
>> +++ b/drivers/gpu/drm/i915/intel_device_info.h
>> @@ -160,6 +160,7 @@ enum intel_ppgtt_type {
>> func(has_logical_ring_elsq); \
>> func(has_media_ratio_mode); \
>> func(has_mslice_steering); \
>> + func(has_null_page); \
>> func(has_oa_bpc_reporting); \
>> func(has_oa_slice_contrib_limits); \
>> func(has_oam); \
>>
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [Intel-gfx] [PATCH v3 4/4] drm/i915: Set copy engine arbitration for Wa_16018031267 / Wa_16018063123
2023-10-23 9:55 ` Nirmoy Das
@ 2023-10-23 15:24 ` Andrzej Hajda
2023-10-23 16:06 ` Nirmoy Das
0 siblings, 1 reply; 17+ messages in thread
From: Andrzej Hajda @ 2023-10-23 15:24 UTC (permalink / raw)
To: Nirmoy Das, intel-gfx; +Cc: Jonathan Cavitt, Nirmoy Das
On 23.10.2023 11:55, Nirmoy Das wrote:
> Hi Andrzej,
>
> On 10/23/2023 9:41 AM, Andrzej Hajda wrote:
>> From: Jonathan Cavitt <jonathan.cavitt@intel.com>
>>
>> Set copy engine arbitration into round robin mode
>> for part of Wa_16018031267 / Wa_16018063123 mitigation.
>>
>> Signed-off-by: Nirmoy Das <nirmoy.das@intel.com>
>> Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
>> Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com>
>> ---
>> drivers/gpu/drm/i915/gt/intel_engine_regs.h | 3 +++
>> drivers/gpu/drm/i915/gt/intel_workarounds.c | 5 +++++
>> 2 files changed, 8 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_regs.h
>> b/drivers/gpu/drm/i915/gt/intel_engine_regs.h
>> index b8618ee3e3041a..c0c8c12edea104 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_engine_regs.h
>> +++ b/drivers/gpu/drm/i915/gt/intel_engine_regs.h
>> @@ -124,6 +124,9 @@
>> #define RING_INDIRECT_CTX(base) _MMIO((base) + 0x1c4) /*
>> gen8+ */
>> #define RING_INDIRECT_CTX_OFFSET(base) _MMIO((base) + 0x1c8)
>> /* gen8+ */
>> #define ECOSKPD(base) _MMIO((base) + 0x1d0)
>> +#define XEHP_BLITTER_SCHEDULING_MODE_MASK REG_GENMASK(12, 11)
>> +#define XEHP_BLITTER_ROUND_ROBIN_MODE \
>> + REG_FIELD_PREP(XEHP_BLITTER_SCHEDULING_MODE_MASK, 1)
>> #define ECO_CONSTANT_BUFFER_SR_DISABLE REG_BIT(4)
>> #define ECO_GATING_CX_ONLY REG_BIT(3)
>> #define GEN6_BLITTER_FBC_NOTIFY REG_BIT(3)
>> diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c
>> b/drivers/gpu/drm/i915/gt/intel_workarounds.c
>> index 192ac0e59afa13..108d9326735910 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
>> +++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
>> @@ -2782,6 +2782,11 @@ xcs_engine_wa_init(struct intel_engine_cs
>> *engine, struct i915_wa_list *wal)
>> RING_SEMA_WAIT_POLL(engine->mmio_base),
>> 1);
>> }
>> + /* Wa_16018031267, Wa_16018063123 */
>> + if (NEEDS_FASTCOLOR_BLT_WABB(engine))
>
>
> Not sure if I missed any previous discussion on this, the WA talked
> about applying this on main copy engine. This needs to be taken into
> account in
>
> NEEDS_FASTCOLOR_BLT_WABB()
Do you mean we need to check whether instance == 0? Currently the macro
above only checks that it is a copy engine.
>
>> + wa_masked_field_set(wal, ECOSKPD(engine->mmio_base),
>> + XEHP_BLITTER_SCHEDULING_MODE_MASK,
>> + XEHP_BLITTER_ROUND_ROBIN_MODE);
>> }
>
> This function sets masked_reg = true and will not read the register back
> and I remember MattR asked internally to not use that if that is not
> required.
IIRC, wa_masked_field_set sets read_mask, so a read-back is performed.
In any case, it is the only function (besides the low-level wa_add) that
works on fields (not bits). Are you sure?
Regards
Andrzej
>
>
> With those two concern handled this is Reviewed-by: Nirmoy Das
> <nirmoy.das@intel.com>
>
>
> Regards,
>
> Nirmoy
>
>> static void
>>
* Re: [Intel-gfx] [PATCH v3 4/4] drm/i915: Set copy engine arbitration for Wa_16018031267 / Wa_16018063123
2023-10-23 15:24 ` Andrzej Hajda
@ 2023-10-23 16:06 ` Nirmoy Das
0 siblings, 0 replies; 17+ messages in thread
From: Nirmoy Das @ 2023-10-23 16:06 UTC (permalink / raw)
To: Andrzej Hajda, intel-gfx; +Cc: Jonathan Cavitt, Nirmoy Das
On 10/23/2023 5:24 PM, Andrzej Hajda wrote:
> On 23.10.2023 11:55, Nirmoy Das wrote:
>> Hi Andrzej,
>>
>> On 10/23/2023 9:41 AM, Andrzej Hajda wrote:
>>> From: Jonathan Cavitt <jonathan.cavitt@intel.com>
>>>
>>> Set copy engine arbitration into round robin mode
>>> for part of Wa_16018031267 / Wa_16018063123 mitigation.
>>>
>>> Signed-off-by: Nirmoy Das <nirmoy.das@intel.com>
>>> Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
>>> Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com>
>>> ---
>>> drivers/gpu/drm/i915/gt/intel_engine_regs.h | 3 +++
>>> drivers/gpu/drm/i915/gt/intel_workarounds.c | 5 +++++
>>> 2 files changed, 8 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_regs.h
>>> b/drivers/gpu/drm/i915/gt/intel_engine_regs.h
>>> index b8618ee3e3041a..c0c8c12edea104 100644
>>> --- a/drivers/gpu/drm/i915/gt/intel_engine_regs.h
>>> +++ b/drivers/gpu/drm/i915/gt/intel_engine_regs.h
>>> @@ -124,6 +124,9 @@
>>> #define RING_INDIRECT_CTX(base) _MMIO((base) + 0x1c4)
>>> /* gen8+ */
>>> #define RING_INDIRECT_CTX_OFFSET(base) _MMIO((base) +
>>> 0x1c8) /* gen8+ */
>>> #define ECOSKPD(base) _MMIO((base) + 0x1d0)
>>> +#define XEHP_BLITTER_SCHEDULING_MODE_MASK REG_GENMASK(12, 11)
>>> +#define XEHP_BLITTER_ROUND_ROBIN_MODE \
>>> + REG_FIELD_PREP(XEHP_BLITTER_SCHEDULING_MODE_MASK, 1)
>>> #define ECO_CONSTANT_BUFFER_SR_DISABLE REG_BIT(4)
>>> #define ECO_GATING_CX_ONLY REG_BIT(3)
>>> #define GEN6_BLITTER_FBC_NOTIFY REG_BIT(3)
>>> diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c
>>> b/drivers/gpu/drm/i915/gt/intel_workarounds.c
>>> index 192ac0e59afa13..108d9326735910 100644
>>> --- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
>>> +++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
>>> @@ -2782,6 +2782,11 @@ xcs_engine_wa_init(struct intel_engine_cs
>>> *engine, struct i915_wa_list *wal)
>>> RING_SEMA_WAIT_POLL(engine->mmio_base),
>>> 1);
>>> }
>>> + /* Wa_16018031267, Wa_16018063123 */
>>> + if (NEEDS_FASTCOLOR_BLT_WABB(engine))
>>
>>
>> Not sure if I missed any previous discussion on this, the WA talked
>> about applying this on main copy engine. This needs to be taken into
>> account in
>>
>> NEEDS_FASTCOLOR_BLT_WABB()
>
> Do you mean we need to check whether instance == 0? Currently the macro
> above only checks that it is a copy engine.
Yes, The WA should be limited to BCS0.
>
>
>>
>>> + wa_masked_field_set(wal, ECOSKPD(engine->mmio_base),
>>> + XEHP_BLITTER_SCHEDULING_MODE_MASK,
>>> + XEHP_BLITTER_ROUND_ROBIN_MODE);
>>> }
>>
>> This function sets masked_reg = true and will not read the register
>> back and I remember MattR asked internally to not use that if that is
>> not required.
>
> IIRC, wa_masked_field_set sets read_mask, so read back is performed,
> anyway it is the only function (beside low level wa_add), which works
> on fields(not bits). Are you sure?
Yes, you are right. I misread something. Please ignore that comment.
Regards,
Nirmoy
>
> Regards
> Andrzej
>
>>
>>
>> With those two concern handled this is Reviewed-by: Nirmoy Das
>> <nirmoy.das@intel.com>
>>
>>
>> Regards,
>>
>> Nirmoy
>>
>>> static void
>>>
>
* Re: [Intel-gfx] [PATCH v3 3/4] drm/i915: Add WABB blit for Wa_16018031267 / Wa_16018063123
2023-10-23 9:05 ` Nirmoy Das
@ 2023-10-23 19:08 ` Andrzej Hajda
0 siblings, 0 replies; 17+ messages in thread
From: Andrzej Hajda @ 2023-10-23 19:08 UTC (permalink / raw)
To: Nirmoy Das, intel-gfx; +Cc: Jonathan Cavitt, Nirmoy Das
On 23.10.2023 11:05, Nirmoy Das wrote:
>
> On 10/23/2023 9:41 AM, Andrzej Hajda wrote:
>> From: Jonathan Cavitt <jonathan.cavitt@intel.com>
>>
>> Apply WABB blit for Wa_16018031267 / Wa_16018063123.
>
> Should this be split into two patches, one that adds per_ctx wabb and
> another
>
> where this WA is applied on top of per_ctx BB ?
That way some functions, for example setup_per_ctx_bb, would be unused
after the 1st patch.
Maybe it would be better to split out the selftest part instead?
Regards
Andrzej
>
>
>> Additionally, update the lrc selftest to exercise the new
>> WABB changes.
>>
>> v3: drop unused enum definition
>>
>> Co-developed-by: Nirmoy Das <nirmoy.das@intel.com>
>> Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
>> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
>> Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
>
> I don't think the author can also be a reviewer.
>
>
> Regards,
>
> Nirmoy
>
>> ---
>> drivers/gpu/drm/i915/gt/intel_engine_regs.h | 3 +
>> drivers/gpu/drm/i915/gt/intel_gt.h | 4 ++
>> drivers/gpu/drm/i915/gt/intel_lrc.c | 100
>> +++++++++++++++++++++++++++-
>> drivers/gpu/drm/i915/gt/selftest_lrc.c | 65 +++++++++++++-----
>> 4 files changed, 151 insertions(+), 21 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_regs.h
>> b/drivers/gpu/drm/i915/gt/intel_engine_regs.h
>> index fdd4ddd3a978a2..b8618ee3e3041a 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_engine_regs.h
>> +++ b/drivers/gpu/drm/i915/gt/intel_engine_regs.h
>> @@ -118,6 +118,9 @@
>> #define CCID_EXTENDED_STATE_RESTORE BIT(2)
>> #define CCID_EXTENDED_STATE_SAVE BIT(3)
>> #define RING_BB_PER_CTX_PTR(base) _MMIO((base) + 0x1c0) /*
>> gen8+ */
>> +#define PER_CTX_BB_FORCE BIT(2)
>> +#define PER_CTX_BB_VALID BIT(0)
>> +
>> #define RING_INDIRECT_CTX(base) _MMIO((base) + 0x1c4) /*
>> gen8+ */
>> #define RING_INDIRECT_CTX_OFFSET(base) _MMIO((base) + 0x1c8)
>> /* gen8+ */
>> #define ECOSKPD(base) _MMIO((base) + 0x1d0)
>> diff --git a/drivers/gpu/drm/i915/gt/intel_gt.h
>> b/drivers/gpu/drm/i915/gt/intel_gt.h
>> index 970bedf6b78a7b..50989fc2b6debe 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_gt.h
>> +++ b/drivers/gpu/drm/i915/gt/intel_gt.h
>> @@ -82,6 +82,10 @@ struct drm_printer;
>> ##__VA_ARGS__); \
>> } while (0)
>> +#define NEEDS_FASTCOLOR_BLT_WABB(engine) ( \
>> + IS_GFX_GT_IP_RANGE(engine->gt, IP_VER(12, 55), IP_VER(12, 71)) && \
>> + engine->class == COPY_ENGINE_CLASS)
>> +
>> static inline bool gt_is_root(struct intel_gt *gt)
>> {
>> return !gt->info.id;
>> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c
>> b/drivers/gpu/drm/i915/gt/intel_lrc.c
>> index eaf66d90316655..96ef901113eae9 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
>> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
>> @@ -828,6 +828,18 @@ lrc_ring_indirect_offset_default(const struct
>> intel_engine_cs *engine)
>> return 0;
>> }
>> +static void
>> +lrc_setup_bb_per_ctx(u32 *regs,
>> + const struct intel_engine_cs *engine,
>> + u32 ctx_bb_ggtt_addr)
>> +{
>> + GEM_BUG_ON(lrc_ring_wa_bb_per_ctx(engine) == -1);
>> + regs[lrc_ring_wa_bb_per_ctx(engine) + 1] =
>> + ctx_bb_ggtt_addr |
>> + PER_CTX_BB_FORCE |
>> + PER_CTX_BB_VALID;
>> +}
>> +
>> static void
>> lrc_setup_indirect_ctx(u32 *regs,
>> const struct intel_engine_cs *engine,
>> @@ -1020,7 +1032,13 @@ static u32 context_wa_bb_offset(const struct
>> intel_context *ce)
>> return PAGE_SIZE * ce->wa_bb_page;
>> }
>> -static u32 *context_indirect_bb(const struct intel_context *ce)
>> +/*
>> + * per_ctx below determines which WABB section is used.
>> + * When true, the function returns the location of the
>> + * PER_CTX_BB. When false, the function returns the
>> + * location of the INDIRECT_CTX.
>> + */
>> +static u32 *context_wabb(const struct intel_context *ce, bool per_ctx)
>> {
>> void *ptr;
>> @@ -1029,6 +1047,7 @@ static u32 *context_indirect_bb(const struct
>> intel_context *ce)
>> ptr = ce->lrc_reg_state;
>> ptr -= LRC_STATE_OFFSET; /* back to start of context image */
>> ptr += context_wa_bb_offset(ce);
>> + ptr += per_ctx ? PAGE_SIZE : 0;
>> return ptr;
>> }
>> @@ -1105,7 +1124,8 @@ __lrc_alloc_state(struct intel_context *ce,
>> struct intel_engine_cs *engine)
>> if (GRAPHICS_VER(engine->i915) >= 12) {
>> ce->wa_bb_page = context_size / PAGE_SIZE;
>> - context_size += PAGE_SIZE;
>> + /* INDIRECT_CTX and PER_CTX_BB need separate pages. */
>> + context_size += PAGE_SIZE * 2;
>> }
>> if (intel_context_is_parent(ce) && intel_engine_uses_guc(engine)) {
>> @@ -1407,12 +1427,85 @@ gen12_emit_indirect_ctx_xcs(const struct
>> intel_context *ce, u32 *cs)
>> return gen12_emit_aux_table_inv(ce->engine, cs);
>> }
>> +static u32 *xehp_emit_fastcolor_blt_wabb(const struct intel_context
>> *ce, u32 *cs)
>> +{
>> + struct intel_gt *gt = ce->engine->gt;
>> + int mocs = gt->mocs.uc_index << 1;
>> +
>> + /**
>> + * Wa_16018031267 / Wa_16018063123 requires that SW forces the
>> + * main copy engine arbitration into round robin mode. We
>> + * additionally need to submit the following WABB blt command
>> + * to produce 4 subblits with each subblit generating 0 byte
>> + * write requests as WABB:
>> + *
>> + * XY_FASTCOLOR_BLT
>> + * BG0 -> 5100000E
>> + * BG1 -> 0000003F (Dest pitch)
>> + * BG2 -> 00000000 (X1, Y1) = (0, 0)
>> + * BG3 -> 00040001 (X2, Y2) = (1, 4)
>> + * BG4 -> scratch
>> + * BG5 -> scratch
>> + * BG6-12 -> 00000000
>> + * BG13 -> 20004004 (Surf. Width= 2,Surf. Height = 5 )
>> + * BG14 -> 00000010 (Qpitch = 4)
>> + * BG15 -> 00000000
>> + */
>> + *cs++ = XY_FAST_COLOR_BLT_CMD | (16 - 2);
>> + *cs++ = FIELD_PREP(XY_FAST_COLOR_BLT_MOCS_MASK, mocs) | 0x3f;
>> + *cs++ = 0;
>> + *cs++ = 4 << 16 | 1;
>> + *cs++ = lower_32_bits(i915_vma_offset(ce->vm->rsvd));
>> + *cs++ = upper_32_bits(i915_vma_offset(ce->vm->rsvd));
>> + *cs++ = 0;
>> + *cs++ = 0;
>> + *cs++ = 0;
>> + *cs++ = 0;
>> + *cs++ = 0;
>> + *cs++ = 0;
>> + *cs++ = 0;
>> + *cs++ = 0x20004004;
>> + *cs++ = 0x10;
>> + *cs++ = 0;
>> +
>> + return cs;
>> +}
>> +
>> +static u32 *
>> +xehp_emit_per_ctx_bb(const struct intel_context *ce, u32 *cs)
>> +{
>> + /* Wa_16018031267, Wa_16018063123 */
>> + if (NEEDS_FASTCOLOR_BLT_WABB(ce->engine))
>> + cs = xehp_emit_fastcolor_blt_wabb(ce, cs);
>> +
>> + return cs;
>> +}
>> +
>> +static void
>> +setup_per_ctx_bb(const struct intel_context *ce,
>> + const struct intel_engine_cs *engine,
>> + u32 *(*emit)(const struct intel_context *, u32 *))
>> +{
>> + /* Place PER_CTX_BB on next page after INDIRECT_CTX */
>> + u32 * const start = context_wabb(ce, true);
>> + u32 *cs;
>> +
>> + cs = emit(ce, start);
>> +
>> + /* PER_CTX_BB must manually terminate */
>> + *cs++ = MI_BATCH_BUFFER_END;
>> +
>> + GEM_BUG_ON(cs - start > I915_GTT_PAGE_SIZE / sizeof(*cs));
>> + lrc_setup_bb_per_ctx(ce->lrc_reg_state, engine,
>> + lrc_indirect_bb(ce) + PAGE_SIZE);
>> +}
>> +
>> static void
>> setup_indirect_ctx_bb(const struct intel_context *ce,
>> const struct intel_engine_cs *engine,
>> u32 *(*emit)(const struct intel_context *, u32 *))
>> {
>> - u32 * const start = context_indirect_bb(ce);
>> + u32 * const start = context_wabb(ce, false);
>> u32 *cs;
>> cs = emit(ce, start);
>> @@ -1511,6 +1604,7 @@ u32 lrc_update_regs(const struct intel_context *ce,
>> /* Mutually exclusive wrt to global indirect bb */
>> GEM_BUG_ON(engine->wa_ctx.indirect_ctx.size);
>> setup_indirect_ctx_bb(ce, engine, fn);
>> + setup_per_ctx_bb(ce, engine, xehp_emit_per_ctx_bb);
>> }
>> return lrc_descriptor(ce) | CTX_DESC_FORCE_RESTORE;
>> diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c
>> b/drivers/gpu/drm/i915/gt/selftest_lrc.c
>> index 5f826b6dcf5d6f..e17b8777d21dc9 100644
>> --- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
>> +++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
>> @@ -1555,7 +1555,7 @@ static int live_lrc_isolation(void *arg)
>> return err;
>> }
>> -static int indirect_ctx_submit_req(struct intel_context *ce)
>> +static int wabb_ctx_submit_req(struct intel_context *ce)
>> {
>> struct i915_request *rq;
>> int err = 0;
>> @@ -1579,7 +1579,8 @@ static int indirect_ctx_submit_req(struct
>> intel_context *ce)
>> #define CTX_BB_CANARY_INDEX (CTX_BB_CANARY_OFFSET / sizeof(u32))
>> static u32 *
>> -emit_indirect_ctx_bb_canary(const struct intel_context *ce, u32 *cs)
>> +emit_wabb_ctx_canary(const struct intel_context *ce,
>> + u32 *cs, bool per_ctx)
>> {
>> *cs++ = MI_STORE_REGISTER_MEM_GEN8 |
>> MI_SRM_LRM_GLOBAL_GTT |
>> @@ -1587,26 +1588,43 @@ emit_indirect_ctx_bb_canary(const struct
>> intel_context *ce, u32 *cs)
>> *cs++ = i915_mmio_reg_offset(RING_START(0));
>> *cs++ = i915_ggtt_offset(ce->state) +
>> context_wa_bb_offset(ce) +
>> - CTX_BB_CANARY_OFFSET;
>> + CTX_BB_CANARY_OFFSET +
>> + (per_ctx ? PAGE_SIZE : 0);
>> *cs++ = 0;
>> return cs;
>> }
>> +static u32 *
>> +emit_indirect_ctx_bb_canary(const struct intel_context *ce, u32 *cs)
>> +{
>> + return emit_wabb_ctx_canary(ce, cs, false);
>> +}
>> +
>> +static u32 *
>> +emit_per_ctx_bb_canary(const struct intel_context *ce, u32 *cs)
>> +{
>> + return emit_wabb_ctx_canary(ce, cs, true);
>> +}
>> +
>> static void
>> -indirect_ctx_bb_setup(struct intel_context *ce)
>> +wabb_ctx_setup(struct intel_context *ce, bool per_ctx)
>> {
>> - u32 *cs = context_indirect_bb(ce);
>> + u32 *cs = context_wabb(ce, per_ctx);
>> cs[CTX_BB_CANARY_INDEX] = 0xdeadf00d;
>> - setup_indirect_ctx_bb(ce, ce->engine, emit_indirect_ctx_bb_canary);
>> + if (per_ctx)
>> + setup_per_ctx_bb(ce, ce->engine, emit_per_ctx_bb_canary);
>> + else
>> + setup_indirect_ctx_bb(ce, ce->engine,
>> emit_indirect_ctx_bb_canary);
>> }
>> -static bool check_ring_start(struct intel_context *ce)
>> +static bool check_ring_start(struct intel_context *ce, bool per_ctx)
>> {
>> const u32 * const ctx_bb = (void *)(ce->lrc_reg_state) -
>> - LRC_STATE_OFFSET + context_wa_bb_offset(ce);
>> + LRC_STATE_OFFSET + context_wa_bb_offset(ce) +
>> + (per_ctx ? PAGE_SIZE : 0);
>> if (ctx_bb[CTX_BB_CANARY_INDEX] ==
>> ce->lrc_reg_state[CTX_RING_START])
>> return true;
>> @@ -1618,21 +1636,21 @@ static bool check_ring_start(struct
>> intel_context *ce)
>> return false;
>> }
>> -static int indirect_ctx_bb_check(struct intel_context *ce)
>> +static int wabb_ctx_check(struct intel_context *ce, bool per_ctx)
>> {
>> int err;
>> - err = indirect_ctx_submit_req(ce);
>> + err = wabb_ctx_submit_req(ce);
>> if (err)
>> return err;
>> - if (!check_ring_start(ce))
>> + if (!check_ring_start(ce, per_ctx))
>> return -EINVAL;
>> return 0;
>> }
>> -static int __live_lrc_indirect_ctx_bb(struct intel_engine_cs *engine)
>> +static int __lrc_wabb_ctx(struct intel_engine_cs *engine, bool per_ctx)
>> {
>> struct intel_context *a, *b;
>> int err;
>> @@ -1667,14 +1685,14 @@ static int __live_lrc_indirect_ctx_bb(struct
>> intel_engine_cs *engine)
>> * As ring start is restored apriori of starting the indirect
>> ctx bb and
>> * as it will be different for each context, it fits to this
>> purpose.
>> */
>> - indirect_ctx_bb_setup(a);
>> - indirect_ctx_bb_setup(b);
>> + wabb_ctx_setup(a, per_ctx);
>> + wabb_ctx_setup(b, per_ctx);
>> - err = indirect_ctx_bb_check(a);
>> + err = wabb_ctx_check(a, per_ctx);
>> if (err)
>> goto unpin_b;
>> - err = indirect_ctx_bb_check(b);
>> + err = wabb_ctx_check(b, per_ctx);
>> unpin_b:
>> intel_context_unpin(b);
>> @@ -1688,7 +1706,7 @@ static int __live_lrc_indirect_ctx_bb(struct
>> intel_engine_cs *engine)
>> return err;
>> }
>> -static int live_lrc_indirect_ctx_bb(void *arg)
>> +static int lrc_wabb_ctx(void *arg, bool per_ctx)
>> {
>> struct intel_gt *gt = arg;
>> struct intel_engine_cs *engine;
>> @@ -1697,7 +1715,7 @@ static int live_lrc_indirect_ctx_bb(void *arg)
>> for_each_engine(engine, gt, id) {
>> intel_engine_pm_get(engine);
>> - err = __live_lrc_indirect_ctx_bb(engine);
>> + err = __lrc_wabb_ctx(engine, per_ctx);
>> intel_engine_pm_put(engine);
>> if (igt_flush_test(gt->i915))
>> @@ -1710,6 +1728,16 @@ static int live_lrc_indirect_ctx_bb(void *arg)
>> return err;
>> }
>> +static int live_lrc_indirect_ctx_bb(void *arg)
>> +{
>> + return lrc_wabb_ctx(arg, false);
>> +}
>> +
>> +static int live_lrc_per_ctx_bb(void *arg)
>> +{
>> + return lrc_wabb_ctx(arg, true);
>> +}
>> +
>> static void garbage_reset(struct intel_engine_cs *engine,
>> struct i915_request *rq)
>> {
>> @@ -1947,6 +1975,7 @@ int intel_lrc_live_selftests(struct
>> drm_i915_private *i915)
>> SUBTEST(live_lrc_garbage),
>> SUBTEST(live_pphwsp_runtime),
>> SUBTEST(live_lrc_indirect_ctx_bb),
>> + SUBTEST(live_lrc_per_ctx_bb),
>> };
>> if (!HAS_LOGICAL_RING_CONTEXTS(i915))
>>
end of thread, other threads:[~2023-10-23 19:08 UTC | newest]
Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-10-23 7:41 [Intel-gfx] [PATCH v3 0/4] Apply Wa_16018031267 / Wa_16018063123 Andrzej Hajda
2023-10-23 7:41 ` [Intel-gfx] [PATCH v3 1/4] drm/i915: Enable NULL PTE support for vm scratch Andrzej Hajda
2023-10-23 12:23 ` Nirmoy Das
2023-10-23 14:54 ` Andrzej Hajda
2023-10-23 7:41 ` [Intel-gfx] [PATCH v3 2/4] drm/i915: Reserve some kernel space per vm Andrzej Hajda
2023-10-23 8:49 ` Nirmoy Das
2023-10-23 11:40 ` Andrzej Hajda
2023-10-23 7:41 ` [Intel-gfx] [PATCH v3 3/4] drm/i915: Add WABB blit for Wa_16018031267 / Wa_16018063123 Andrzej Hajda
2023-10-23 9:05 ` Nirmoy Das
2023-10-23 19:08 ` Andrzej Hajda
2023-10-23 7:41 ` [Intel-gfx] [PATCH v3 4/4] drm/i915: Set copy engine arbitration " Andrzej Hajda
2023-10-23 9:55 ` Nirmoy Das
2023-10-23 15:24 ` Andrzej Hajda
2023-10-23 16:06 ` Nirmoy Das
2023-10-23 8:38 ` [Intel-gfx] [PATCH v3 0/4] Apply " Nirmoy Das
2023-10-23 11:35 ` Andrzej Hajda
2023-10-23 12:20 ` Nirmoy Das