[Intel-xe] [PATCH v2 0/4] Convert xe_mmio to struct xe_reg
From: Lucas De Marchi @ 2023-05-08 22:53 UTC
  To: intel-xe; +Cc: Lucas De Marchi, Rodrigo Vivi

Now that struct xe_reg is in place, convert xe_mmio to use it so we
avoid mistakes like passing the wrong argument for a register.
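
To illustrate why (a minimal sketch only, assuming a simplified
definition; the real struct in regs/xe_reg_defs.h carries more
fields, e.g. .masked):

	struct xe_reg {
		u32 reg;	/* register offset in MMIO space */
	};
	#define XE_REG(r_)	((struct xe_reg){ .reg = (r_) })

	/* with plain u32, offset and value swap silently: */
	xe_mmio_write32(gt, 0x1234, GDRST.reg);	/* compiles fine */

	/* with struct xe_reg, only the correct order type-checks: */
	xe_mmio_write32(gt, GDRST, 0x1234);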

v2:
  - First 2 patches from v1 already applied
  - Drop controversial patch, "drm/xe: Use media base for GMD_ID access"
  - Rebase on latest force pushes with display refactors

Lucas De Marchi (4):
  drm/xe/mmio: Use struct xe_reg
  fixup! drm/xe/display: Implement display support
  drm/xe: Rename reg field to addr
  drm/xe: Fix indent in xe_hw_engine_print_state()

 .../drm/xe/compat-i915-headers/intel_uncore.h | 103 +++++++++----
 drivers/gpu/drm/xe/regs/xe_reg_defs.h         |   6 +-
 drivers/gpu/drm/xe/tests/xe_rtp_test.c        |   2 +-
 drivers/gpu/drm/xe/xe_device.c                |   2 +-
 drivers/gpu/drm/xe/xe_execlist.c              |  18 +--
 drivers/gpu/drm/xe/xe_force_wake.c            |  25 ++--
 drivers/gpu/drm/xe/xe_force_wake_types.h      |   6 +-
 drivers/gpu/drm/xe/xe_ggtt.c                  |   6 +-
 drivers/gpu/drm/xe/xe_gt.c                    |   4 +-
 drivers/gpu/drm/xe/xe_gt_clock.c              |   6 +-
 drivers/gpu/drm/xe/xe_gt_mcr.c                |  39 ++---
 drivers/gpu/drm/xe/xe_gt_topology.c           |  18 +--
 drivers/gpu/drm/xe/xe_guc.c                   |  61 ++++----
 drivers/gpu/drm/xe/xe_guc_ads.c               |   5 +-
 drivers/gpu/drm/xe/xe_guc_pc.c                |  32 ++--
 drivers/gpu/drm/xe/xe_guc_types.h             |   3 +-
 drivers/gpu/drm/xe/xe_huc.c                   |   4 +-
 drivers/gpu/drm/xe/xe_hw_engine.c             | 103 +++++++------
 drivers/gpu/drm/xe/xe_irq.c                   | 140 +++++++++---------
 drivers/gpu/drm/xe/xe_mmio.c                  |  33 +++--
 drivers/gpu/drm/xe/xe_mmio.h                  |  55 +++----
 drivers/gpu/drm/xe/xe_mocs.c                  |  11 +-
 drivers/gpu/drm/xe/xe_pat.c                   |  14 +-
 drivers/gpu/drm/xe/xe_pci.c                   |   4 +-
 drivers/gpu/drm/xe/xe_pcode.c                 |  16 +-
 drivers/gpu/drm/xe/xe_reg_sr.c                |  18 ++-
 drivers/gpu/drm/xe/xe_ring_ops.c              |  11 +-
 drivers/gpu/drm/xe/xe_rtp.c                   |   2 +-
 drivers/gpu/drm/xe/xe_ttm_stolen_mgr.c        |   4 +-
 drivers/gpu/drm/xe/xe_uc_fw.c                 |  16 +-
 drivers/gpu/drm/xe/xe_wopcm.c                 |  16 +-
 31 files changed, 429 insertions(+), 354 deletions(-)

-- 
2.40.1



[Intel-xe] [PATCH v2 1/4] drm/xe/mmio: Use struct xe_reg
From: Lucas De Marchi @ 2023-05-08 22:53 UTC
  To: intel-xe; +Cc: Lucas De Marchi, Rodrigo Vivi

Convert all callers of xe_mmio_*() to take a struct xe_reg instead of
a plain u32. In a few places, also rename s/reg/reg_val/ for the
variable holding the returned value so it doesn't get mixed up with
the register address.
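
To give an idea of the shape of the conversion (a condensed,
hypothetical example; the real hunks are below):

	/* before: the register argument was its raw offset */
	u32 reg = xe_mmio_read32(gt, CTC_MODE.reg);

	/* after: the register stays typed, the value gets renamed */
	u32 reg_val = xe_mmio_read32(gt, CTC_MODE);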

Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/xe/xe_device.c           |   2 +-
 drivers/gpu/drm/xe/xe_execlist.c         |  18 +--
 drivers/gpu/drm/xe/xe_force_wake.c       |  25 ++--
 drivers/gpu/drm/xe/xe_force_wake_types.h |   6 +-
 drivers/gpu/drm/xe/xe_ggtt.c             |   6 +-
 drivers/gpu/drm/xe/xe_gt.c               |   4 +-
 drivers/gpu/drm/xe/xe_gt_clock.c         |   6 +-
 drivers/gpu/drm/xe/xe_gt_mcr.c           |  37 +++---
 drivers/gpu/drm/xe/xe_gt_topology.c      |  18 +--
 drivers/gpu/drm/xe/xe_guc.c              |  61 +++++-----
 drivers/gpu/drm/xe/xe_guc_ads.c          |   3 +-
 drivers/gpu/drm/xe/xe_guc_pc.c           |  32 +++---
 drivers/gpu/drm/xe/xe_guc_types.h        |   3 +-
 drivers/gpu/drm/xe/xe_huc.c              |   4 +-
 drivers/gpu/drm/xe/xe_hw_engine.c        |  85 +++++++-------
 drivers/gpu/drm/xe/xe_irq.c              | 138 +++++++++++------------
 drivers/gpu/drm/xe/xe_mmio.c             |  31 +++--
 drivers/gpu/drm/xe/xe_mmio.h             |  55 ++++-----
 drivers/gpu/drm/xe/xe_mocs.c             |   7 +-
 drivers/gpu/drm/xe/xe_pat.c              |  14 ++-
 drivers/gpu/drm/xe/xe_pcode.c            |  16 +--
 drivers/gpu/drm/xe/xe_reg_sr.c           |  14 ++-
 drivers/gpu/drm/xe/xe_ring_ops.c         |  11 +-
 drivers/gpu/drm/xe/xe_ttm_stolen_mgr.c   |   4 +-
 drivers/gpu/drm/xe/xe_uc_fw.c            |  16 +--
 drivers/gpu/drm/xe/xe_wopcm.c            |  12 +-
 26 files changed, 329 insertions(+), 299 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 00f1d9e386f1..342e3362b75f 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -393,7 +393,7 @@ void xe_device_wmb(struct xe_device *xe)
 
 	wmb();
 	if (IS_DGFX(xe))
-		xe_mmio_write32(gt, SOFTWARE_FLAGS_SPR33.reg, 0);
+		xe_mmio_write32(gt, SOFTWARE_FLAGS_SPR33, 0);
 }
 
 u32 xe_device_ccs_bytes(struct xe_device *xe, u64 size)
diff --git a/drivers/gpu/drm/xe/xe_execlist.c b/drivers/gpu/drm/xe/xe_execlist.c
index de4f0044b211..5d2d26e361b9 100644
--- a/drivers/gpu/drm/xe/xe_execlist.c
+++ b/drivers/gpu/drm/xe/xe_execlist.c
@@ -60,7 +60,7 @@ static void __start_lrc(struct xe_hw_engine *hwe, struct xe_lrc *lrc,
 	}
 
 	if (hwe->class == XE_ENGINE_CLASS_COMPUTE)
-		xe_mmio_write32(hwe->gt, RCU_MODE.reg,
+		xe_mmio_write32(hwe->gt, RCU_MODE,
 				_MASKED_BIT_ENABLE(RCU_MODE_CCS_ENABLE));
 
 	xe_lrc_write_ctx_reg(lrc, CTX_RING_TAIL, lrc->ring.tail);
@@ -78,17 +78,17 @@ static void __start_lrc(struct xe_hw_engine *hwe, struct xe_lrc *lrc,
 	 */
 	wmb();
 
-	xe_mmio_write32(gt, RING_HWS_PGA(hwe->mmio_base).reg,
+	xe_mmio_write32(gt, RING_HWS_PGA(hwe->mmio_base),
 			xe_bo_ggtt_addr(hwe->hwsp));
-	xe_mmio_read32(gt, RING_HWS_PGA(hwe->mmio_base).reg);
-	xe_mmio_write32(gt, RING_MODE(hwe->mmio_base).reg,
+	xe_mmio_read32(gt, RING_HWS_PGA(hwe->mmio_base));
+	xe_mmio_write32(gt, RING_MODE(hwe->mmio_base),
 			_MASKED_BIT_ENABLE(GFX_DISABLE_LEGACY_MODE));
 
-	xe_mmio_write32(gt, RING_EXECLIST_SQ_CONTENTS_LO(hwe->mmio_base).reg,
+	xe_mmio_write32(gt, RING_EXECLIST_SQ_CONTENTS_LO(hwe->mmio_base),
 			lower_32_bits(lrc_desc));
-	xe_mmio_write32(gt, RING_EXECLIST_SQ_CONTENTS_HI(hwe->mmio_base).reg,
+	xe_mmio_write32(gt, RING_EXECLIST_SQ_CONTENTS_HI(hwe->mmio_base),
 			upper_32_bits(lrc_desc));
-	xe_mmio_write32(gt, RING_EXECLIST_CONTROL(hwe->mmio_base).reg,
+	xe_mmio_write32(gt, RING_EXECLIST_CONTROL(hwe->mmio_base),
 			EL_CTRL_LOAD);
 }
 
@@ -173,8 +173,8 @@ static u64 read_execlist_status(struct xe_hw_engine *hwe)
 	struct xe_gt *gt = hwe->gt;
 	u32 hi, lo;
 
-	lo = xe_mmio_read32(gt, RING_EXECLIST_STATUS_LO(hwe->mmio_base).reg);
-	hi = xe_mmio_read32(gt, RING_EXECLIST_STATUS_HI(hwe->mmio_base).reg);
+	lo = xe_mmio_read32(gt, RING_EXECLIST_STATUS_LO(hwe->mmio_base));
+	hi = xe_mmio_read32(gt, RING_EXECLIST_STATUS_HI(hwe->mmio_base));
 
 	printk(KERN_INFO "EXECLIST_STATUS %d:%d = 0x%08x %08x\n", hwe->class,
 	       hwe->instance, hi, lo);
diff --git a/drivers/gpu/drm/xe/xe_force_wake.c b/drivers/gpu/drm/xe/xe_force_wake.c
index 53d73f36a121..363b81c3d746 100644
--- a/drivers/gpu/drm/xe/xe_force_wake.c
+++ b/drivers/gpu/drm/xe/xe_force_wake.c
@@ -8,6 +8,7 @@
 #include <drm/drm_util.h>
 
 #include "regs/xe_gt_regs.h"
+#include "regs/xe_reg_defs.h"
 #include "xe_gt.h"
 #include "xe_mmio.h"
 
@@ -27,7 +28,7 @@ fw_to_xe(struct xe_force_wake *fw)
 
 static void domain_init(struct xe_force_wake_domain *domain,
 			enum xe_force_wake_domain_id id,
-			u32 reg, u32 ack, u32 val, u32 mask)
+			struct xe_reg reg, struct xe_reg ack, u32 val, u32 mask)
 {
 	domain->id = id;
 	domain->reg_ctl = reg;
@@ -49,14 +50,14 @@ void xe_force_wake_init_gt(struct xe_gt *gt, struct xe_force_wake *fw)
 	if (xe->info.graphics_verx100 >= 1270) {
 		domain_init(&fw->domains[XE_FW_DOMAIN_ID_GT],
 			    XE_FW_DOMAIN_ID_GT,
-			    FORCEWAKE_GT.reg,
-			    FORCEWAKE_ACK_GT_MTL.reg,
+			    FORCEWAKE_GT,
+			    FORCEWAKE_ACK_GT_MTL,
 			    BIT(0), BIT(16));
 	} else {
 		domain_init(&fw->domains[XE_FW_DOMAIN_ID_GT],
 			    XE_FW_DOMAIN_ID_GT,
-			    FORCEWAKE_GT.reg,
-			    FORCEWAKE_ACK_GT.reg,
+			    FORCEWAKE_GT,
+			    FORCEWAKE_ACK_GT,
 			    BIT(0), BIT(16));
 	}
 }
@@ -71,8 +72,8 @@ void xe_force_wake_init_engines(struct xe_gt *gt, struct xe_force_wake *fw)
 	if (!xe_gt_is_media_type(gt))
 		domain_init(&fw->domains[XE_FW_DOMAIN_ID_RENDER],
 			    XE_FW_DOMAIN_ID_RENDER,
-			    FORCEWAKE_RENDER.reg,
-			    FORCEWAKE_ACK_RENDER.reg,
+			    FORCEWAKE_RENDER,
+			    FORCEWAKE_ACK_RENDER,
 			    BIT(0), BIT(16));
 
 	for (i = XE_HW_ENGINE_VCS0, j = 0; i <= XE_HW_ENGINE_VCS7; ++i, ++j) {
@@ -81,8 +82,8 @@ void xe_force_wake_init_engines(struct xe_gt *gt, struct xe_force_wake *fw)
 
 		domain_init(&fw->domains[XE_FW_DOMAIN_ID_MEDIA_VDBOX0 + j],
 			    XE_FW_DOMAIN_ID_MEDIA_VDBOX0 + j,
-			    FORCEWAKE_MEDIA_VDBOX(j).reg,
-			    FORCEWAKE_ACK_MEDIA_VDBOX(j).reg,
+			    FORCEWAKE_MEDIA_VDBOX(j),
+			    FORCEWAKE_ACK_MEDIA_VDBOX(j),
 			    BIT(0), BIT(16));
 	}
 
@@ -92,8 +93,8 @@ void xe_force_wake_init_engines(struct xe_gt *gt, struct xe_force_wake *fw)
 
 		domain_init(&fw->domains[XE_FW_DOMAIN_ID_MEDIA_VEBOX0 + j],
 			    XE_FW_DOMAIN_ID_MEDIA_VEBOX0 + j,
-			    FORCEWAKE_MEDIA_VEBOX(j).reg,
-			    FORCEWAKE_ACK_MEDIA_VEBOX(j).reg,
+			    FORCEWAKE_MEDIA_VEBOX(j),
+			    FORCEWAKE_ACK_MEDIA_VEBOX(j),
 			    BIT(0), BIT(16));
 	}
 }
@@ -128,7 +129,7 @@ static int domain_sleep_wait(struct xe_gt *gt,
 	for (tmp__ = (mask__); tmp__; tmp__ &= ~BIT(ffs(tmp__) - 1)) \
 		for_each_if((domain__ = ((fw__)->domains + \
 					 (ffs(tmp__) - 1))) && \
-					 domain__->reg_ctl)
+					 domain__->reg_ctl.reg)
 
 int xe_force_wake_get(struct xe_force_wake *fw,
 		      enum xe_force_wake_domains domains)
diff --git a/drivers/gpu/drm/xe/xe_force_wake_types.h b/drivers/gpu/drm/xe/xe_force_wake_types.h
index 208dd629d7b1..cb782696855b 100644
--- a/drivers/gpu/drm/xe/xe_force_wake_types.h
+++ b/drivers/gpu/drm/xe/xe_force_wake_types.h
@@ -9,6 +9,8 @@
 #include <linux/mutex.h>
 #include <linux/types.h>
 
+#include "regs/xe_reg_defs.h"
+
 enum xe_force_wake_domain_id {
 	XE_FW_DOMAIN_ID_GT = 0,
 	XE_FW_DOMAIN_ID_RENDER,
@@ -56,9 +58,9 @@ struct xe_force_wake_domain {
 	/** @id: domain force wake id */
 	enum xe_force_wake_domain_id id;
 	/** @reg_ctl: domain wake control register address */
-	u32 reg_ctl;
+	struct xe_reg reg_ctl;
 	/** @reg_ack: domain ack register address */
-	u32 reg_ack;
+	struct xe_reg reg_ack;
 	/** @val: domain wake write value */
 	u32 val;
 	/** @mask: domain mask */
diff --git a/drivers/gpu/drm/xe/xe_ggtt.c b/drivers/gpu/drm/xe/xe_ggtt.c
index 9c08031c9350..546240261e0a 100644
--- a/drivers/gpu/drm/xe/xe_ggtt.c
+++ b/drivers/gpu/drm/xe/xe_ggtt.c
@@ -207,12 +207,12 @@ void xe_ggtt_invalidate(struct xe_gt *gt)
 		struct xe_device *xe = gt_to_xe(gt);
 
 		if (xe->info.platform == XE_PVC) {
-			xe_mmio_write32(gt, PVC_GUC_TLB_INV_DESC1.reg,
+			xe_mmio_write32(gt, PVC_GUC_TLB_INV_DESC1,
 					PVC_GUC_TLB_INV_DESC1_INVALIDATE);
-			xe_mmio_write32(gt, PVC_GUC_TLB_INV_DESC0.reg,
+			xe_mmio_write32(gt, PVC_GUC_TLB_INV_DESC0,
 					PVC_GUC_TLB_INV_DESC0_VALID);
 		} else
-			xe_mmio_write32(gt, GUC_TLB_INV_CR.reg,
+			xe_mmio_write32(gt, GUC_TLB_INV_CR,
 					GUC_TLB_INV_CR_INVALIDATE);
 	}
 }
diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
index 3afca3dd9657..cbe063a40aca 100644
--- a/drivers/gpu/drm/xe/xe_gt.c
+++ b/drivers/gpu/drm/xe/xe_gt.c
@@ -544,8 +544,8 @@ static int do_gt_reset(struct xe_gt *gt)
 	struct xe_device *xe = gt_to_xe(gt);
 	int err;
 
-	xe_mmio_write32(gt, GDRST.reg, GRDOM_FULL);
-	err = xe_mmio_wait32(gt, GDRST.reg, 0, GRDOM_FULL, 5000,
+	xe_mmio_write32(gt, GDRST, GRDOM_FULL);
+	err = xe_mmio_wait32(gt, GDRST, 0, GRDOM_FULL, 5000,
 			     NULL, false);
 	if (err)
 		drm_err(&xe->drm,
diff --git a/drivers/gpu/drm/xe/xe_gt_clock.c b/drivers/gpu/drm/xe/xe_gt_clock.c
index 49625d49bdcc..7cf11078ff57 100644
--- a/drivers/gpu/drm/xe/xe_gt_clock.c
+++ b/drivers/gpu/drm/xe/xe_gt_clock.c
@@ -14,7 +14,7 @@
 
 static u32 read_reference_ts_freq(struct xe_gt *gt)
 {
-	u32 ts_override = xe_mmio_read32(gt, TIMESTAMP_OVERRIDE.reg);
+	u32 ts_override = xe_mmio_read32(gt, TIMESTAMP_OVERRIDE);
 	u32 base_freq, frac_freq;
 
 	base_freq = REG_FIELD_GET(TIMESTAMP_OVERRIDE_US_COUNTER_DIVIDER_MASK,
@@ -54,7 +54,7 @@ static u32 get_crystal_clock_freq(u32 rpm_config_reg)
 
 int xe_gt_clock_init(struct xe_gt *gt)
 {
-	u32 ctc_reg = xe_mmio_read32(gt, CTC_MODE.reg);
+	u32 ctc_reg = xe_mmio_read32(gt, CTC_MODE);
 	u32 freq = 0;
 
 	/* Assuming gen11+ so assert this assumption is correct */
@@ -63,7 +63,7 @@ int xe_gt_clock_init(struct xe_gt *gt)
 	if (ctc_reg & CTC_SOURCE_DIVIDE_LOGIC) {
 		freq = read_reference_ts_freq(gt);
 	} else {
-		u32 c0 = xe_mmio_read32(gt, RPM_CONFIG0.reg);
+		u32 c0 = xe_mmio_read32(gt, RPM_CONFIG0);
 
 		freq = get_crystal_clock_freq(c0);
 
diff --git a/drivers/gpu/drm/xe/xe_gt_mcr.c b/drivers/gpu/drm/xe/xe_gt_mcr.c
index 125c63bdc9b5..c6b9e9869fee 100644
--- a/drivers/gpu/drm/xe/xe_gt_mcr.c
+++ b/drivers/gpu/drm/xe/xe_gt_mcr.c
@@ -40,6 +40,8 @@
  * non-terminated instance.
  */
 
+#define STEER_SEMAPHORE		XE_REG(0xFD0)
+
 static inline struct xe_reg to_xe_reg(struct xe_reg_mcr reg_mcr)
 {
 	return reg_mcr.__reg;
@@ -183,9 +185,9 @@ static void init_steering_l3bank(struct xe_gt *gt)
 {
 	if (GRAPHICS_VERx100(gt_to_xe(gt)) >= 1270) {
 		u32 mslice_mask = REG_FIELD_GET(MEML3_EN_MASK,
-						xe_mmio_read32(gt, MIRROR_FUSE3.reg));
+						xe_mmio_read32(gt, MIRROR_FUSE3));
 		u32 bank_mask = REG_FIELD_GET(GT_L3_EXC_MASK,
-					      xe_mmio_read32(gt, XEHP_FUSE4.reg));
+					      xe_mmio_read32(gt, XEHP_FUSE4));
 
 		/*
 		 * Group selects mslice, instance selects bank within mslice.
@@ -196,7 +198,7 @@ static void init_steering_l3bank(struct xe_gt *gt)
 			bank_mask & BIT(0) ? 0 : 2;
 	} else if (gt_to_xe(gt)->info.platform == XE_DG2) {
 		u32 mslice_mask = REG_FIELD_GET(MEML3_EN_MASK,
-						xe_mmio_read32(gt, MIRROR_FUSE3.reg));
+						xe_mmio_read32(gt, MIRROR_FUSE3));
 		u32 bank = __ffs(mslice_mask) * 8;
 
 		/*
@@ -208,7 +210,7 @@ static void init_steering_l3bank(struct xe_gt *gt)
 		gt->steering[L3BANK].instance_target = bank & 0x3;
 	} else {
 		u32 fuse = REG_FIELD_GET(L3BANK_MASK,
-					 ~xe_mmio_read32(gt, MIRROR_FUSE3.reg));
+					 ~xe_mmio_read32(gt, MIRROR_FUSE3));
 
 		gt->steering[L3BANK].group_target = 0;	/* unused */
 		gt->steering[L3BANK].instance_target = __ffs(fuse);
@@ -218,7 +220,7 @@ static void init_steering_l3bank(struct xe_gt *gt)
 static void init_steering_mslice(struct xe_gt *gt)
 {
 	u32 mask = REG_FIELD_GET(MEML3_EN_MASK,
-				 xe_mmio_read32(gt, MIRROR_FUSE3.reg));
+				 xe_mmio_read32(gt, MIRROR_FUSE3));
 
 	/*
 	 * mslice registers are valid (not terminated) if either the meml3
@@ -337,8 +339,8 @@ void xe_gt_mcr_set_implicit_defaults(struct xe_gt *gt)
 		u32 steer_val = REG_FIELD_PREP(MCR_SLICE_MASK, 0) |
 			REG_FIELD_PREP(MCR_SUBSLICE_MASK, 2);
 
-		xe_mmio_write32(gt, MCFG_MCR_SELECTOR.reg, steer_val);
-		xe_mmio_write32(gt, SF_MCR_SELECTOR.reg, steer_val);
+		xe_mmio_write32(gt, MCFG_MCR_SELECTOR, steer_val);
+		xe_mmio_write32(gt, SF_MCR_SELECTOR, steer_val);
 		/*
 		 * For GAM registers, all reads should be directed to instance 1
 		 * (unicast reads against other instances are not allowed),
@@ -376,7 +378,7 @@ static bool xe_gt_mcr_get_nonterminated_steering(struct xe_gt *gt,
 			continue;
 
 		for (int i = 0; gt->steering[type].ranges[i].end > 0; i++) {
-			if (xe_mmio_in_range(&gt->steering[type].ranges[i], reg.reg)) {
+			if (xe_mmio_in_range(&gt->steering[type].ranges[i], reg)) {
 				*group = gt->steering[type].group_target;
 				*instance = gt->steering[type].instance_target;
 				return true;
@@ -387,7 +389,7 @@ static bool xe_gt_mcr_get_nonterminated_steering(struct xe_gt *gt,
 	implicit_ranges = gt->steering[IMPLICIT_STEERING].ranges;
 	if (implicit_ranges)
 		for (int i = 0; implicit_ranges[i].end > 0; i++)
-			if (xe_mmio_in_range(&implicit_ranges[i], reg.reg))
+			if (xe_mmio_in_range(&implicit_ranges[i], reg))
 				return false;
 
 	/*
@@ -403,8 +405,6 @@ static bool xe_gt_mcr_get_nonterminated_steering(struct xe_gt *gt,
 	return true;
 }
 
-#define STEER_SEMAPHORE		0xFD0
-
 /*
  * Obtain exclusive access to MCR steering.  On MTL and beyond we also need
  * to synchronize with external clients (e.g., firmware), so a semaphore
@@ -446,16 +446,17 @@ static u32 rw_with_mcr_steering(struct xe_gt *gt, struct xe_reg_mcr reg_mcr,
 				u8 rw_flag, int group, int instance, u32 value)
 {
 	const struct xe_reg reg = to_xe_reg(reg_mcr);
-	u32 steer_reg, steer_val, val = 0;
+	struct xe_reg steer_reg;
+	u32 steer_val, val = 0;
 
 	lockdep_assert_held(&gt->mcr_lock);
 
 	if (GRAPHICS_VERx100(gt_to_xe(gt)) >= 1270) {
-		steer_reg = MTL_MCR_SELECTOR.reg;
+		steer_reg = MTL_MCR_SELECTOR;
 		steer_val = REG_FIELD_PREP(MTL_MCR_GROUPID, group) |
 			REG_FIELD_PREP(MTL_MCR_INSTANCEID, instance);
 	} else {
-		steer_reg = MCR_SELECTOR.reg;
+		steer_reg = MCR_SELECTOR;
 		steer_val = REG_FIELD_PREP(MCR_SLICE_MASK, group) |
 			REG_FIELD_PREP(MCR_SUBSLICE_MASK, instance);
 	}
@@ -480,9 +481,9 @@ static u32 rw_with_mcr_steering(struct xe_gt *gt, struct xe_reg_mcr reg_mcr,
 	xe_mmio_write32(gt, steer_reg, steer_val);
 
 	if (rw_flag == MCR_OP_READ)
-		val = xe_mmio_read32(gt, reg.reg);
+		val = xe_mmio_read32(gt, reg);
 	else
-		xe_mmio_write32(gt, reg.reg, value);
+		xe_mmio_write32(gt, reg, value);
 
 	/*
 	 * If we turned off the multicast bit (during a write) we're required
@@ -524,7 +525,7 @@ u32 xe_gt_mcr_unicast_read_any(struct xe_gt *gt, struct xe_reg_mcr reg_mcr)
 					   group, instance, 0);
 		mcr_unlock(gt);
 	} else {
-		val = xe_mmio_read32(gt, reg.reg);
+		val = xe_mmio_read32(gt, reg);
 	}
 
 	return val;
@@ -591,7 +592,7 @@ void xe_gt_mcr_multicast_write(struct xe_gt *gt, struct xe_reg_mcr reg_mcr,
 	 * to touch the steering register.
 	 */
 	mcr_lock(gt);
-	xe_mmio_write32(gt, reg.reg, value);
+	xe_mmio_write32(gt, reg, value);
 	mcr_unlock(gt);
 }
 
diff --git a/drivers/gpu/drm/xe/xe_gt_topology.c b/drivers/gpu/drm/xe/xe_gt_topology.c
index 14cf135fd648..7c3e347e4d74 100644
--- a/drivers/gpu/drm/xe/xe_gt_topology.c
+++ b/drivers/gpu/drm/xe/xe_gt_topology.c
@@ -26,7 +26,7 @@ load_dss_mask(struct xe_gt *gt, xe_dss_mask_t mask, int numregs, ...)
 
 	va_start(argp, numregs);
 	for (i = 0; i < numregs; i++)
-		fuse_val[i] = xe_mmio_read32(gt, va_arg(argp, u32));
+		fuse_val[i] = xe_mmio_read32(gt, va_arg(argp, struct xe_reg));
 	va_end(argp);
 
 	bitmap_from_arr32(mask, fuse_val, numregs * 32);
@@ -36,7 +36,7 @@ static void
 load_eu_mask(struct xe_gt *gt, xe_eu_mask_t mask)
 {
 	struct xe_device *xe = gt_to_xe(gt);
-	u32 reg = xe_mmio_read32(gt, XELP_EU_ENABLE.reg);
+	u32 reg_val = xe_mmio_read32(gt, XELP_EU_ENABLE);
 	u32 val = 0;
 	int i;
 
@@ -47,15 +47,15 @@ load_eu_mask(struct xe_gt *gt, xe_eu_mask_t mask)
 	 * of enable).
 	 */
 	if (GRAPHICS_VERx100(xe) < 1250)
-		reg = ~reg & XELP_EU_MASK;
+		reg_val = ~reg_val & XELP_EU_MASK;
 
 	/* On PVC, one bit = one EU */
 	if (GRAPHICS_VERx100(xe) == 1260) {
-		val = reg;
+		val = reg_val;
 	} else {
 		/* All other platforms, one bit = 2 EU */
-		for (i = 0; i < fls(reg); i++)
-			if (reg & BIT(i))
+		for (i = 0; i < fls(reg_val); i++)
+			if (reg_val & BIT(i))
 				val |= 0x3 << 2 * i;
 	}
 
@@ -95,10 +95,10 @@ xe_gt_topology_init(struct xe_gt *gt)
 
 	load_dss_mask(gt, gt->fuse_topo.g_dss_mask,
 		      num_geometry_regs,
-		      XELP_GT_GEOMETRY_DSS_ENABLE.reg);
+		      XELP_GT_GEOMETRY_DSS_ENABLE);
 	load_dss_mask(gt, gt->fuse_topo.c_dss_mask, num_compute_regs,
-		      XEHP_GT_COMPUTE_DSS_ENABLE.reg,
-		      XEHPC_GT_COMPUTE_DSS_ENABLE_EXT.reg);
+		      XEHP_GT_COMPUTE_DSS_ENABLE,
+		      XEHPC_GT_COMPUTE_DSS_ENABLE_EXT);
 	load_eu_mask(gt, gt->fuse_topo.eu_mask_per_dss);
 
 	xe_gt_topology_dump(gt, &p);
diff --git a/drivers/gpu/drm/xe/xe_guc.c b/drivers/gpu/drm/xe/xe_guc.c
index 62b4fcf84acf..e8a126ad400f 100644
--- a/drivers/gpu/drm/xe/xe_guc.c
+++ b/drivers/gpu/drm/xe/xe_guc.c
@@ -232,10 +232,10 @@ static void guc_write_params(struct xe_guc *guc)
 
 	xe_force_wake_assert_held(gt_to_fw(gt), XE_FW_GT);
 
-	xe_mmio_write32(gt, SOFT_SCRATCH(0).reg, 0);
+	xe_mmio_write32(gt, SOFT_SCRATCH(0), 0);
 
 	for (i = 0; i < GUC_CTL_MAX_DWORDS; i++)
-		xe_mmio_write32(gt, SOFT_SCRATCH(1 + i).reg, guc->params[i]);
+		xe_mmio_write32(gt, SOFT_SCRATCH(1 + i), guc->params[i]);
 }
 
 int xe_guc_init(struct xe_guc *guc)
@@ -268,9 +268,9 @@ int xe_guc_init(struct xe_guc *guc)
 	guc_init_params(guc);
 
 	if (xe_gt_is_media_type(gt))
-		guc->notify_reg = MEDIA_GUC_HOST_INTERRUPT.reg;
+		guc->notify_reg = MEDIA_GUC_HOST_INTERRUPT;
 	else
-		guc->notify_reg = GUC_HOST_INTERRUPT.reg;
+		guc->notify_reg = GUC_HOST_INTERRUPT;
 
 	xe_uc_fw_change_status(&guc->fw, XE_UC_FIRMWARE_LOADABLE);
 
@@ -309,9 +309,9 @@ int xe_guc_reset(struct xe_guc *guc)
 
 	xe_force_wake_assert_held(gt_to_fw(gt), XE_FW_GT);
 
-	xe_mmio_write32(gt, GDRST.reg, GRDOM_GUC);
+	xe_mmio_write32(gt, GDRST, GRDOM_GUC);
 
-	ret = xe_mmio_wait32(gt, GDRST.reg, 0, GRDOM_GUC, 5000,
+	ret = xe_mmio_wait32(gt, GDRST, 0, GRDOM_GUC, 5000,
 			     &gdrst, false);
 	if (ret) {
 		drm_err(&xe->drm, "GuC reset timed out, GEN6_GDRST=0x%8x\n",
@@ -319,7 +319,7 @@ int xe_guc_reset(struct xe_guc *guc)
 		goto err_out;
 	}
 
-	guc_status = xe_mmio_read32(gt, GUC_STATUS.reg);
+	guc_status = xe_mmio_read32(gt, GUC_STATUS);
 	if (!(guc_status & GS_MIA_IN_RESET)) {
 		drm_err(&xe->drm,
 			"GuC status: 0x%x, MIA core expected to be in reset\n",
@@ -352,9 +352,9 @@ static void guc_prepare_xfer(struct xe_guc *guc)
 		shim_flags |= PVC_GUC_MOCS_INDEX(PVC_GUC_MOCS_UC_INDEX);
 
 	/* Must program this register before loading the ucode with DMA */
-	xe_mmio_write32(gt, GUC_SHIM_CONTROL.reg, shim_flags);
+	xe_mmio_write32(gt, GUC_SHIM_CONTROL, shim_flags);
 
-	xe_mmio_write32(gt, GT_PM_CONFIG.reg, GT_DOORBELL_ENABLE);
+	xe_mmio_write32(gt, GT_PM_CONFIG, GT_DOORBELL_ENABLE);
 }
 
 /*
@@ -370,7 +370,7 @@ static int guc_xfer_rsa(struct xe_guc *guc)
 	if (guc->fw.rsa_size > 256) {
 		u32 rsa_ggtt_addr = xe_bo_ggtt_addr(guc->fw.bo) +
 				    xe_uc_fw_rsa_offset(&guc->fw);
-		xe_mmio_write32(gt, UOS_RSA_SCRATCH(0).reg, rsa_ggtt_addr);
+		xe_mmio_write32(gt, UOS_RSA_SCRATCH(0), rsa_ggtt_addr);
 		return 0;
 	}
 
@@ -379,7 +379,7 @@ static int guc_xfer_rsa(struct xe_guc *guc)
 		return -ENOMEM;
 
 	for (i = 0; i < UOS_RSA_SCRATCH_COUNT; i++)
-		xe_mmio_write32(gt, UOS_RSA_SCRATCH(i).reg, rsa[i]);
+		xe_mmio_write32(gt, UOS_RSA_SCRATCH(i), rsa[i]);
 
 	return 0;
 }
@@ -407,7 +407,7 @@ static int guc_wait_ucode(struct xe_guc *guc)
 	 * 200ms. Even at slowest clock, this should be sufficient. And
 	 * in the working case, a larger timeout makes no difference.
 	 */
-	ret = xe_mmio_wait32(guc_to_gt(guc), GUC_STATUS.reg,
+	ret = xe_mmio_wait32(guc_to_gt(guc), GUC_STATUS,
 			     FIELD_PREP(GS_UKERNEL_MASK,
 					XE_GUC_LOAD_STATUS_READY),
 			     GS_UKERNEL_MASK, 200000, &status, false);
@@ -435,7 +435,7 @@ static int guc_wait_ucode(struct xe_guc *guc)
 		    XE_GUC_LOAD_STATUS_EXCEPTION) {
 			drm_info(drm, "GuC firmware exception. EIP: %#x\n",
 				 xe_mmio_read32(guc_to_gt(guc),
-						SOFT_SCRATCH(13).reg));
+						SOFT_SCRATCH(13)));
 			ret = -ENXIO;
 		}
 
@@ -532,10 +532,10 @@ static void guc_handle_mmio_msg(struct xe_guc *guc)
 
 	xe_force_wake_assert_held(gt_to_fw(gt), XE_FW_GT);
 
-	msg = xe_mmio_read32(gt, SOFT_SCRATCH(15).reg);
+	msg = xe_mmio_read32(gt, SOFT_SCRATCH(15));
 	msg &= XE_GUC_RECV_MSG_EXCEPTION |
 		XE_GUC_RECV_MSG_CRASH_DUMP_POSTED;
-	xe_mmio_write32(gt, SOFT_SCRATCH(15).reg, 0);
+	xe_mmio_write32(gt, SOFT_SCRATCH(15), 0);
 
 	if (msg & XE_GUC_RECV_MSG_CRASH_DUMP_POSTED)
 		drm_err(&guc_to_xe(guc)->drm,
@@ -553,12 +553,12 @@ static void guc_enable_irq(struct xe_guc *guc)
 		REG_FIELD_PREP(ENGINE0_MASK, GUC_INTR_GUC2HOST)  :
 		REG_FIELD_PREP(ENGINE1_MASK, GUC_INTR_GUC2HOST);
 
-	xe_mmio_write32(gt, GUC_SG_INTR_ENABLE.reg,
+	xe_mmio_write32(gt, GUC_SG_INTR_ENABLE,
 			REG_FIELD_PREP(ENGINE1_MASK, GUC_INTR_GUC2HOST));
 	if (xe_gt_is_media_type(gt))
-		xe_mmio_rmw32(gt, GUC_SG_INTR_MASK.reg, events, 0);
+		xe_mmio_rmw32(gt, GUC_SG_INTR_MASK, events, 0);
 	else
-		xe_mmio_write32(gt, GUC_SG_INTR_MASK.reg, ~events);
+		xe_mmio_write32(gt, GUC_SG_INTR_MASK, ~events);
 }
 
 int xe_guc_enable_communication(struct xe_guc *guc)
@@ -567,7 +567,7 @@ int xe_guc_enable_communication(struct xe_guc *guc)
 
 	guc_enable_irq(guc);
 
-	xe_mmio_rmw32(guc_to_gt(guc), PMINTRMSK.reg,
+	xe_mmio_rmw32(guc_to_gt(guc), PMINTRMSK,
 		      ARAT_EXPIRED_INTRMSK, 0);
 
 	err = xe_guc_ct_enable(&guc->ct);
@@ -620,8 +620,8 @@ int xe_guc_mmio_send_recv(struct xe_guc *guc, const u32 *request,
 	struct xe_device *xe = guc_to_xe(guc);
 	struct xe_gt *gt = guc_to_gt(guc);
 	u32 header, reply;
-	u32 reply_reg = xe_gt_is_media_type(gt) ?
-		MED_VF_SW_FLAG(0).reg : VF_SW_FLAG(0).reg;
+	struct xe_reg reply_reg = xe_gt_is_media_type(gt) ?
+		MED_VF_SW_FLAG(0) : VF_SW_FLAG(0);
 	const u32 LAST_INDEX = VF_SW_FLAG_COUNT;
 	int ret;
 	int i;
@@ -641,14 +641,14 @@ int xe_guc_mmio_send_recv(struct xe_guc *guc, const u32 *request,
 	/* Not in critical data-path, just do if else for GT type */
 	if (xe_gt_is_media_type(gt)) {
 		for (i = 0; i < len; ++i)
-			xe_mmio_write32(gt, MED_VF_SW_FLAG(i).reg,
+			xe_mmio_write32(gt, MED_VF_SW_FLAG(i),
 					request[i]);
-		xe_mmio_read32(gt, MED_VF_SW_FLAG(LAST_INDEX).reg);
+		xe_mmio_read32(gt, MED_VF_SW_FLAG(LAST_INDEX));
 	} else {
 		for (i = 0; i < len; ++i)
-			xe_mmio_write32(gt, VF_SW_FLAG(i).reg,
+			xe_mmio_write32(gt, VF_SW_FLAG(i),
 					request[i]);
-		xe_mmio_read32(gt, VF_SW_FLAG(LAST_INDEX).reg);
+		xe_mmio_read32(gt, VF_SW_FLAG(LAST_INDEX));
 	}
 
 	xe_guc_notify(guc);
@@ -712,9 +712,10 @@ int xe_guc_mmio_send_recv(struct xe_guc *guc, const u32 *request,
 	if (response_buf) {
 		response_buf[0] = header;
 
-		for (i = 1; i < VF_SW_FLAG_COUNT; i++)
-			response_buf[i] =
-				xe_mmio_read32(gt, reply_reg + i * sizeof(u32));
+		for (i = 1; i < VF_SW_FLAG_COUNT; i++) {
+			reply_reg.reg += sizeof(u32);
+			response_buf[i] = xe_mmio_read32(gt, reply_reg);
+		}
 	}
 
 	/* Use data from the GuC response as our return value */
@@ -836,7 +837,7 @@ void xe_guc_print_info(struct xe_guc *guc, struct drm_printer *p)
 	if (err)
 		return;
 
-	status = xe_mmio_read32(gt, GUC_STATUS.reg);
+	status = xe_mmio_read32(gt, GUC_STATUS);
 
 	drm_printf(p, "\nGuC status 0x%08x:\n", status);
 	drm_printf(p, "\tBootrom status = 0x%x\n",
@@ -851,7 +852,7 @@ void xe_guc_print_info(struct xe_guc *guc, struct drm_printer *p)
 	drm_puts(p, "\nScratch registers:\n");
 	for (i = 0; i < SOFT_SCRATCH_COUNT; i++) {
 		drm_printf(p, "\t%2d: \t0x%x\n",
-			   i, xe_mmio_read32(gt, SOFT_SCRATCH(i).reg));
+			   i, xe_mmio_read32(gt, SOFT_SCRATCH(i)));
 	}
 
 	xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
diff --git a/drivers/gpu/drm/xe/xe_guc_ads.c b/drivers/gpu/drm/xe/xe_guc_ads.c
index 84c2d7c624c6..683f2df09c49 100644
--- a/drivers/gpu/drm/xe/xe_guc_ads.c
+++ b/drivers/gpu/drm/xe/xe_guc_ads.c
@@ -428,7 +428,6 @@ static void guc_mmio_regset_write_one(struct xe_guc_ads *ads,
 	struct guc_mmio_reg entry = {
 		.offset = reg.reg,
 		.flags = reg.masked ? GUC_REGSET_MASKED : 0,
-		/* TODO: steering */
 	};
 
 	xe_map_memcpy_to(ads_to_xe(ads), regset_map, n_entry * sizeof(entry),
@@ -551,7 +550,7 @@ static void guc_doorbell_init(struct xe_guc_ads *ads)
 
 	if (GRAPHICS_VER(xe) >= 12 && !IS_DGFX(xe)) {
 		u32 distdbreg =
-			xe_mmio_read32(gt, DIST_DBS_POPULATED.reg);
+			xe_mmio_read32(gt, DIST_DBS_POPULATED);
 
 		ads_blob_write(ads,
 			       system_info.generic_gt_sysinfo[GUC_GENERIC_GT_SYSINFO_DOORBELL_COUNT_PER_SQIDI],
diff --git a/drivers/gpu/drm/xe/xe_guc_pc.c b/drivers/gpu/drm/xe/xe_guc_pc.c
index 72d460d5323b..e799faa1c6b8 100644
--- a/drivers/gpu/drm/xe/xe_guc_pc.c
+++ b/drivers/gpu/drm/xe/xe_guc_pc.c
@@ -317,9 +317,9 @@ static void mtl_update_rpe_value(struct xe_guc_pc *pc)
 	u32 reg;
 
 	if (xe_gt_is_media_type(gt))
-		reg = xe_mmio_read32(gt, MTL_MPE_FREQUENCY.reg);
+		reg = xe_mmio_read32(gt, MTL_MPE_FREQUENCY);
 	else
-		reg = xe_mmio_read32(gt, MTL_GT_RPE_FREQUENCY.reg);
+		reg = xe_mmio_read32(gt, MTL_GT_RPE_FREQUENCY);
 
 	pc->rpe_freq = REG_FIELD_GET(MTL_RPE_MASK, reg) * GT_FREQUENCY_MULTIPLIER;
 }
@@ -336,9 +336,9 @@ static void tgl_update_rpe_value(struct xe_guc_pc *pc)
 	 * PCODE at a different register
 	 */
 	if (xe->info.platform == XE_PVC)
-		reg = xe_mmio_read32(gt, PVC_RP_STATE_CAP.reg);
+		reg = xe_mmio_read32(gt, PVC_RP_STATE_CAP);
 	else
-		reg = xe_mmio_read32(gt, GEN10_FREQ_INFO_REC.reg);
+		reg = xe_mmio_read32(gt, GEN10_FREQ_INFO_REC);
 
 	pc->rpe_freq = REG_FIELD_GET(RPE_MASK, reg) * GT_FREQUENCY_MULTIPLIER;
 }
@@ -380,10 +380,10 @@ static ssize_t freq_act_show(struct device *dev,
 		goto out;
 
 	if (xe->info.platform == XE_METEORLAKE) {
-		freq = xe_mmio_read32(gt, MTL_MIRROR_TARGET_WP1.reg);
+		freq = xe_mmio_read32(gt, MTL_MIRROR_TARGET_WP1);
 		freq = REG_FIELD_GET(MTL_CAGF_MASK, freq);
 	} else {
-		freq = xe_mmio_read32(gt, GEN12_RPSTAT1.reg);
+		freq = xe_mmio_read32(gt, GEN12_RPSTAT1);
 		freq = REG_FIELD_GET(GEN12_CAGF_MASK, freq);
 	}
 
@@ -413,7 +413,7 @@ static ssize_t freq_cur_show(struct device *dev,
 	if (ret)
 		goto out;
 
-	freq = xe_mmio_read32(gt, RPNSWREQ.reg);
+	freq = xe_mmio_read32(gt, RPNSWREQ);
 
 	freq = REG_FIELD_GET(REQ_RATIO_MASK, freq);
 	ret = sysfs_emit(buf, "%d\n", decode_freq(freq));
@@ -588,7 +588,7 @@ static ssize_t rc_status_show(struct device *dev,
 	u32 reg;
 
 	xe_device_mem_access_get(gt_to_xe(gt));
-	reg = xe_mmio_read32(gt, GT_CORE_STATUS.reg);
+	reg = xe_mmio_read32(gt, GT_CORE_STATUS);
 	xe_device_mem_access_put(gt_to_xe(gt));
 
 	switch (REG_FIELD_GET(RCN_MASK, reg)) {
@@ -615,7 +615,7 @@ static ssize_t rc6_residency_show(struct device *dev,
 	if (ret)
 		goto out;
 
-	reg = xe_mmio_read32(gt, GT_GFX_RC6.reg);
+	reg = xe_mmio_read32(gt, GT_GFX_RC6);
 	ret = sysfs_emit(buff, "%u\n", reg);
 
 	XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL));
@@ -646,9 +646,9 @@ static void mtl_init_fused_rp_values(struct xe_guc_pc *pc)
 	xe_device_assert_mem_access(pc_to_xe(pc));
 
 	if (xe_gt_is_media_type(gt))
-		reg = xe_mmio_read32(gt, MTL_MEDIAP_STATE_CAP.reg);
+		reg = xe_mmio_read32(gt, MTL_MEDIAP_STATE_CAP);
 	else
-		reg = xe_mmio_read32(gt, MTL_RP_STATE_CAP.reg);
+		reg = xe_mmio_read32(gt, MTL_RP_STATE_CAP);
 	pc->rp0_freq = REG_FIELD_GET(MTL_RP0_CAP_MASK, reg) *
 		GT_FREQUENCY_MULTIPLIER;
 	pc->rpn_freq = REG_FIELD_GET(MTL_RPN_CAP_MASK, reg) *
@@ -664,9 +664,9 @@ static void tgl_init_fused_rp_values(struct xe_guc_pc *pc)
 	xe_device_assert_mem_access(pc_to_xe(pc));
 
 	if (xe->info.platform == XE_PVC)
-		reg = xe_mmio_read32(gt, PVC_RP_STATE_CAP.reg);
+		reg = xe_mmio_read32(gt, PVC_RP_STATE_CAP);
 	else
-		reg = xe_mmio_read32(gt, GEN6_RP_STATE_CAP.reg);
+		reg = xe_mmio_read32(gt, GEN6_RP_STATE_CAP);
 	pc->rp0_freq = REG_FIELD_GET(RP0_MASK, reg) * GT_FREQUENCY_MULTIPLIER;
 	pc->rpn_freq = REG_FIELD_GET(RPN_MASK, reg) * GT_FREQUENCY_MULTIPLIER;
 }
@@ -745,9 +745,9 @@ static int pc_gucrc_disable(struct xe_guc_pc *pc)
 	if (ret)
 		return ret;
 
-	xe_mmio_write32(gt, PG_ENABLE.reg, 0);
-	xe_mmio_write32(gt, RC_CONTROL.reg, 0);
-	xe_mmio_write32(gt, RC_STATE.reg, 0);
+	xe_mmio_write32(gt, PG_ENABLE, 0);
+	xe_mmio_write32(gt, RC_CONTROL, 0);
+	xe_mmio_write32(gt, RC_STATE, 0);
 
 	XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL));
 	return 0;
diff --git a/drivers/gpu/drm/xe/xe_guc_types.h b/drivers/gpu/drm/xe/xe_guc_types.h
index ac7eec28934d..a304dce4e9f4 100644
--- a/drivers/gpu/drm/xe/xe_guc_types.h
+++ b/drivers/gpu/drm/xe/xe_guc_types.h
@@ -9,6 +9,7 @@
 #include <linux/idr.h>
 #include <linux/xarray.h>
 
+#include "regs/xe_reg_defs.h"
 #include "xe_guc_ads_types.h"
 #include "xe_guc_ct_types.h"
 #include "xe_guc_fwif.h"
@@ -74,7 +75,7 @@ struct xe_guc {
 	/**
 	 * @notify_reg: Register which is written to notify GuC of H2G messages
 	 */
-	u32 notify_reg;
+	struct xe_reg notify_reg;
 	/** @params: Control params for fw initialization */
 	u32 params[GUC_CTL_MAX_DWORDS];
 };
diff --git a/drivers/gpu/drm/xe/xe_huc.c b/drivers/gpu/drm/xe/xe_huc.c
index 55dcaab34ea4..e0377083d1f2 100644
--- a/drivers/gpu/drm/xe/xe_huc.c
+++ b/drivers/gpu/drm/xe/xe_huc.c
@@ -84,7 +84,7 @@ int xe_huc_auth(struct xe_huc *huc)
 		goto fail;
 	}
 
-	ret = xe_mmio_wait32(gt, HUC_KERNEL_LOAD_INFO.reg,
+	ret = xe_mmio_wait32(gt, HUC_KERNEL_LOAD_INFO,
 			     HUC_LOAD_SUCCESSFUL,
 			     HUC_LOAD_SUCCESSFUL, 100000, NULL, false);
 	if (ret) {
@@ -126,7 +126,7 @@ void xe_huc_print_info(struct xe_huc *huc, struct drm_printer *p)
 		return;
 
 	drm_printf(p, "\nHuC status: 0x%08x\n",
-		   xe_mmio_read32(gt, HUC_KERNEL_LOAD_INFO.reg));
+		   xe_mmio_read32(gt, HUC_KERNEL_LOAD_INFO));
 
 	xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
 }
diff --git a/drivers/gpu/drm/xe/xe_hw_engine.c b/drivers/gpu/drm/xe/xe_hw_engine.c
index a9adac0624f6..5e275aff8974 100644
--- a/drivers/gpu/drm/xe/xe_hw_engine.c
+++ b/drivers/gpu/drm/xe/xe_hw_engine.c
@@ -233,20 +233,25 @@ static void hw_engine_fini(struct drm_device *drm, void *arg)
 	hwe->gt = NULL;
 }
 
-static void hw_engine_mmio_write32(struct xe_hw_engine *hwe, u32 reg, u32 val)
+static void hw_engine_mmio_write32(struct xe_hw_engine *hwe, struct xe_reg reg,
+				   u32 val)
 {
-	XE_BUG_ON(reg & hwe->mmio_base);
+	XE_BUG_ON(reg.reg & hwe->mmio_base);
 	xe_force_wake_assert_held(gt_to_fw(hwe->gt), hwe->domain);
 
-	xe_mmio_write32(hwe->gt, reg + hwe->mmio_base, val);
+	reg.reg += hwe->mmio_base;
+
+	xe_mmio_write32(hwe->gt, reg, val);
 }
 
-static u32 hw_engine_mmio_read32(struct xe_hw_engine *hwe, u32 reg)
+static u32 hw_engine_mmio_read32(struct xe_hw_engine *hwe, struct xe_reg reg)
 {
-	XE_BUG_ON(reg & hwe->mmio_base);
+	XE_BUG_ON(reg.reg & hwe->mmio_base);
 	xe_force_wake_assert_held(gt_to_fw(hwe->gt), hwe->domain);
 
-	return xe_mmio_read32(hwe->gt, reg + hwe->mmio_base);
+	reg.reg += hwe->mmio_base;
+
+	return xe_mmio_read32(hwe->gt, reg);
 }
 
 void xe_hw_engine_enable_ring(struct xe_hw_engine *hwe)
@@ -255,17 +260,17 @@ void xe_hw_engine_enable_ring(struct xe_hw_engine *hwe)
 		xe_hw_engine_mask_per_class(hwe->gt, XE_ENGINE_CLASS_COMPUTE);
 
 	if (hwe->class == XE_ENGINE_CLASS_COMPUTE && ccs_mask)
-		xe_mmio_write32(hwe->gt, RCU_MODE.reg,
+		xe_mmio_write32(hwe->gt, RCU_MODE,
 				_MASKED_BIT_ENABLE(RCU_MODE_CCS_ENABLE));
 
-	hw_engine_mmio_write32(hwe, RING_HWSTAM(0).reg, ~0x0);
-	hw_engine_mmio_write32(hwe, RING_HWS_PGA(0).reg,
+	hw_engine_mmio_write32(hwe, RING_HWSTAM(0), ~0x0);
+	hw_engine_mmio_write32(hwe, RING_HWS_PGA(0),
 			       xe_bo_ggtt_addr(hwe->hwsp));
-	hw_engine_mmio_write32(hwe, RING_MODE(0).reg,
+	hw_engine_mmio_write32(hwe, RING_MODE(0),
 			       _MASKED_BIT_ENABLE(GFX_DISABLE_LEGACY_MODE));
-	hw_engine_mmio_write32(hwe, RING_MI_MODE(0).reg,
+	hw_engine_mmio_write32(hwe, RING_MI_MODE(0),
 			       _MASKED_BIT_DISABLE(STOP_RING));
-	hw_engine_mmio_read32(hwe, RING_MI_MODE(0).reg);
+	hw_engine_mmio_read32(hwe, RING_MI_MODE(0));
 }
 
 void
@@ -443,7 +448,7 @@ static void read_media_fuses(struct xe_gt *gt)
 
 	xe_force_wake_assert_held(gt_to_fw(gt), XE_FW_GT);
 
-	media_fuse = xe_mmio_read32(gt, GT_VEBOX_VDBOX_DISABLE.reg);
+	media_fuse = xe_mmio_read32(gt, GT_VEBOX_VDBOX_DISABLE);
 
 	/*
 	 * Pre-Xe_HP platforms had register bits representing absent engines,
@@ -485,7 +490,7 @@ static void read_copy_fuses(struct xe_gt *gt)
 
 	xe_force_wake_assert_held(gt_to_fw(gt), XE_FW_GT);
 
-	bcs_mask = xe_mmio_read32(gt, MIRROR_FUSE3.reg);
+	bcs_mask = xe_mmio_read32(gt, MIRROR_FUSE3);
 	bcs_mask = REG_FIELD_GET(MEML3_EN_MASK, bcs_mask);
 
 	/* BCS0 is always present; only BCS1-BCS8 may be fused off */
@@ -582,63 +587,63 @@ void xe_hw_engine_print_state(struct xe_hw_engine *hwe, struct drm_printer *p)
 	drm_printf(p, "\tMMIO base: 0x%08x\n", hwe->mmio_base);
 
 	drm_printf(p, "\tHWSTAM: 0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_HWSTAM(0).reg));
+		hw_engine_mmio_read32(hwe, RING_HWSTAM(0)));
 	drm_printf(p, "\tRING_HWS_PGA: 0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_HWS_PGA(0).reg));
+		hw_engine_mmio_read32(hwe, RING_HWS_PGA(0)));
 
 	drm_printf(p, "\tRING_EXECLIST_STATUS_LO: 0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_EXECLIST_STATUS_LO(0).reg));
+		hw_engine_mmio_read32(hwe, RING_EXECLIST_STATUS_LO(0)));
 	drm_printf(p, "\tRING_EXECLIST_STATUS_HI: 0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_EXECLIST_STATUS_HI(0).reg));
+		hw_engine_mmio_read32(hwe, RING_EXECLIST_STATUS_HI(0)));
 	drm_printf(p, "\tRING_EXECLIST_SQ_CONTENTS_LO: 0x%08x\n",
 		hw_engine_mmio_read32(hwe,
-					 RING_EXECLIST_SQ_CONTENTS_LO(0).reg));
+					 RING_EXECLIST_SQ_CONTENTS_LO(0)));
 	drm_printf(p, "\tRING_EXECLIST_SQ_CONTENTS_HI: 0x%08x\n",
 		hw_engine_mmio_read32(hwe,
-					 RING_EXECLIST_SQ_CONTENTS_HI(0).reg));
+					 RING_EXECLIST_SQ_CONTENTS_HI(0)));
 	drm_printf(p, "\tRING_EXECLIST_CONTROL: 0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_EXECLIST_CONTROL(0).reg));
+		hw_engine_mmio_read32(hwe, RING_EXECLIST_CONTROL(0)));
 
 	drm_printf(p, "\tRING_START: 0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_START(0).reg));
+		hw_engine_mmio_read32(hwe, RING_START(0)));
 	drm_printf(p, "\tRING_HEAD:  0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_HEAD(0).reg) & HEAD_ADDR);
+		hw_engine_mmio_read32(hwe, RING_HEAD(0)) & HEAD_ADDR);
 	drm_printf(p, "\tRING_TAIL:  0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_TAIL(0).reg) & TAIL_ADDR);
+		hw_engine_mmio_read32(hwe, RING_TAIL(0)) & TAIL_ADDR);
 	drm_printf(p, "\tRING_CTL: 0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_CTL(0).reg));
+		hw_engine_mmio_read32(hwe, RING_CTL(0)));
 	drm_printf(p, "\tRING_MODE: 0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_MI_MODE(0).reg));
+		hw_engine_mmio_read32(hwe, RING_MI_MODE(0)));
 	drm_printf(p, "\tRING_MODE_GEN7: 0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_MODE(0).reg));
+		hw_engine_mmio_read32(hwe, RING_MODE(0)));
 
 	drm_printf(p, "\tRING_IMR:   0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_IMR(0).reg));
+		hw_engine_mmio_read32(hwe, RING_IMR(0)));
 	drm_printf(p, "\tRING_ESR:   0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_ESR(0).reg));
+		hw_engine_mmio_read32(hwe, RING_ESR(0)));
 	drm_printf(p, "\tRING_EMR:   0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_EMR(0).reg));
+		hw_engine_mmio_read32(hwe, RING_EMR(0)));
 	drm_printf(p, "\tRING_EIR:   0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_EIR(0).reg));
+		hw_engine_mmio_read32(hwe, RING_EIR(0)));
 
         drm_printf(p, "\tACTHD:  0x%08x_%08x\n",
-		hw_engine_mmio_read32(hwe, RING_ACTHD_UDW(0).reg),
-		hw_engine_mmio_read32(hwe, RING_ACTHD(0).reg));
+		hw_engine_mmio_read32(hwe, RING_ACTHD_UDW(0)),
+		hw_engine_mmio_read32(hwe, RING_ACTHD(0)));
         drm_printf(p, "\tBBADDR: 0x%08x_%08x\n",
-		hw_engine_mmio_read32(hwe, RING_BBADDR_UDW(0).reg),
-		hw_engine_mmio_read32(hwe, RING_BBADDR(0).reg));
+		hw_engine_mmio_read32(hwe, RING_BBADDR_UDW(0)),
+		hw_engine_mmio_read32(hwe, RING_BBADDR(0)));
         drm_printf(p, "\tDMA_FADDR: 0x%08x_%08x\n",
-		hw_engine_mmio_read32(hwe, RING_DMA_FADD_UDW(0).reg),
-		hw_engine_mmio_read32(hwe, RING_DMA_FADD(0).reg));
+		hw_engine_mmio_read32(hwe, RING_DMA_FADD_UDW(0)),
+		hw_engine_mmio_read32(hwe, RING_DMA_FADD(0)));
 
 	drm_printf(p, "\tIPEIR: 0x%08x\n",
-		hw_engine_mmio_read32(hwe, IPEIR(0).reg));
+		hw_engine_mmio_read32(hwe, IPEIR(0)));
 	drm_printf(p, "\tIPEHR: 0x%08x\n\n",
-		hw_engine_mmio_read32(hwe, IPEHR(0).reg));
+		hw_engine_mmio_read32(hwe, IPEHR(0)));
 
 	if (hwe->class == XE_ENGINE_CLASS_COMPUTE)
 		drm_printf(p, "\tRCU_MODE: 0x%08x\n",
-			xe_mmio_read32(hwe->gt, RCU_MODE.reg));
+			xe_mmio_read32(hwe->gt, RCU_MODE));
 
 }
 
diff --git a/drivers/gpu/drm/xe/xe_irq.c b/drivers/gpu/drm/xe/xe_irq.c
index ac72c1a38e5c..7aa245792927 100644
--- a/drivers/gpu/drm/xe/xe_irq.c
+++ b/drivers/gpu/drm/xe/xe_irq.c
@@ -29,7 +29,7 @@
 
 static void assert_iir_is_zero(struct xe_gt *gt, struct xe_reg reg)
 {
-	u32 val = xe_mmio_read32(gt, reg.reg);
+	u32 val = xe_mmio_read32(gt, reg);
 
 	if (val == 0)
 		return;
@@ -37,10 +37,10 @@ static void assert_iir_is_zero(struct xe_gt *gt, struct xe_reg reg)
 	drm_WARN(&gt_to_xe(gt)->drm, 1,
 		 "Interrupt register 0x%x is not zero: 0x%08x\n",
 		 reg.reg, val);
-	xe_mmio_write32(gt, reg.reg, 0xffffffff);
-	xe_mmio_read32(gt, reg.reg);
-	xe_mmio_write32(gt, reg.reg, 0xffffffff);
-	xe_mmio_read32(gt, reg.reg);
+	xe_mmio_write32(gt, reg, 0xffffffff);
+	xe_mmio_read32(gt, reg);
+	xe_mmio_write32(gt, reg, 0xffffffff);
+	xe_mmio_read32(gt, reg);
 }
 
 /*
@@ -55,32 +55,32 @@ static void unmask_and_enable(struct xe_gt *gt, u32 irqregs, u32 bits)
 	 */
 	assert_iir_is_zero(gt, IIR(irqregs));
 
-	xe_mmio_write32(gt, IER(irqregs).reg, bits);
-	xe_mmio_write32(gt, IMR(irqregs).reg, ~bits);
+	xe_mmio_write32(gt, IER(irqregs), bits);
+	xe_mmio_write32(gt, IMR(irqregs), ~bits);
 
 	/* Posting read */
-	xe_mmio_read32(gt, IMR(irqregs).reg);
+	xe_mmio_read32(gt, IMR(irqregs));
 }
 
 /* Mask and disable all interrupts. */
 static void mask_and_disable(struct xe_gt *gt, u32 irqregs)
 {
-	xe_mmio_write32(gt, IMR(irqregs).reg, ~0);
+	xe_mmio_write32(gt, IMR(irqregs), ~0);
 	/* Posting read */
-	xe_mmio_read32(gt, IMR(irqregs).reg);
+	xe_mmio_read32(gt, IMR(irqregs));
 
-	xe_mmio_write32(gt, IER(irqregs).reg, 0);
+	xe_mmio_write32(gt, IER(irqregs), 0);
 
 	/* IIR can theoretically queue up two events. Be paranoid. */
-	xe_mmio_write32(gt, IIR(irqregs).reg, ~0);
-	xe_mmio_read32(gt, IIR(irqregs).reg);
-	xe_mmio_write32(gt, IIR(irqregs).reg, ~0);
-	xe_mmio_read32(gt, IIR(irqregs).reg);
+	xe_mmio_write32(gt, IIR(irqregs), ~0);
+	xe_mmio_read32(gt, IIR(irqregs));
+	xe_mmio_write32(gt, IIR(irqregs), ~0);
+	xe_mmio_read32(gt, IIR(irqregs));
 }
 
 static u32 xelp_intr_disable(struct xe_gt *gt)
 {
-	xe_mmio_write32(gt, GFX_MSTR_IRQ.reg, 0);
+	xe_mmio_write32(gt, GFX_MSTR_IRQ, 0);
 
 	/*
 	 * Now with master disabled, get a sample of level indications
@@ -88,7 +88,7 @@ static u32 xelp_intr_disable(struct xe_gt *gt)
 	 * New indications can and will light up during processing,
 	 * and will generate new interrupt after enabling master.
 	 */
-	return xe_mmio_read32(gt, GFX_MSTR_IRQ.reg);
+	return xe_mmio_read32(gt, GFX_MSTR_IRQ);
 }
 
 static u32
@@ -99,18 +99,18 @@ gu_misc_irq_ack(struct xe_gt *gt, const u32 master_ctl)
 	if (!(master_ctl & GU_MISC_IRQ))
 		return 0;
 
-	iir = xe_mmio_read32(gt, IIR(GU_MISC_IRQ_OFFSET).reg);
+	iir = xe_mmio_read32(gt, IIR(GU_MISC_IRQ_OFFSET));
 	if (likely(iir))
-		xe_mmio_write32(gt, IIR(GU_MISC_IRQ_OFFSET).reg, iir);
+		xe_mmio_write32(gt, IIR(GU_MISC_IRQ_OFFSET), iir);
 
 	return iir;
 }
 
 static inline void xelp_intr_enable(struct xe_gt *gt, bool stall)
 {
-	xe_mmio_write32(gt, GFX_MSTR_IRQ.reg, MASTER_IRQ);
+	xe_mmio_write32(gt, GFX_MSTR_IRQ, MASTER_IRQ);
 	if (stall)
-		xe_mmio_read32(gt, GFX_MSTR_IRQ.reg);
+		xe_mmio_read32(gt, GFX_MSTR_IRQ);
 }
 
 static void gt_irq_postinstall(struct xe_device *xe, struct xe_gt *gt)
@@ -133,41 +133,41 @@ static void gt_irq_postinstall(struct xe_device *xe, struct xe_gt *gt)
 	smask = irqs << 16;
 
 	/* Enable RCS, BCS, VCS and VECS class interrupts. */
-	xe_mmio_write32(gt, RENDER_COPY_INTR_ENABLE.reg, dmask);
-	xe_mmio_write32(gt, VCS_VECS_INTR_ENABLE.reg, dmask);
+	xe_mmio_write32(gt, RENDER_COPY_INTR_ENABLE, dmask);
+	xe_mmio_write32(gt, VCS_VECS_INTR_ENABLE, dmask);
 	if (ccs_mask)
-		xe_mmio_write32(gt, CCS_RSVD_INTR_ENABLE.reg, smask);
+		xe_mmio_write32(gt, CCS_RSVD_INTR_ENABLE, smask);
 
 	/* Unmask irqs on RCS, BCS, VCS and VECS engines. */
-	xe_mmio_write32(gt, RCS0_RSVD_INTR_MASK.reg, ~smask);
-	xe_mmio_write32(gt, BCS_RSVD_INTR_MASK.reg, ~smask);
+	xe_mmio_write32(gt, RCS0_RSVD_INTR_MASK, ~smask);
+	xe_mmio_write32(gt, BCS_RSVD_INTR_MASK, ~smask);
 	if (bcs_mask & (BIT(1)|BIT(2)))
-		xe_mmio_write32(gt, XEHPC_BCS1_BCS2_INTR_MASK.reg, ~dmask);
+		xe_mmio_write32(gt, XEHPC_BCS1_BCS2_INTR_MASK, ~dmask);
 	if (bcs_mask & (BIT(3)|BIT(4)))
-		xe_mmio_write32(gt, XEHPC_BCS3_BCS4_INTR_MASK.reg, ~dmask);
+		xe_mmio_write32(gt, XEHPC_BCS3_BCS4_INTR_MASK, ~dmask);
 	if (bcs_mask & (BIT(5)|BIT(6)))
-		xe_mmio_write32(gt, XEHPC_BCS5_BCS6_INTR_MASK.reg, ~dmask);
+		xe_mmio_write32(gt, XEHPC_BCS5_BCS6_INTR_MASK, ~dmask);
 	if (bcs_mask & (BIT(7)|BIT(8)))
-		xe_mmio_write32(gt, XEHPC_BCS7_BCS8_INTR_MASK.reg, ~dmask);
-	xe_mmio_write32(gt, VCS0_VCS1_INTR_MASK.reg, ~dmask);
-	xe_mmio_write32(gt, VCS2_VCS3_INTR_MASK.reg, ~dmask);
-	xe_mmio_write32(gt, VECS0_VECS1_INTR_MASK.reg, ~dmask);
+		xe_mmio_write32(gt, XEHPC_BCS7_BCS8_INTR_MASK, ~dmask);
+	xe_mmio_write32(gt, VCS0_VCS1_INTR_MASK, ~dmask);
+	xe_mmio_write32(gt, VCS2_VCS3_INTR_MASK, ~dmask);
+	xe_mmio_write32(gt, VECS0_VECS1_INTR_MASK, ~dmask);
 	if (ccs_mask & (BIT(0)|BIT(1)))
-		xe_mmio_write32(gt, CCS0_CCS1_INTR_MASK.reg, ~dmask);
+		xe_mmio_write32(gt, CCS0_CCS1_INTR_MASK, ~dmask);
 	if (ccs_mask & (BIT(2)|BIT(3)))
-		xe_mmio_write32(gt,  CCS2_CCS3_INTR_MASK.reg, ~dmask);
+		xe_mmio_write32(gt,  CCS2_CCS3_INTR_MASK, ~dmask);
 
 	/*
 	 * RPS interrupts will get enabled/disabled on demand when RPS itself
 	 * is enabled/disabled.
 	 */
 	/* TODO: gt->pm_ier, gt->pm_imr */
-	xe_mmio_write32(gt, GPM_WGBOXPERF_INTR_ENABLE.reg, 0);
-	xe_mmio_write32(gt, GPM_WGBOXPERF_INTR_MASK.reg,  ~0);
+	xe_mmio_write32(gt, GPM_WGBOXPERF_INTR_ENABLE, 0);
+	xe_mmio_write32(gt, GPM_WGBOXPERF_INTR_MASK,  ~0);
 
 	/* Same thing for GuC interrupts */
-	xe_mmio_write32(gt, GUC_SG_INTR_ENABLE.reg, 0);
-	xe_mmio_write32(gt, GUC_SG_INTR_MASK.reg,  ~0);
+	xe_mmio_write32(gt, GUC_SG_INTR_ENABLE, 0);
+	xe_mmio_write32(gt, GUC_SG_INTR_MASK,  ~0);
 }
 
 static void xelp_irq_postinstall(struct xe_device *xe, struct xe_gt *gt)
@@ -192,7 +192,7 @@ gt_engine_identity(struct xe_device *xe,
 
 	lockdep_assert_held(&xe->irq.lock);
 
-	xe_mmio_write32(gt, IIR_REG_SELECTOR(bank).reg, BIT(bit));
+	xe_mmio_write32(gt, IIR_REG_SELECTOR(bank), BIT(bit));
 
 	/*
 	 * NB: Specs do not specify how long to spin wait,
@@ -200,7 +200,7 @@ gt_engine_identity(struct xe_device *xe,
 	 */
 	timeout_ts = (local_clock() >> 10) + 100;
 	do {
-		ident = xe_mmio_read32(gt, INTR_IDENTITY_REG(bank).reg);
+		ident = xe_mmio_read32(gt, INTR_IDENTITY_REG(bank));
 	} while (!(ident & INTR_DATA_VALID) &&
 		 !time_after32(local_clock() >> 10, timeout_ts));
 
@@ -210,7 +210,7 @@ gt_engine_identity(struct xe_device *xe,
 		return 0;
 	}
 
-	xe_mmio_write32(gt, INTR_IDENTITY_REG(bank).reg, INTR_DATA_VALID);
+	xe_mmio_write32(gt, INTR_IDENTITY_REG(bank), INTR_DATA_VALID);
 
 	return ident;
 }
@@ -249,11 +249,11 @@ static void gt_irq_handler(struct xe_device *xe, struct xe_gt *gt,
 
 		if (!xe_gt_is_media_type(gt)) {
 			intr_dw[bank] =
-				xe_mmio_read32(gt, GT_INTR_DW(bank).reg);
+				xe_mmio_read32(gt, GT_INTR_DW(bank));
 			for_each_set_bit(bit, intr_dw + bank, 32)
 				identity[bit] = gt_engine_identity(xe, gt,
 								   bank, bit);
-			xe_mmio_write32(gt, GT_INTR_DW(bank).reg,
+			xe_mmio_write32(gt, GT_INTR_DW(bank),
 					intr_dw[bank]);
 		}
 
@@ -315,14 +315,14 @@ static u32 dg1_intr_disable(struct xe_device *xe)
 	u32 val;
 
 	/* First disable interrupts */
-	xe_mmio_write32(gt, DG1_MSTR_TILE_INTR.reg, 0);
+	xe_mmio_write32(gt, DG1_MSTR_TILE_INTR, 0);
 
 	/* Get the indication levels and ack the master unit */
-	val = xe_mmio_read32(gt, DG1_MSTR_TILE_INTR.reg);
+	val = xe_mmio_read32(gt, DG1_MSTR_TILE_INTR);
 	if (unlikely(!val))
 		return 0;
 
-	xe_mmio_write32(gt, DG1_MSTR_TILE_INTR.reg, val);
+	xe_mmio_write32(gt, DG1_MSTR_TILE_INTR, val);
 
 	return val;
 }
@@ -331,9 +331,9 @@ static void dg1_intr_enable(struct xe_device *xe, bool stall)
 {
 	struct xe_gt *gt = xe_device_get_gt(xe, 0);
 
-	xe_mmio_write32(gt, DG1_MSTR_TILE_INTR.reg, DG1_MSTR_IRQ);
+	xe_mmio_write32(gt, DG1_MSTR_TILE_INTR, DG1_MSTR_IRQ);
 	if (stall)
-		xe_mmio_read32(gt, DG1_MSTR_TILE_INTR.reg);
+		xe_mmio_read32(gt, DG1_MSTR_TILE_INTR);
 }
 
 static void dg1_irq_postinstall(struct xe_device *xe, struct xe_gt *gt)
@@ -373,7 +373,7 @@ static irqreturn_t dg1_irq_handler(int irq, void *arg)
 			continue;
 
 		if (!xe_gt_is_media_type(gt))
-			master_ctl = xe_mmio_read32(gt, GFX_MSTR_IRQ.reg);
+			master_ctl = xe_mmio_read32(gt, GFX_MSTR_IRQ);
 
 		/*
 		 * We might be in irq handler just when PCIe DPC is initiated
@@ -387,7 +387,7 @@ static irqreturn_t dg1_irq_handler(int irq, void *arg)
 		}
 
 		if (!xe_gt_is_media_type(gt))
-			xe_mmio_write32(gt, GFX_MSTR_IRQ.reg, master_ctl);
+			xe_mmio_write32(gt, GFX_MSTR_IRQ, master_ctl);
 		gt_irq_handler(xe, gt, master_ctl, intr_dw, identity);
 
 		/*
@@ -416,34 +416,34 @@ static void gt_irq_reset(struct xe_gt *gt)
 	u32 bcs_mask = xe_hw_engine_mask_per_class(gt, XE_ENGINE_CLASS_COPY);
 
 	/* Disable RCS, BCS, VCS and VECS class engines. */
-	xe_mmio_write32(gt, RENDER_COPY_INTR_ENABLE.reg,	 0);
-	xe_mmio_write32(gt, VCS_VECS_INTR_ENABLE.reg,	 0);
+	xe_mmio_write32(gt, RENDER_COPY_INTR_ENABLE,	 0);
+	xe_mmio_write32(gt, VCS_VECS_INTR_ENABLE,	 0);
 	if (ccs_mask)
-		xe_mmio_write32(gt, CCS_RSVD_INTR_ENABLE.reg, 0);
+		xe_mmio_write32(gt, CCS_RSVD_INTR_ENABLE, 0);
 
 	/* Restore masks irqs on RCS, BCS, VCS and VECS engines. */
-	xe_mmio_write32(gt, RCS0_RSVD_INTR_MASK.reg,	~0);
-	xe_mmio_write32(gt, BCS_RSVD_INTR_MASK.reg,	~0);
+	xe_mmio_write32(gt, RCS0_RSVD_INTR_MASK,	~0);
+	xe_mmio_write32(gt, BCS_RSVD_INTR_MASK,	~0);
 	if (bcs_mask & (BIT(1)|BIT(2)))
-		xe_mmio_write32(gt, XEHPC_BCS1_BCS2_INTR_MASK.reg, ~0);
+		xe_mmio_write32(gt, XEHPC_BCS1_BCS2_INTR_MASK, ~0);
 	if (bcs_mask & (BIT(3)|BIT(4)))
-		xe_mmio_write32(gt, XEHPC_BCS3_BCS4_INTR_MASK.reg, ~0);
+		xe_mmio_write32(gt, XEHPC_BCS3_BCS4_INTR_MASK, ~0);
 	if (bcs_mask & (BIT(5)|BIT(6)))
-		xe_mmio_write32(gt, XEHPC_BCS5_BCS6_INTR_MASK.reg, ~0);
+		xe_mmio_write32(gt, XEHPC_BCS5_BCS6_INTR_MASK, ~0);
 	if (bcs_mask & (BIT(7)|BIT(8)))
-		xe_mmio_write32(gt, XEHPC_BCS7_BCS8_INTR_MASK.reg, ~0);
-	xe_mmio_write32(gt, VCS0_VCS1_INTR_MASK.reg,	~0);
-	xe_mmio_write32(gt, VCS2_VCS3_INTR_MASK.reg,	~0);
-	xe_mmio_write32(gt, VECS0_VECS1_INTR_MASK.reg,	~0);
+		xe_mmio_write32(gt, XEHPC_BCS7_BCS8_INTR_MASK, ~0);
+	xe_mmio_write32(gt, VCS0_VCS1_INTR_MASK,	~0);
+	xe_mmio_write32(gt, VCS2_VCS3_INTR_MASK,	~0);
+	xe_mmio_write32(gt, VECS0_VECS1_INTR_MASK,	~0);
 	if (ccs_mask & (BIT(0)|BIT(1)))
-		xe_mmio_write32(gt, CCS0_CCS1_INTR_MASK.reg, ~0);
+		xe_mmio_write32(gt, CCS0_CCS1_INTR_MASK, ~0);
 	if (ccs_mask & (BIT(2)|BIT(3)))
-		xe_mmio_write32(gt,  CCS2_CCS3_INTR_MASK.reg, ~0);
+		xe_mmio_write32(gt,  CCS2_CCS3_INTR_MASK, ~0);
 
-	xe_mmio_write32(gt, GPM_WGBOXPERF_INTR_ENABLE.reg, 0);
-	xe_mmio_write32(gt, GPM_WGBOXPERF_INTR_MASK.reg,  ~0);
-	xe_mmio_write32(gt, GUC_SG_INTR_ENABLE.reg,	 0);
-	xe_mmio_write32(gt, GUC_SG_INTR_MASK.reg,		~0);
+	xe_mmio_write32(gt, GPM_WGBOXPERF_INTR_ENABLE, 0);
+	xe_mmio_write32(gt, GPM_WGBOXPERF_INTR_MASK,  ~0);
+	xe_mmio_write32(gt, GUC_SG_INTR_ENABLE,	 0);
+	xe_mmio_write32(gt, GUC_SG_INTR_MASK,		~0);
 }
 
 static void xelp_irq_reset(struct xe_gt *gt)
diff --git a/drivers/gpu/drm/xe/xe_mmio.c b/drivers/gpu/drm/xe/xe_mmio.c
index 3b719c774efa..0e91004fa06d 100644
--- a/drivers/gpu/drm/xe/xe_mmio.c
+++ b/drivers/gpu/drm/xe/xe_mmio.c
@@ -153,13 +153,13 @@ int xe_mmio_total_vram_size(struct xe_device *xe, u64 *vram_size, u64 *usable_si
 	struct xe_gt *gt = xe_device_get_gt(xe, 0);
 	struct pci_dev *pdev = to_pci_dev(xe->drm.dev);
 	int err;
-	u32 reg;
+	u32 reg_val;
 
 	if (!xe->info.has_flat_ccs)  {
 		*vram_size = pci_resource_len(pdev, GEN12_LMEM_BAR);
 		if (usable_size)
 			*usable_size = min(*vram_size,
-					   xe_mmio_read64(gt, GSMBASE.reg));
+					   xe_mmio_read64(gt, GSMBASE));
 		return 0;
 	}
 
@@ -167,11 +167,11 @@ int xe_mmio_total_vram_size(struct xe_device *xe, u64 *vram_size, u64 *usable_si
 	if (err)
 		return err;
 
-	reg = xe_gt_mcr_unicast_read_any(gt, XEHP_TILE0_ADDR_RANGE);
-	*vram_size = (u64)REG_FIELD_GET(GENMASK(14, 8), reg) * SZ_1G;
+	reg_val = xe_gt_mcr_unicast_read_any(gt, XEHP_TILE0_ADDR_RANGE);
+	*vram_size = (u64)REG_FIELD_GET(GENMASK(14, 8), reg_val) * SZ_1G;
 	if (usable_size) {
-		reg = xe_gt_mcr_unicast_read_any(gt, XEHP_FLAT_CCS_BASE_ADDR);
-		*usable_size = (u64)REG_FIELD_GET(GENMASK(31, 8), reg) * SZ_64K;
+		reg_val = xe_gt_mcr_unicast_read_any(gt, XEHP_FLAT_CCS_BASE_ADDR);
+		*usable_size = (u64)REG_FIELD_GET(GENMASK(31, 8), reg_val) * SZ_64K;
 		drm_info(&xe->drm, "vram_size: 0x%llx usable_size: 0x%llx\n",
 			 *vram_size, *usable_size);
 	}
@@ -298,7 +298,7 @@ static void xe_mmio_probe_tiles(struct xe_device *xe)
 	if (xe->info.tile_count == 1)
 		return;
 
-	mtcfg = xe_mmio_read64(gt, XEHP_MTCFG_ADDR.reg);
+	mtcfg = xe_mmio_read64(gt, XEHP_MTCFG_ADDR);
 	adj_tile_count = xe->info.tile_count =
 		REG_FIELD_GET(TILE_COUNT, mtcfg) + 1;
 	if (xe->info.media_verx100 >= 1300)
@@ -374,7 +374,7 @@ int xe_mmio_init(struct xe_device *xe)
 	 * keep the GT powered down; we won't be able to communicate with it
 	 * and we should not continue with driver initialization.
 	 */
-	if (IS_DGFX(xe) && !(xe_mmio_read32(gt, GU_CNTL.reg) & LMEM_INIT)) {
+	if (IS_DGFX(xe) && !(xe_mmio_read32(gt, GU_CNTL) & LMEM_INIT)) {
 		drm_err(&xe->drm, "VRAM not initialized by firmware\n");
 		return -ENODEV;
 	}
@@ -403,6 +403,7 @@ int xe_mmio_ioctl(struct drm_device *dev, void *data,
 	struct xe_device *xe = to_xe_device(dev);
 	struct drm_xe_mmio *args = data;
 	unsigned int bits_flag, bytes;
+	struct xe_reg reg;
 	bool allowed;
 	int ret = 0;
 
@@ -435,6 +436,12 @@ int xe_mmio_ioctl(struct drm_device *dev, void *data,
 	if (XE_IOCTL_ERR(xe, args->addr + bytes > xe->mmio.size))
 		return -EINVAL;
 
+	/*
+	 * TODO: migrate to xe_gt_mcr to lookup the mmio range and handle
+	 * multicast registers. Steering would need uapi extension.
+	 */
+	reg = XE_REG(args->addr);
+
 	xe_force_wake_get(gt_to_fw(&xe->gt[0]), XE_FORCEWAKE_ALL);
 
 	if (args->flags & DRM_XE_MMIO_WRITE) {
@@ -444,10 +451,10 @@ int xe_mmio_ioctl(struct drm_device *dev, void *data,
 				ret = -EINVAL;
 				goto exit;
 			}
-			xe_mmio_write32(to_gt(xe), args->addr, args->value);
+			xe_mmio_write32(to_gt(xe), reg, args->value);
 			break;
 		case DRM_XE_MMIO_64BIT:
-			xe_mmio_write64(to_gt(xe), args->addr, args->value);
+			xe_mmio_write64(to_gt(xe), reg, args->value);
 			break;
 		default:
 			drm_dbg(&xe->drm, "Invalid MMIO bit size");
@@ -462,10 +469,10 @@ int xe_mmio_ioctl(struct drm_device *dev, void *data,
 	if (args->flags & DRM_XE_MMIO_READ) {
 		switch (bits_flag) {
 		case DRM_XE_MMIO_32BIT:
-			args->value = xe_mmio_read32(to_gt(xe), args->addr);
+			args->value = xe_mmio_read32(to_gt(xe), reg);
 			break;
 		case DRM_XE_MMIO_64BIT:
-			args->value = xe_mmio_read64(to_gt(xe), args->addr);
+			args->value = xe_mmio_read64(to_gt(xe), reg);
 			break;
 		default:
 			drm_dbg(&xe->drm, "Invalid MMIO bit size");
diff --git a/drivers/gpu/drm/xe/xe_mmio.h b/drivers/gpu/drm/xe/xe_mmio.h
index b72a0a75259f..821701f8ada6 100644
--- a/drivers/gpu/drm/xe/xe_mmio.h
+++ b/drivers/gpu/drm/xe/xe_mmio.h
@@ -9,6 +9,7 @@
 #include <linux/delay.h>
 #include <linux/io-64-nonatomic-lo-hi.h>
 
+#include "regs/xe_reg_defs.h"
 #include "xe_gt_types.h"
 
 struct drm_device;
@@ -17,32 +18,32 @@ struct xe_device;
 
 int xe_mmio_init(struct xe_device *xe);
 
-static inline u8 xe_mmio_read8(struct xe_gt *gt, u32 reg)
+static inline u8 xe_mmio_read8(struct xe_gt *gt, struct xe_reg reg)
 {
-	if (reg < gt->mmio.adj_limit)
-		reg += gt->mmio.adj_offset;
+	if (reg.reg < gt->mmio.adj_limit)
+		reg.reg += gt->mmio.adj_offset;
 
-	return readb(gt->mmio.regs + reg);
+	return readb(gt->mmio.regs + reg.reg);
 }
 
 static inline void xe_mmio_write32(struct xe_gt *gt,
-				   u32 reg, u32 val)
+				   struct xe_reg reg, u32 val)
 {
-	if (reg < gt->mmio.adj_limit)
-		reg += gt->mmio.adj_offset;
+	if (reg.reg < gt->mmio.adj_limit)
+		reg.reg += gt->mmio.adj_offset;
 
-	writel(val, gt->mmio.regs + reg);
+	writel(val, gt->mmio.regs + reg.reg);
 }
 
-static inline u32 xe_mmio_read32(struct xe_gt *gt, u32 reg)
+static inline u32 xe_mmio_read32(struct xe_gt *gt, struct xe_reg reg)
 {
-	if (reg < gt->mmio.adj_limit)
-		reg += gt->mmio.adj_offset;
+	if (reg.reg < gt->mmio.adj_limit)
+		reg.reg += gt->mmio.adj_offset;
 
-	return readl(gt->mmio.regs + reg);
+	return readl(gt->mmio.regs + reg.reg);
 }
 
-static inline u32 xe_mmio_rmw32(struct xe_gt *gt, u32 reg, u32 clr,
+static inline u32 xe_mmio_rmw32(struct xe_gt *gt, struct xe_reg reg, u32 clr,
 				 u32 set)
 {
 	u32 old, reg_val;
@@ -55,24 +56,24 @@ static inline u32 xe_mmio_rmw32(struct xe_gt *gt, u32 reg, u32 clr,
 }
 
 static inline void xe_mmio_write64(struct xe_gt *gt,
-				   u32 reg, u64 val)
+				   struct xe_reg reg, u64 val)
 {
-	if (reg < gt->mmio.adj_limit)
-		reg += gt->mmio.adj_offset;
+	if (reg.reg < gt->mmio.adj_limit)
+		reg.reg += gt->mmio.adj_offset;
 
-	writeq(val, gt->mmio.regs + reg);
+	writeq(val, gt->mmio.regs + reg.reg);
 }
 
-static inline u64 xe_mmio_read64(struct xe_gt *gt, u32 reg)
+static inline u64 xe_mmio_read64(struct xe_gt *gt, struct xe_reg reg)
 {
-	if (reg < gt->mmio.adj_limit)
-		reg += gt->mmio.adj_offset;
+	if (reg.reg < gt->mmio.adj_limit)
+		reg.reg += gt->mmio.adj_offset;
 
-	return readq(gt->mmio.regs + reg);
+	return readq(gt->mmio.regs + reg.reg);
 }
 
 static inline int xe_mmio_write32_and_verify(struct xe_gt *gt,
-					     u32 reg, u32 val,
+					     struct xe_reg reg, u32 val,
 					     u32 mask, u32 eval)
 {
 	u32 reg_val;
@@ -83,8 +84,9 @@ static inline int xe_mmio_write32_and_verify(struct xe_gt *gt,
 	return (reg_val & mask) != eval ? -EINVAL : 0;
 }
 
-static inline int xe_mmio_wait32(struct xe_gt *gt, u32 reg, u32 val, u32 mask,
-				 u32 timeout_us, u32 *out_val, bool atomic)
+static inline int xe_mmio_wait32(struct xe_gt *gt, struct xe_reg reg, u32 val,
+				 u32 mask, u32 timeout_us, u32 *out_val,
+				 bool atomic)
 {
 	ktime_t cur = ktime_get_raw();
 	const ktime_t end = ktime_add_us(cur, timeout_us);
@@ -122,9 +124,10 @@ static inline int xe_mmio_wait32(struct xe_gt *gt, u32 reg, u32 val, u32 mask,
 int xe_mmio_ioctl(struct drm_device *dev, void *data,
 		  struct drm_file *file);
 
-static inline bool xe_mmio_in_range(const struct xe_mmio_range *range, u32 reg)
+static inline bool xe_mmio_in_range(const struct xe_mmio_range *range,
+				    struct xe_reg reg)
 {
-	return range && reg >= range->start && reg <= range->end;
+	return range && reg.reg >= range->start && reg.reg <= range->end;
 }
 
 int xe_mmio_probe_vram(struct xe_device *xe);
diff --git a/drivers/gpu/drm/xe/xe_mocs.c b/drivers/gpu/drm/xe/xe_mocs.c
index 0d07811a573f..1175dec5d90b 100644
--- a/drivers/gpu/drm/xe/xe_mocs.c
+++ b/drivers/gpu/drm/xe/xe_mocs.c
@@ -477,8 +477,9 @@ static void __init_mocs_table(struct xe_gt *gt,
 	for (i = 0;
 	     i < info->n_entries ? (mocs = get_entry_control(info, i)), 1 : 0;
 	     i++) {
-		mocs_dbg(&gt->xe->drm, "%d 0x%x 0x%x\n", i, XE_REG(addr + i * 4).reg, mocs);
-		xe_mmio_write32(gt, XE_REG(addr + i * 4).reg, mocs);
+		struct xe_reg reg = XE_REG(addr + i * 4);
+		mocs_dbg(&gt->xe->drm, "%d 0x%x 0x%x\n", i, reg.reg, mocs);
+		xe_mmio_write32(gt, reg, mocs);
 	}
 }
 
@@ -514,7 +515,7 @@ static void init_l3cc_table(struct xe_gt *gt,
 	     i++) {
 		mocs_dbg(&gt->xe->drm, "%d 0x%x 0x%x\n", i, LNCFCMOCS(i).reg,
 			 l3cc);
-		xe_mmio_write32(gt, LNCFCMOCS(i).reg, l3cc);
+		xe_mmio_write32(gt, LNCFCMOCS(i), l3cc);
 	}
 }
 
diff --git a/drivers/gpu/drm/xe/xe_pat.c b/drivers/gpu/drm/xe/xe_pat.c
index abee41fa3cb9..b56a65779d26 100644
--- a/drivers/gpu/drm/xe/xe_pat.c
+++ b/drivers/gpu/drm/xe/xe_pat.c
@@ -64,14 +64,20 @@ static const u32 mtl_pat_table[] = {
 
 static void program_pat(struct xe_gt *gt, const u32 table[], int n_entries)
 {
-	for (int i = 0; i < n_entries; i++)
-		xe_mmio_write32(gt, _PAT_INDEX(i), table[i]);
+	for (int i = 0; i < n_entries; i++) {
+		struct xe_reg reg = XE_REG(_PAT_INDEX(i));
+
+		xe_mmio_write32(gt, reg, table[i]);
+	}
 }
 
 static void program_pat_mcr(struct xe_gt *gt, const u32 table[], int n_entries)
 {
-	for (int i = 0; i < n_entries; i++)
-		xe_gt_mcr_multicast_write(gt, XE_REG_MCR(_PAT_INDEX(i)), table[i]);
+	for (int i = 0; i < n_entries; i++) {
+		struct xe_reg_mcr reg_mcr = XE_REG_MCR(_PAT_INDEX(i));
+
+		xe_gt_mcr_multicast_write(gt, reg_mcr, table[i]);
+	}
 }
 
 void xe_pat_init(struct xe_gt *gt)
diff --git a/drivers/gpu/drm/xe/xe_pcode.c b/drivers/gpu/drm/xe/xe_pcode.c
index 99bb730684ed..7ab70a83f88d 100644
--- a/drivers/gpu/drm/xe/xe_pcode.c
+++ b/drivers/gpu/drm/xe/xe_pcode.c
@@ -43,7 +43,7 @@ static int pcode_mailbox_status(struct xe_gt *gt)
 
 	lockdep_assert_held(&gt->pcode.lock);
 
-	err = xe_mmio_read32(gt, PCODE_MAILBOX.reg) & PCODE_ERROR_MASK;
+	err = xe_mmio_read32(gt, PCODE_MAILBOX) & PCODE_ERROR_MASK;
 	if (err) {
 		drm_err(&gt_to_xe(gt)->drm, "PCODE Mailbox failed: %d %s", err,
 			err_decode[err].str ?: "Unknown");
@@ -60,22 +60,22 @@ static int pcode_mailbox_rw(struct xe_gt *gt, u32 mbox, u32 *data0, u32 *data1,
 	int err;
 	lockdep_assert_held(&gt->pcode.lock);
 
-	if ((xe_mmio_read32(gt, PCODE_MAILBOX.reg) & PCODE_READY) != 0)
+	if ((xe_mmio_read32(gt, PCODE_MAILBOX) & PCODE_READY) != 0)
 		return -EAGAIN;
 
-	xe_mmio_write32(gt, PCODE_DATA0.reg, *data0);
-	xe_mmio_write32(gt, PCODE_DATA1.reg, data1 ? *data1 : 0);
-	xe_mmio_write32(gt, PCODE_MAILBOX.reg, PCODE_READY | mbox);
+	xe_mmio_write32(gt, PCODE_DATA0, *data0);
+	xe_mmio_write32(gt, PCODE_DATA1, data1 ? *data1 : 0);
+	xe_mmio_write32(gt, PCODE_MAILBOX, PCODE_READY | mbox);
 
-	err = xe_mmio_wait32(gt, PCODE_MAILBOX.reg, 0, PCODE_READY,
+	err = xe_mmio_wait32(gt, PCODE_MAILBOX, 0, PCODE_READY,
 			     timeout_ms * 1000, NULL, atomic);
 	if (err)
 		return err;
 
 	if (return_data) {
-		*data0 = xe_mmio_read32(gt, PCODE_DATA0.reg);
+		*data0 = xe_mmio_read32(gt, PCODE_DATA0);
 		if (data1)
-			*data1 = xe_mmio_read32(gt, PCODE_DATA1.reg);
+			*data1 = xe_mmio_read32(gt, PCODE_DATA1);
 	}
 
 	return pcode_mailbox_status(gt);
diff --git a/drivers/gpu/drm/xe/xe_reg_sr.c b/drivers/gpu/drm/xe/xe_reg_sr.c
index 801f211fb733..51a40a9e532d 100644
--- a/drivers/gpu/drm/xe/xe_reg_sr.c
+++ b/drivers/gpu/drm/xe/xe_reg_sr.c
@@ -163,7 +163,7 @@ static void apply_one_mmio(struct xe_gt *gt, struct xe_reg_sr_entry *entry)
 	else if (entry->clr_bits + 1)
 		val = (reg.mcr ?
 		       xe_gt_mcr_unicast_read_any(gt, reg_mcr) :
-		       xe_mmio_read32(gt, reg.reg)) & (~entry->clr_bits);
+		       xe_mmio_read32(gt, reg)) & (~entry->clr_bits);
 	else
 		val = 0;
 
@@ -179,7 +179,7 @@ static void apply_one_mmio(struct xe_gt *gt, struct xe_reg_sr_entry *entry)
 	if (entry->reg.mcr)
 		xe_gt_mcr_multicast_write(gt, reg_mcr, val);
 	else
-		xe_mmio_write32(gt, reg.reg, val);
+		xe_mmio_write32(gt, reg, val);
 }
 
 void xe_reg_sr_apply_mmio(struct xe_reg_sr *sr, struct xe_gt *gt)
@@ -232,15 +232,17 @@ void xe_reg_sr_apply_whitelist(struct xe_reg_sr *sr, u32 mmio_base,
 	p = drm_debug_printer(KBUILD_MODNAME);
 	xa_for_each(&sr->xa, reg, entry) {
 		xe_reg_whitelist_print_entry(&p, 0, reg, entry);
-		xe_mmio_write32(gt, RING_FORCE_TO_NONPRIV(mmio_base, slot).reg,
+		xe_mmio_write32(gt, RING_FORCE_TO_NONPRIV(mmio_base, slot),
 				reg | entry->set_bits);
 		slot++;
 	}
 
 	/* And clear the rest just in case of garbage */
-	for (; slot < RING_MAX_NONPRIV_SLOTS; slot++)
-		xe_mmio_write32(gt, RING_FORCE_TO_NONPRIV(mmio_base, slot).reg,
-				RING_NOPID(mmio_base).reg);
+	for (; slot < RING_MAX_NONPRIV_SLOTS; slot++) {
+		u32 addr = RING_NOPID(mmio_base).reg;
+
+		xe_mmio_write32(gt, RING_FORCE_TO_NONPRIV(mmio_base, slot), addr);
+	}
 
 	err = xe_force_wake_put(&gt->mmio.fw, XE_FORCEWAKE_ALL);
 	XE_WARN_ON(err);
diff --git a/drivers/gpu/drm/xe/xe_ring_ops.c b/drivers/gpu/drm/xe/xe_ring_ops.c
index 75838b8bb9a8..733ed8a30c2e 100644
--- a/drivers/gpu/drm/xe/xe_ring_ops.c
+++ b/drivers/gpu/drm/xe/xe_ring_ops.c
@@ -44,10 +44,11 @@ static u32 preparser_disable(bool state)
 	return MI_ARB_CHECK | BIT(8) | state;
 }
 
-static int emit_aux_table_inv(struct xe_gt *gt, u32 addr, u32 *dw, int i)
+static int emit_aux_table_inv(struct xe_gt *gt, struct xe_reg reg,
+			      u32 *dw, int i)
 {
 	dw[i++] = MI_LOAD_REGISTER_IMM(1) | MI_LRI_MMIO_REMAP_EN;
-	dw[i++] = addr + gt->mmio.adj_offset;
+	dw[i++] = reg.reg + gt->mmio.adj_offset;
 	dw[i++] = AUX_INV;
 	dw[i++] = MI_NOOP;
 
@@ -203,9 +204,9 @@ static void __emit_job_gen12_video(struct xe_sched_job *job, struct xe_lrc *lrc,
 	/* hsdes: 1809175790 */
 	if (!xe->info.has_flat_ccs) {
 		if (decode)
-			i = emit_aux_table_inv(gt, VD0_AUX_NV.reg, dw, i);
+			i = emit_aux_table_inv(gt, VD0_AUX_NV, dw, i);
 		else
-			i = emit_aux_table_inv(gt, VE0_AUX_NV.reg, dw, i);
+			i = emit_aux_table_inv(gt, VE0_AUX_NV, dw, i);
 	}
 	dw[i++] = preparser_disable(false);
 
@@ -248,7 +249,7 @@ static void __emit_job_gen12_render_compute(struct xe_sched_job *job,
 
 	/* hsdes: 1809175790 */
 	if (!xe->info.has_flat_ccs)
-		i = emit_aux_table_inv(gt, GFX_CCS_AUX_NV.reg, dw, i);
+		i = emit_aux_table_inv(gt, GFX_CCS_AUX_NV, dw, i);
 
 	dw[i++] = preparser_disable(false);
 
diff --git a/drivers/gpu/drm/xe/xe_ttm_stolen_mgr.c b/drivers/gpu/drm/xe/xe_ttm_stolen_mgr.c
index 9ce0a0585539..a3855870321f 100644
--- a/drivers/gpu/drm/xe/xe_ttm_stolen_mgr.c
+++ b/drivers/gpu/drm/xe/xe_ttm_stolen_mgr.c
@@ -65,7 +65,7 @@ static s64 detect_bar2_dgfx(struct xe_device *xe, struct xe_ttm_stolen_mgr *mgr)
 	}
 
 	/* Use DSM base address instead for stolen memory */
-	mgr->stolen_base = xe_mmio_read64(gt, DSMBASE.reg) & BDSM_MASK;
+	mgr->stolen_base = xe_mmio_read64(gt, DSMBASE) & BDSM_MASK;
 	if (drm_WARN_ON(&xe->drm, vram_size < mgr->stolen_base))
 		return 0;
 
@@ -88,7 +88,7 @@ static u32 detect_bar2_integrated(struct xe_device *xe, struct xe_ttm_stolen_mgr
 	u32 stolen_size;
 	u32 ggc, gms;
 
-	ggc = xe_mmio_read32(to_gt(xe), GGC.reg);
+	ggc = xe_mmio_read32(to_gt(xe), GGC);
 
 	/* check GGMS, should be fixed 0x3 (8MB) */
 	if (drm_WARN_ON(&xe->drm, (ggc & GGMS_MASK) != GGMS_MASK))
diff --git a/drivers/gpu/drm/xe/xe_uc_fw.c b/drivers/gpu/drm/xe/xe_uc_fw.c
index cd5433b5c970..5c3a571d2a29 100644
--- a/drivers/gpu/drm/xe/xe_uc_fw.c
+++ b/drivers/gpu/drm/xe/xe_uc_fw.c
@@ -462,33 +462,33 @@ static int uc_fw_xfer(struct xe_uc_fw *uc_fw, u32 offset, u32 dma_flags)
 
 	/* Set the source address for the uCode */
 	src_offset = uc_fw_ggtt_offset(uc_fw);
-	xe_mmio_write32(gt, DMA_ADDR_0_LOW.reg, lower_32_bits(src_offset));
-	xe_mmio_write32(gt, DMA_ADDR_0_HIGH.reg, upper_32_bits(src_offset));
+	xe_mmio_write32(gt, DMA_ADDR_0_LOW, lower_32_bits(src_offset));
+	xe_mmio_write32(gt, DMA_ADDR_0_HIGH, upper_32_bits(src_offset));
 
 	/* Set the DMA destination */
-	xe_mmio_write32(gt, DMA_ADDR_1_LOW.reg, offset);
-	xe_mmio_write32(gt, DMA_ADDR_1_HIGH.reg, DMA_ADDRESS_SPACE_WOPCM);
+	xe_mmio_write32(gt, DMA_ADDR_1_LOW, offset);
+	xe_mmio_write32(gt, DMA_ADDR_1_HIGH, DMA_ADDRESS_SPACE_WOPCM);
 
 	/*
 	 * Set the transfer size. The header plus uCode will be copied to WOPCM
 	 * via DMA, excluding any other components
 	 */
-	xe_mmio_write32(gt, DMA_COPY_SIZE.reg,
+	xe_mmio_write32(gt, DMA_COPY_SIZE,
 			sizeof(struct uc_css_header) + uc_fw->ucode_size);
 
 	/* Start the DMA */
-	xe_mmio_write32(gt, DMA_CTRL.reg,
+	xe_mmio_write32(gt, DMA_CTRL,
 			_MASKED_BIT_ENABLE(dma_flags | START_DMA));
 
 	/* Wait for DMA to finish */
-	ret = xe_mmio_wait32(gt, DMA_CTRL.reg, 0, START_DMA, 100000, &dma_ctrl,
+	ret = xe_mmio_wait32(gt, DMA_CTRL, 0, START_DMA, 100000, &dma_ctrl,
 			     false);
 	if (ret)
 		drm_err(&xe->drm, "DMA for %s fw failed, DMA_CTRL=%u\n",
 			xe_uc_fw_type_repr(uc_fw->type), dma_ctrl);
 
 	/* Disable the bits once DMA is over */
-	xe_mmio_write32(gt, DMA_CTRL.reg, _MASKED_BIT_DISABLE(dma_flags));
+	xe_mmio_write32(gt, DMA_CTRL, _MASKED_BIT_DISABLE(dma_flags));
 
 	return ret;
 }
diff --git a/drivers/gpu/drm/xe/xe_wopcm.c b/drivers/gpu/drm/xe/xe_wopcm.c
index 7b5014aea9c8..11eea970c207 100644
--- a/drivers/gpu/drm/xe/xe_wopcm.c
+++ b/drivers/gpu/drm/xe/xe_wopcm.c
@@ -124,8 +124,8 @@ static bool __check_layout(struct xe_device *xe, u32 wopcm_size,
 static bool __wopcm_regs_locked(struct xe_gt *gt,
 				u32 *guc_wopcm_base, u32 *guc_wopcm_size)
 {
-	u32 reg_base = xe_mmio_read32(gt, DMA_GUC_WOPCM_OFFSET.reg);
-	u32 reg_size = xe_mmio_read32(gt, GUC_WOPCM_SIZE.reg);
+	u32 reg_base = xe_mmio_read32(gt, DMA_GUC_WOPCM_OFFSET);
+	u32 reg_size = xe_mmio_read32(gt, GUC_WOPCM_SIZE);
 
 	if (!(reg_size & GUC_WOPCM_SIZE_LOCKED) ||
 	    !(reg_base & GUC_WOPCM_OFFSET_VALID))
@@ -152,13 +152,13 @@ static int __wopcm_init_regs(struct xe_device *xe, struct xe_gt *gt,
 	XE_BUG_ON(size & ~GUC_WOPCM_SIZE_MASK);
 
 	mask = GUC_WOPCM_SIZE_MASK | GUC_WOPCM_SIZE_LOCKED;
-	err = xe_mmio_write32_and_verify(gt, GUC_WOPCM_SIZE.reg, size, mask,
+	err = xe_mmio_write32_and_verify(gt, GUC_WOPCM_SIZE, size, mask,
 					 size | GUC_WOPCM_SIZE_LOCKED);
 	if (err)
 		goto err_out;
 
 	mask = GUC_WOPCM_OFFSET_MASK | GUC_WOPCM_OFFSET_VALID | huc_agent;
-	err = xe_mmio_write32_and_verify(gt, DMA_GUC_WOPCM_OFFSET.reg,
+	err = xe_mmio_write32_and_verify(gt, DMA_GUC_WOPCM_OFFSET,
 					 base | huc_agent, mask,
 					 base | huc_agent |
 					 GUC_WOPCM_OFFSET_VALID);
@@ -171,10 +171,10 @@ static int __wopcm_init_regs(struct xe_device *xe, struct xe_gt *gt,
 	drm_notice(&xe->drm, "Failed to init uC WOPCM registers!\n");
 	drm_notice(&xe->drm, "%s(%#x)=%#x\n", "DMA_GUC_WOPCM_OFFSET",
 		   DMA_GUC_WOPCM_OFFSET.reg,
-		   xe_mmio_read32(gt, DMA_GUC_WOPCM_OFFSET.reg));
+		   xe_mmio_read32(gt, DMA_GUC_WOPCM_OFFSET));
 	drm_notice(&xe->drm, "%s(%#x)=%#x\n", "GUC_WOPCM_SIZE",
 		   GUC_WOPCM_SIZE.reg,
-		   xe_mmio_read32(gt, GUC_WOPCM_SIZE.reg));
+		   xe_mmio_read32(gt, GUC_WOPCM_SIZE));
 
 	return err;
 }
-- 
2.40.1



* [Intel-xe] [PATCH v2 2/4] fixup! drm/xe/display: Implement display support
  2023-05-08 22:53 [Intel-xe] [PATCH v2 0/4] Convert xe_mmio to struct xe_reg Lucas De Marchi
  2023-05-08 22:53 ` [Intel-xe] [PATCH v2 1/4] drm/xe/mmio: Use " Lucas De Marchi
@ 2023-05-08 22:53 ` Lucas De Marchi
  2023-05-09 15:26   ` Rodrigo Vivi
  2023-05-08 22:53 ` [Intel-xe] [PATCH v2 3/4] drm/xe: Rename reg field to addr Lucas De Marchi
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 12+ messages in thread
From: Lucas De Marchi @ 2023-05-08 22:53 UTC (permalink / raw)
  To: intel-xe; +Cc: Lucas De Marchi, Rodrigo Vivi

WARNING: This should only be squashed when the display implementation
moves above commit "drm/xe/mmio: Use struct xe_reg".

With the display code now moved above the xe_reg conversion in
xe_mmio, it should use the new types everywhere.
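
The conversion follows one pattern in every compat helper: build a
struct xe_reg from the raw i915 offset, then call into xe_mmio with it.
A minimal sketch of that shim, mirroring the helpers in the diff below
(fake_uncore and __fake_uncore_to_gt are the existing compat types):

	static inline u32 intel_uncore_read(struct fake_uncore *uncore,
					    i915_reg_t i915_reg)
	{
		/* wrap the raw i915 register offset in an xe_reg */
		struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));

		return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg);
	}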

Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 .../drm/xe/compat-i915-headers/intel_uncore.h | 103 +++++++++++++-----
 1 file changed, 74 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/xe/compat-i915-headers/intel_uncore.h b/drivers/gpu/drm/xe/compat-i915-headers/intel_uncore.h
index 90d79290a211..14f195fe275d 100644
--- a/drivers/gpu/drm/xe/compat-i915-headers/intel_uncore.h
+++ b/drivers/gpu/drm/xe/compat-i915-headers/intel_uncore.h
@@ -17,82 +17,127 @@ static inline struct xe_gt *__fake_uncore_to_gt(struct fake_uncore *uncore)
 	return to_gt(xe);
 }
 
-static inline u32 intel_uncore_read(struct fake_uncore *uncore, i915_reg_t reg)
+static inline u32 intel_uncore_read(struct fake_uncore *uncore,
+				    i915_reg_t i915_reg)
 {
-	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg.reg);
+	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
+
+	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg);
 }
 
-static inline u32 intel_uncore_read8(struct fake_uncore *uncore, i915_reg_t reg)
+static inline u32 intel_uncore_read8(struct fake_uncore *uncore,
+				     i915_reg_t i915_reg)
 {
-	return xe_mmio_read8(__fake_uncore_to_gt(uncore), reg.reg);
+	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
+
+	return xe_mmio_read8(__fake_uncore_to_gt(uncore), reg);
 }
 
-static inline u64 intel_uncore_read64_2x32(struct fake_uncore *uncore, i915_reg_t lower_reg, i915_reg_t upper_reg)
+static inline u64
+intel_uncore_read64_2x32(struct fake_uncore *uncore,
+			 i915_reg_t i915_lower_reg, i915_reg_t i915_upper_reg)
 {
+	struct xe_reg lower_reg = XE_REG(i915_mmio_reg_offset(i915_lower_reg));
+	struct xe_reg upper_reg = XE_REG(i915_mmio_reg_offset(i915_upper_reg));
 	u32 upper, lower, old_upper;
 	int loop = 0;
 
-	upper = xe_mmio_read32(__fake_uncore_to_gt(uncore), upper_reg.reg);
+	upper = xe_mmio_read32(__fake_uncore_to_gt(uncore), upper_reg);
 	do {
 		old_upper = upper;
-		lower = xe_mmio_read32(__fake_uncore_to_gt(uncore), lower_reg.reg);
-		upper = xe_mmio_read32(__fake_uncore_to_gt(uncore), upper_reg.reg);
+		lower = xe_mmio_read32(__fake_uncore_to_gt(uncore), lower_reg);
+		upper = xe_mmio_read32(__fake_uncore_to_gt(uncore), upper_reg);
 	} while (upper != old_upper && loop++ < 2);
 
 	return (u64)upper << 32 | lower;
 }
 
-static inline void intel_uncore_posting_read(struct fake_uncore *uncore, i915_reg_t reg)
+static inline void intel_uncore_posting_read(struct fake_uncore *uncore,
+					     i915_reg_t i915_reg)
 {
-	xe_mmio_read32(__fake_uncore_to_gt(uncore), reg.reg);
+	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
+
+	xe_mmio_read32(__fake_uncore_to_gt(uncore), reg);
 }
 
-static inline void intel_uncore_write(struct fake_uncore *uncore, i915_reg_t reg, u32 val)
+static inline void intel_uncore_write(struct fake_uncore *uncore,
+				      i915_reg_t i915_reg, u32 val)
 {
-	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg.reg, val);
+	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
+
+	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg, val);
 }
 
-static inline u32 intel_uncore_rmw(struct fake_uncore *uncore, i915_reg_t reg, u32 clear, u32 set)
+static inline u32 intel_uncore_rmw(struct fake_uncore *uncore,
+				   i915_reg_t i915_reg, u32 clear, u32 set)
 {
-	return xe_mmio_rmw32(__fake_uncore_to_gt(uncore), reg.reg, clear, set);
+	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
+
+	return xe_mmio_rmw32(__fake_uncore_to_gt(uncore), reg, clear, set);
 }
 
-static inline int intel_wait_for_register(struct fake_uncore *uncore, i915_reg_t reg, u32 mask, u32 value, unsigned int timeout)
+static inline int intel_wait_for_register(struct fake_uncore *uncore,
+					  i915_reg_t i915_reg, u32 mask,
+					  u32 value, unsigned int timeout)
 {
-	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg.reg, value, mask, timeout * USEC_PER_MSEC, NULL, false);
+	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
+
+	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg, value, mask,
+			      timeout * USEC_PER_MSEC, NULL, false);
 }
 
-static inline int intel_wait_for_register_fw(struct fake_uncore *uncore, i915_reg_t reg, u32 mask, u32 value, unsigned int timeout)
+static inline int intel_wait_for_register_fw(struct fake_uncore *uncore,
+					     i915_reg_t i915_reg, u32 mask,
+					     u32 value, unsigned int timeout)
 {
-	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg.reg, value, mask, timeout * USEC_PER_MSEC, NULL, false);
+	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
+
+	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg, value, mask,
+			      timeout * USEC_PER_MSEC, NULL, false);
 }
 
-static inline int __intel_wait_for_register(struct fake_uncore *uncore, i915_reg_t reg, u32 mask, u32 value,
-					    unsigned int fast_timeout_us, unsigned int slow_timeout_ms, u32 *out_value)
+static inline int
+__intel_wait_for_register(struct fake_uncore *uncore, i915_reg_t i915_reg,
+			  u32 mask, u32 value, unsigned int fast_timeout_us,
+			  unsigned int slow_timeout_ms, u32 *out_value)
 {
-	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg.reg, value, mask,
+	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
+
+	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg, value, mask,
 			      fast_timeout_us + 1000 * slow_timeout_ms,
 			      out_value, false);
 }
 
-static inline u32 intel_uncore_read_fw(struct fake_uncore *uncore, i915_reg_t reg)
+static inline u32 intel_uncore_read_fw(struct fake_uncore *uncore,
+				       i915_reg_t i915_reg)
 {
-	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg.reg);
+	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
+
+	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg);
 }
 
-static inline void intel_uncore_write_fw(struct fake_uncore *uncore, i915_reg_t reg, u32 val)
+static inline void intel_uncore_write_fw(struct fake_uncore *uncore,
+					 i915_reg_t i915_reg, u32 val)
 {
-	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg.reg, val);
+	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
+
+	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg, val);
 }
 
-static inline u32 intel_uncore_read_notrace(struct fake_uncore *uncore, i915_reg_t reg)
+static inline u32 intel_uncore_read_notrace(struct fake_uncore *uncore,
+					    i915_reg_t i915_reg)
 {
-	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg.reg);
+	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
+
+	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg);
 }
 
-static inline void intel_uncore_write_notrace(struct fake_uncore *uncore, i915_reg_t reg, u32 val)
+static inline void intel_uncore_write_notrace(struct fake_uncore *uncore,
+					      i915_reg_t i915_reg, u32 val)
 {
-	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg.reg, val);
+	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
+
+	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg, val);
 }
 
 #endif /* __INTEL_UNCORE_H__ */
-- 
2.40.1



* [Intel-xe] [PATCH v2 3/4] drm/xe: Rename reg field to addr
  2023-05-08 22:53 [Intel-xe] [PATCH v2 0/4] Convert xe_mmio to struct xe_reg Lucas De Marchi
  2023-05-08 22:53 ` [Intel-xe] [PATCH v2 1/4] drm/xe/mmio: Use " Lucas De Marchi
  2023-05-08 22:53 ` [Intel-xe] [PATCH v2 2/4] fixup! drm/xe/display: Implement display support Lucas De Marchi
@ 2023-05-08 22:53 ` Lucas De Marchi
  2023-05-09 15:27   ` Rodrigo Vivi
  2023-05-08 22:53 ` [Intel-xe] [PATCH v2 4/4] drm/xe: Fix indent in xe_hw_engine_print_state() Lucas De Marchi
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 12+ messages in thread
From: Lucas De Marchi @ 2023-05-08 22:53 UTC (permalink / raw)
  To: intel-xe; +Cc: Lucas De Marchi, Rodrigo Vivi

Rename the address field from "reg" to "addr" so it's easier to
understand what it holds.
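
A minimal sketch of what changes at a call site (the 0x1234 offset is
illustrative only):

	struct xe_reg reg = XE_REG(0x1234);

	/* before: "reg.reg" repeats the struct name and reads ambiguously */
	drm_dbg(&xe->drm, "REG[0x%x]\n", reg.reg);

	/* after: the field name says what it holds */
	drm_dbg(&xe->drm, "REG[0x%x]\n", reg.addr);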

Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/xe/regs/xe_reg_defs.h  |  6 ++---
 drivers/gpu/drm/xe/tests/xe_rtp_test.c |  2 +-
 drivers/gpu/drm/xe/xe_force_wake.c     |  2 +-
 drivers/gpu/drm/xe/xe_gt_mcr.c         |  2 +-
 drivers/gpu/drm/xe/xe_guc.c            |  2 +-
 drivers/gpu/drm/xe/xe_guc_ads.c        |  2 +-
 drivers/gpu/drm/xe/xe_hw_engine.c      |  8 +++----
 drivers/gpu/drm/xe/xe_irq.c            |  2 +-
 drivers/gpu/drm/xe/xe_mmio.c           |  2 +-
 drivers/gpu/drm/xe/xe_mmio.h           | 32 +++++++++++++-------------
 drivers/gpu/drm/xe/xe_mocs.c           |  6 ++---
 drivers/gpu/drm/xe/xe_pci.c            |  4 ++--
 drivers/gpu/drm/xe/xe_reg_sr.c         |  6 ++---
 drivers/gpu/drm/xe/xe_ring_ops.c       |  2 +-
 drivers/gpu/drm/xe/xe_rtp.c            |  2 +-
 drivers/gpu/drm/xe/xe_wopcm.c          |  4 ++--
 16 files changed, 42 insertions(+), 42 deletions(-)

diff --git a/drivers/gpu/drm/xe/regs/xe_reg_defs.h b/drivers/gpu/drm/xe/regs/xe_reg_defs.h
index da781bc7bdc7..4554362ff4d9 100644
--- a/drivers/gpu/drm/xe/regs/xe_reg_defs.h
+++ b/drivers/gpu/drm/xe/regs/xe_reg_defs.h
@@ -18,8 +18,8 @@
 struct xe_reg {
 	union {
 		struct {
-			/** @reg: address */
-			u32 reg:22;
+			/** @addr: address */
+			u32 addr:22;
 			/**
 			 * @masked: register is "masked", with upper 16bits used
 			 * to identify the bits that are updated on the lower
@@ -71,7 +71,7 @@ struct xe_reg_mcr {
  * object of the right type. However when initializing static const storage,
  * where a compound statement is not allowed, this can be used instead.
  */
-#define XE_REG_INITIALIZER(r_, ...)    { .reg = r_, __VA_ARGS__ }
+#define XE_REG_INITIALIZER(r_, ...)    { .addr = r_, __VA_ARGS__ }
 
 
 /**
diff --git a/drivers/gpu/drm/xe/tests/xe_rtp_test.c b/drivers/gpu/drm/xe/tests/xe_rtp_test.c
index ad2fe8a39a78..4b2aac5ccf28 100644
--- a/drivers/gpu/drm/xe/tests/xe_rtp_test.c
+++ b/drivers/gpu/drm/xe/tests/xe_rtp_test.c
@@ -244,7 +244,7 @@ static void xe_rtp_process_tests(struct kunit *test)
 	xe_rtp_process(param->entries, reg_sr, &xe->gt[0], NULL);
 
 	xa_for_each(&reg_sr->xa, idx, sre) {
-		if (idx == param->expected_reg.reg)
+		if (idx == param->expected_reg.addr)
 			sr_entry = sre;
 
 		count++;
diff --git a/drivers/gpu/drm/xe/xe_force_wake.c b/drivers/gpu/drm/xe/xe_force_wake.c
index 363b81c3d746..f0f0592fc598 100644
--- a/drivers/gpu/drm/xe/xe_force_wake.c
+++ b/drivers/gpu/drm/xe/xe_force_wake.c
@@ -129,7 +129,7 @@ static int domain_sleep_wait(struct xe_gt *gt,
 	for (tmp__ = (mask__); tmp__; tmp__ &= ~BIT(ffs(tmp__) - 1)) \
 		for_each_if((domain__ = ((fw__)->domains + \
 					 (ffs(tmp__) - 1))) && \
-					 domain__->reg_ctl.reg)
+					 domain__->reg_ctl.addr)
 
 int xe_force_wake_get(struct xe_force_wake *fw,
 		      enum xe_force_wake_domains domains)
diff --git a/drivers/gpu/drm/xe/xe_gt_mcr.c b/drivers/gpu/drm/xe/xe_gt_mcr.c
index c6b9e9869fee..3db550c85e32 100644
--- a/drivers/gpu/drm/xe/xe_gt_mcr.c
+++ b/drivers/gpu/drm/xe/xe_gt_mcr.c
@@ -398,7 +398,7 @@ static bool xe_gt_mcr_get_nonterminated_steering(struct xe_gt *gt,
 	 */
 	drm_WARN(&gt_to_xe(gt)->drm, true,
 		 "Did not find MCR register %#x in any MCR steering table\n",
-		 reg.reg);
+		 reg.addr);
 	*group = 0;
 	*instance = 0;
 
diff --git a/drivers/gpu/drm/xe/xe_guc.c b/drivers/gpu/drm/xe/xe_guc.c
index e8a126ad400f..eb4af4c71124 100644
--- a/drivers/gpu/drm/xe/xe_guc.c
+++ b/drivers/gpu/drm/xe/xe_guc.c
@@ -713,7 +713,7 @@ int xe_guc_mmio_send_recv(struct xe_guc *guc, const u32 *request,
 		response_buf[0] = header;
 
 		for (i = 1; i < VF_SW_FLAG_COUNT; i++) {
-			reply_reg.reg += i * sizeof(u32);
+			reply_reg.addr += i * sizeof(u32);
 			response_buf[i] = xe_mmio_read32(gt, reply_reg);
 		}
 	}
diff --git a/drivers/gpu/drm/xe/xe_guc_ads.c b/drivers/gpu/drm/xe/xe_guc_ads.c
index 683f2df09c49..6d550d746909 100644
--- a/drivers/gpu/drm/xe/xe_guc_ads.c
+++ b/drivers/gpu/drm/xe/xe_guc_ads.c
@@ -426,7 +426,7 @@ static void guc_mmio_regset_write_one(struct xe_guc_ads *ads,
 				      unsigned int n_entry)
 {
 	struct guc_mmio_reg entry = {
-		.offset = reg.reg,
+		.offset = reg.addr,
 		.flags = reg.masked ? GUC_REGSET_MASKED : 0,
 	};
 
diff --git a/drivers/gpu/drm/xe/xe_hw_engine.c b/drivers/gpu/drm/xe/xe_hw_engine.c
index 5e275aff8974..696b9d949163 100644
--- a/drivers/gpu/drm/xe/xe_hw_engine.c
+++ b/drivers/gpu/drm/xe/xe_hw_engine.c
@@ -236,20 +236,20 @@ static void hw_engine_fini(struct drm_device *drm, void *arg)
 static void hw_engine_mmio_write32(struct xe_hw_engine *hwe, struct xe_reg reg,
 				   u32 val)
 {
-	XE_BUG_ON(reg.reg & hwe->mmio_base);
+	XE_BUG_ON(reg.addr & hwe->mmio_base);
 	xe_force_wake_assert_held(gt_to_fw(hwe->gt), hwe->domain);
 
-	reg.reg += hwe->mmio_base;
+	reg.addr += hwe->mmio_base;
 
 	xe_mmio_write32(hwe->gt, reg, val);
 }
 
 static u32 hw_engine_mmio_read32(struct xe_hw_engine *hwe, struct xe_reg reg)
 {
-	XE_BUG_ON(reg.reg & hwe->mmio_base);
+	XE_BUG_ON(reg.addr & hwe->mmio_base);
 	xe_force_wake_assert_held(gt_to_fw(hwe->gt), hwe->domain);
 
-	reg.reg += hwe->mmio_base;
+	reg.addr += hwe->mmio_base;
 
 	return xe_mmio_read32(hwe->gt, reg);
 }
diff --git a/drivers/gpu/drm/xe/xe_irq.c b/drivers/gpu/drm/xe/xe_irq.c
index 7aa245792927..5bf359c81cc5 100644
--- a/drivers/gpu/drm/xe/xe_irq.c
+++ b/drivers/gpu/drm/xe/xe_irq.c
@@ -36,7 +36,7 @@ static void assert_iir_is_zero(struct xe_gt *gt, struct xe_reg reg)
 
 	drm_WARN(&gt_to_xe(gt)->drm, 1,
 		 "Interrupt register 0x%x is not zero: 0x%08x\n",
-		 reg.reg, val);
+		 reg.addr, val);
 	xe_mmio_write32(gt, reg, 0xffffffff);
 	xe_mmio_read32(gt, reg);
 	xe_mmio_write32(gt, reg, 0xffffffff);
diff --git a/drivers/gpu/drm/xe/xe_mmio.c b/drivers/gpu/drm/xe/xe_mmio.c
index 0e91004fa06d..c7fbb1cc1f64 100644
--- a/drivers/gpu/drm/xe/xe_mmio.c
+++ b/drivers/gpu/drm/xe/xe_mmio.c
@@ -421,7 +421,7 @@ int xe_mmio_ioctl(struct drm_device *dev, void *data,
 		unsigned int i;
 
 		for (i = 0; i < ARRAY_SIZE(mmio_read_whitelist); i++) {
-			if (mmio_read_whitelist[i].reg == args->addr) {
+			if (mmio_read_whitelist[i].addr == args->addr) {
 				allowed = true;
 				break;
 			}
diff --git a/drivers/gpu/drm/xe/xe_mmio.h b/drivers/gpu/drm/xe/xe_mmio.h
index 821701f8ada6..01732ff7e4c6 100644
--- a/drivers/gpu/drm/xe/xe_mmio.h
+++ b/drivers/gpu/drm/xe/xe_mmio.h
@@ -20,27 +20,27 @@ int xe_mmio_init(struct xe_device *xe);
 
 static inline u8 xe_mmio_read8(struct xe_gt *gt, struct xe_reg reg)
 {
-	if (reg.reg < gt->mmio.adj_limit)
-		reg.reg += gt->mmio.adj_offset;
+	if (reg.addr < gt->mmio.adj_limit)
+		reg.addr += gt->mmio.adj_offset;
 
-	return readb(gt->mmio.regs + reg.reg);
+	return readb(gt->mmio.regs + reg.addr);
 }
 
 static inline void xe_mmio_write32(struct xe_gt *gt,
 				   struct xe_reg reg, u32 val)
 {
-	if (reg.reg < gt->mmio.adj_limit)
-		reg.reg += gt->mmio.adj_offset;
+	if (reg.addr < gt->mmio.adj_limit)
+		reg.addr += gt->mmio.adj_offset;
 
-	writel(val, gt->mmio.regs + reg.reg);
+	writel(val, gt->mmio.regs + reg.addr);
 }
 
 static inline u32 xe_mmio_read32(struct xe_gt *gt, struct xe_reg reg)
 {
-	if (reg.reg < gt->mmio.adj_limit)
-		reg.reg += gt->mmio.adj_offset;
+	if (reg.addr < gt->mmio.adj_limit)
+		reg.addr += gt->mmio.adj_offset;
 
-	return readl(gt->mmio.regs + reg.reg);
+	return readl(gt->mmio.regs + reg.addr);
 }
 
 static inline u32 xe_mmio_rmw32(struct xe_gt *gt, struct xe_reg reg, u32 clr,
@@ -58,18 +58,18 @@ static inline u32 xe_mmio_rmw32(struct xe_gt *gt, struct xe_reg reg, u32 clr,
 static inline void xe_mmio_write64(struct xe_gt *gt,
 				   struct xe_reg reg, u64 val)
 {
-	if (reg.reg < gt->mmio.adj_limit)
-		reg.reg += gt->mmio.adj_offset;
+	if (reg.addr < gt->mmio.adj_limit)
+		reg.addr += gt->mmio.adj_offset;
 
-	writeq(val, gt->mmio.regs + reg.reg);
+	writeq(val, gt->mmio.regs + reg.addr);
 }
 
 static inline u64 xe_mmio_read64(struct xe_gt *gt, struct xe_reg reg)
 {
-	if (reg.reg < gt->mmio.adj_limit)
-		reg.reg += gt->mmio.adj_offset;
+	if (reg.addr < gt->mmio.adj_limit)
+		reg.addr += gt->mmio.adj_offset;
 
-	return readq(gt->mmio.regs + reg.reg);
+	return readq(gt->mmio.regs + reg.addr);
 }
 
 static inline int xe_mmio_write32_and_verify(struct xe_gt *gt,
@@ -127,7 +127,7 @@ int xe_mmio_ioctl(struct drm_device *dev, void *data,
 static inline bool xe_mmio_in_range(const struct xe_mmio_range *range,
 				    struct xe_reg reg)
 {
-	return range && reg.reg >= range->start && reg.reg <= range->end;
+	return range && reg.addr >= range->start && reg.addr <= range->end;
 }
 
 int xe_mmio_probe_vram(struct xe_device *xe);
diff --git a/drivers/gpu/drm/xe/xe_mocs.c b/drivers/gpu/drm/xe/xe_mocs.c
index 1175dec5d90b..5698df87aba7 100644
--- a/drivers/gpu/drm/xe/xe_mocs.c
+++ b/drivers/gpu/drm/xe/xe_mocs.c
@@ -478,7 +478,7 @@ static void __init_mocs_table(struct xe_gt *gt,
 	     i < info->n_entries ? (mocs = get_entry_control(info, i)), 1 : 0;
 	     i++) {
 		struct xe_reg reg = XE_REG(addr + i * 4);
-		mocs_dbg(&gt->xe->drm, "%d 0x%x 0x%x\n", i, reg.reg, mocs);
+		mocs_dbg(&gt->xe->drm, "%d 0x%x 0x%x\n", i, reg.addr, mocs);
 		xe_mmio_write32(gt, reg, mocs);
 	}
 }
@@ -513,7 +513,7 @@ static void init_l3cc_table(struct xe_gt *gt,
 	     (l3cc = l3cc_combine(get_entry_l3cc(info, 2 * i),
 				  get_entry_l3cc(info, 2 * i + 1))), 1 : 0;
 	     i++) {
-		mocs_dbg(&gt->xe->drm, "%d 0x%x 0x%x\n", i, LNCFCMOCS(i).reg,
+		mocs_dbg(&gt->xe->drm, "%d 0x%x 0x%x\n", i, LNCFCMOCS(i).addr,
 			 l3cc);
 		xe_mmio_write32(gt, LNCFCMOCS(i), l3cc);
 	}
@@ -540,7 +540,7 @@ void xe_mocs_init(struct xe_gt *gt)
 	mocs_dbg(&gt->xe->drm, "flag:0x%x\n", flags);
 
 	if (flags & HAS_GLOBAL_MOCS)
-		__init_mocs_table(gt, &table, GLOBAL_MOCS(0).reg);
+		__init_mocs_table(gt, &table, GLOBAL_MOCS(0).addr);
 
 	/*
 	 * Initialize the L3CC table as part of mocs initialization to make
diff --git a/drivers/gpu/drm/xe/xe_pci.c b/drivers/gpu/drm/xe/xe_pci.c
index 855cf8557056..a6858fc7fe8d 100644
--- a/drivers/gpu/drm/xe/xe_pci.c
+++ b/drivers/gpu/drm/xe/xe_pci.c
@@ -442,7 +442,7 @@ static void handle_gmdid(struct xe_device *xe,
 {
 	u32 ver;
 
-	ver = peek_gmdid(xe, GMD_ID.reg);
+	ver = peek_gmdid(xe, GMD_ID.addr);
 	for (int i = 0; i < ARRAY_SIZE(graphics_ip_map); i++) {
 		if (ver == graphics_ip_map[i].ver) {
 			xe->info.graphics_verx100 = ver;
@@ -457,7 +457,7 @@ static void handle_gmdid(struct xe_device *xe,
 			ver / 100, ver % 100);
 	}
 
-	ver = peek_gmdid(xe, GMD_ID.reg + 0x380000);
+	ver = peek_gmdid(xe, GMD_ID.addr + 0x380000);
 	for (int i = 0; i < ARRAY_SIZE(media_ip_map); i++) {
 		if (ver == media_ip_map[i].ver) {
 			xe->info.media_verx100 = ver;
diff --git a/drivers/gpu/drm/xe/xe_reg_sr.c b/drivers/gpu/drm/xe/xe_reg_sr.c
index 51a40a9e532d..0312823101ad 100644
--- a/drivers/gpu/drm/xe/xe_reg_sr.c
+++ b/drivers/gpu/drm/xe/xe_reg_sr.c
@@ -93,7 +93,7 @@ static void reg_sr_inc_error(struct xe_reg_sr *sr)
 int xe_reg_sr_add(struct xe_reg_sr *sr,
 		  const struct xe_reg_sr_entry *e)
 {
-	unsigned long idx = e->reg.reg;
+	unsigned long idx = e->reg.addr;
 	struct xe_reg_sr_entry *pentry = xa_load(&sr->xa, idx);
 	int ret;
 
@@ -174,7 +174,7 @@ static void apply_one_mmio(struct xe_gt *gt, struct xe_reg_sr_entry *entry)
 	 */
 	val |= entry->set_bits;
 
-	drm_dbg(&xe->drm, "REG[0x%x] = 0x%08x", reg.reg, val);
+	drm_dbg(&xe->drm, "REG[0x%x] = 0x%08x", reg.addr, val);
 
 	if (entry->reg.mcr)
 		xe_gt_mcr_multicast_write(gt, reg_mcr, val);
@@ -239,7 +239,7 @@ void xe_reg_sr_apply_whitelist(struct xe_reg_sr *sr, u32 mmio_base,
 
 	/* And clear the rest just in case of garbage */
 	for (; slot < RING_MAX_NONPRIV_SLOTS; slot++) {
-		u32 addr = RING_NOPID(mmio_base).reg;
+		u32 addr = RING_NOPID(mmio_base).addr;
 
 		xe_mmio_write32(gt, RING_FORCE_TO_NONPRIV(mmio_base, slot), addr);
 	}
diff --git a/drivers/gpu/drm/xe/xe_ring_ops.c b/drivers/gpu/drm/xe/xe_ring_ops.c
index 733ed8a30c2e..74c1b5dfbaee 100644
--- a/drivers/gpu/drm/xe/xe_ring_ops.c
+++ b/drivers/gpu/drm/xe/xe_ring_ops.c
@@ -48,7 +48,7 @@ static int emit_aux_table_inv(struct xe_gt *gt, struct xe_reg reg,
 			      u32 *dw, int i)
 {
 	dw[i++] = MI_LOAD_REGISTER_IMM(1) | MI_LRI_MMIO_REMAP_EN;
-	dw[i++] = reg.reg + gt->mmio.adj_offset;
+	dw[i++] = reg.addr + gt->mmio.adj_offset;
 	dw[i++] = AUX_INV;
 	dw[i++] = MI_NOOP;
 
diff --git a/drivers/gpu/drm/xe/xe_rtp.c b/drivers/gpu/drm/xe/xe_rtp.c
index f2a0e8eb4936..0c6a23e14a71 100644
--- a/drivers/gpu/drm/xe/xe_rtp.c
+++ b/drivers/gpu/drm/xe/xe_rtp.c
@@ -101,7 +101,7 @@ static void rtp_add_sr_entry(const struct xe_rtp_action *action,
 		.read_mask = action->read_mask,
 	};
 
-	sr_entry.reg.reg += mmio_base;
+	sr_entry.reg.addr += mmio_base;
 	xe_reg_sr_add(sr, &sr_entry);
 }
 
diff --git a/drivers/gpu/drm/xe/xe_wopcm.c b/drivers/gpu/drm/xe/xe_wopcm.c
index 11eea970c207..35fde8965bca 100644
--- a/drivers/gpu/drm/xe/xe_wopcm.c
+++ b/drivers/gpu/drm/xe/xe_wopcm.c
@@ -170,10 +170,10 @@ static int __wopcm_init_regs(struct xe_device *xe, struct xe_gt *gt,
 err_out:
 	drm_notice(&xe->drm, "Failed to init uC WOPCM registers!\n");
 	drm_notice(&xe->drm, "%s(%#x)=%#x\n", "DMA_GUC_WOPCM_OFFSET",
-		   DMA_GUC_WOPCM_OFFSET.reg,
+		   DMA_GUC_WOPCM_OFFSET.addr,
 		   xe_mmio_read32(gt, DMA_GUC_WOPCM_OFFSET));
 	drm_notice(&xe->drm, "%s(%#x)=%#x\n", "GUC_WOPCM_SIZE",
-		   GUC_WOPCM_SIZE.reg,
+		   GUC_WOPCM_SIZE.addr,
 		   xe_mmio_read32(gt, GUC_WOPCM_SIZE));
 
 	return err;
-- 
2.40.1



* [Intel-xe] [PATCH v2 4/4] drm/xe: Fix indent in xe_hw_engine_print_state()
  2023-05-08 22:53 [Intel-xe] [PATCH v2 0/4] Convert xe_mmio to struct xe_reg Lucas De Marchi
                   ` (2 preceding siblings ...)
  2023-05-08 22:53 ` [Intel-xe] [PATCH v2 3/4] drm/xe: Rename reg field to addr Lucas De Marchi
@ 2023-05-08 22:53 ` Lucas De Marchi
  2023-05-08 22:56 ` [Intel-xe] ✓ CI.Patch_applied: success for Convert xe_mmio to struct xe_reg (rev2) Patchwork
  2023-05-09 20:01 ` [Intel-xe] [PATCH v2 0/4] Convert xe_mmio to struct xe_reg Lucas De Marchi
  5 siblings, 0 replies; 12+ messages in thread
From: Lucas De Marchi @ 2023-05-08 22:53 UTC (permalink / raw)
  To: intel-xe; +Cc: Lucas De Marchi, Rodrigo Vivi

Fix the indentation to align continuation lines with the open
parenthesis, following the kernel coding style.
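
A before/after sketch of the rule being applied, taken from the pattern
in the diff below:

	/* before: continuation line indented with a bare tab */
	drm_printf(p, "\tRING_CTL: 0x%08x\n",
		hw_engine_mmio_read32(hwe, RING_CTL(0)));

	/* after: continuation aligned with the open parenthesis */
	drm_printf(p, "\tRING_CTL: 0x%08x\n",
		   hw_engine_mmio_read32(hwe, RING_CTL(0)));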

Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/xe/xe_hw_engine.c | 66 +++++++++++++++----------------
 1 file changed, 33 insertions(+), 33 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_hw_engine.c b/drivers/gpu/drm/xe/xe_hw_engine.c
index 696b9d949163..751f6c3bba17 100644
--- a/drivers/gpu/drm/xe/xe_hw_engine.c
+++ b/drivers/gpu/drm/xe/xe_hw_engine.c
@@ -580,70 +580,70 @@ void xe_hw_engine_print_state(struct xe_hw_engine *hwe, struct drm_printer *p)
 		return;
 
 	drm_printf(p, "%s (physical), logical instance=%d\n", hwe->name,
-		hwe->logical_instance);
+		   hwe->logical_instance);
 	drm_printf(p, "\tForcewake: domain 0x%x, ref %d\n",
-		hwe->domain,
-		xe_force_wake_ref(gt_to_fw(hwe->gt), hwe->domain));
+		   hwe->domain,
+		   xe_force_wake_ref(gt_to_fw(hwe->gt), hwe->domain));
 	drm_printf(p, "\tMMIO base: 0x%08x\n", hwe->mmio_base);
 
 	drm_printf(p, "\tHWSTAM: 0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_HWSTAM(0)));
+		   hw_engine_mmio_read32(hwe, RING_HWSTAM(0)));
 	drm_printf(p, "\tRING_HWS_PGA: 0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_HWS_PGA(0)));
+		   hw_engine_mmio_read32(hwe, RING_HWS_PGA(0)));
 
 	drm_printf(p, "\tRING_EXECLIST_STATUS_LO: 0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_EXECLIST_STATUS_LO(0)));
+		   hw_engine_mmio_read32(hwe, RING_EXECLIST_STATUS_LO(0)));
 	drm_printf(p, "\tRING_EXECLIST_STATUS_HI: 0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_EXECLIST_STATUS_HI(0)));
+		   hw_engine_mmio_read32(hwe, RING_EXECLIST_STATUS_HI(0)));
 	drm_printf(p, "\tRING_EXECLIST_SQ_CONTENTS_LO: 0x%08x\n",
-		hw_engine_mmio_read32(hwe,
+		   hw_engine_mmio_read32(hwe,
 					 RING_EXECLIST_SQ_CONTENTS_LO(0)));
 	drm_printf(p, "\tRING_EXECLIST_SQ_CONTENTS_HI: 0x%08x\n",
-		hw_engine_mmio_read32(hwe,
+		   hw_engine_mmio_read32(hwe,
 					 RING_EXECLIST_SQ_CONTENTS_HI(0)));
 	drm_printf(p, "\tRING_EXECLIST_CONTROL: 0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_EXECLIST_CONTROL(0)));
+		   hw_engine_mmio_read32(hwe, RING_EXECLIST_CONTROL(0)));
 
 	drm_printf(p, "\tRING_START: 0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_START(0)));
+		   hw_engine_mmio_read32(hwe, RING_START(0)));
 	drm_printf(p, "\tRING_HEAD:  0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_HEAD(0)) & HEAD_ADDR);
+		   hw_engine_mmio_read32(hwe, RING_HEAD(0)) & HEAD_ADDR);
 	drm_printf(p, "\tRING_TAIL:  0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_TAIL(0)) & TAIL_ADDR);
+		   hw_engine_mmio_read32(hwe, RING_TAIL(0)) & TAIL_ADDR);
 	drm_printf(p, "\tRING_CTL: 0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_CTL(0)));
+		   hw_engine_mmio_read32(hwe, RING_CTL(0)));
 	drm_printf(p, "\tRING_MODE: 0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_MI_MODE(0)));
+		   hw_engine_mmio_read32(hwe, RING_MI_MODE(0)));
 	drm_printf(p, "\tRING_MODE_GEN7: 0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_MODE(0)));
+		   hw_engine_mmio_read32(hwe, RING_MODE(0)));
 
 	drm_printf(p, "\tRING_IMR:   0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_IMR(0)));
+		   hw_engine_mmio_read32(hwe, RING_IMR(0)));
 	drm_printf(p, "\tRING_ESR:   0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_ESR(0)));
+		   hw_engine_mmio_read32(hwe, RING_ESR(0)));
 	drm_printf(p, "\tRING_EMR:   0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_EMR(0)));
+		   hw_engine_mmio_read32(hwe, RING_EMR(0)));
 	drm_printf(p, "\tRING_EIR:   0x%08x\n",
-		hw_engine_mmio_read32(hwe, RING_EIR(0)));
-
-        drm_printf(p, "\tACTHD:  0x%08x_%08x\n",
-		hw_engine_mmio_read32(hwe, RING_ACTHD_UDW(0)),
-		hw_engine_mmio_read32(hwe, RING_ACTHD(0)));
-        drm_printf(p, "\tBBADDR: 0x%08x_%08x\n",
-		hw_engine_mmio_read32(hwe, RING_BBADDR_UDW(0)),
-		hw_engine_mmio_read32(hwe, RING_BBADDR(0)));
-        drm_printf(p, "\tDMA_FADDR: 0x%08x_%08x\n",
-		hw_engine_mmio_read32(hwe, RING_DMA_FADD_UDW(0)),
-		hw_engine_mmio_read32(hwe, RING_DMA_FADD(0)));
+		   hw_engine_mmio_read32(hwe, RING_EIR(0)));
+
+	drm_printf(p, "\tACTHD:  0x%08x_%08x\n",
+		   hw_engine_mmio_read32(hwe, RING_ACTHD_UDW(0)),
+		   hw_engine_mmio_read32(hwe, RING_ACTHD(0)));
+	drm_printf(p, "\tBBADDR: 0x%08x_%08x\n",
+		   hw_engine_mmio_read32(hwe, RING_BBADDR_UDW(0)),
+		   hw_engine_mmio_read32(hwe, RING_BBADDR(0)));
+	drm_printf(p, "\tDMA_FADDR: 0x%08x_%08x\n",
+		   hw_engine_mmio_read32(hwe, RING_DMA_FADD_UDW(0)),
+		   hw_engine_mmio_read32(hwe, RING_DMA_FADD(0)));
 
 	drm_printf(p, "\tIPEIR: 0x%08x\n",
-		hw_engine_mmio_read32(hwe, IPEIR(0)));
+		   hw_engine_mmio_read32(hwe, IPEIR(0)));
 	drm_printf(p, "\tIPEHR: 0x%08x\n\n",
-		hw_engine_mmio_read32(hwe, IPEHR(0)));
+		   hw_engine_mmio_read32(hwe, IPEHR(0)));
 
 	if (hwe->class == XE_ENGINE_CLASS_COMPUTE)
 		drm_printf(p, "\tRCU_MODE: 0x%08x\n",
-			xe_mmio_read32(hwe->gt, RCU_MODE));
+			   xe_mmio_read32(hwe->gt, RCU_MODE));
 
 }
 
-- 
2.40.1



* [Intel-xe] ✓ CI.Patch_applied: success for Convert xe_mmio to struct xe_reg (rev2)
  2023-05-08 22:53 [Intel-xe] [PATCH v2 0/4] Convert xe_mmio to struct xe_reg Lucas De Marchi
                   ` (3 preceding siblings ...)
  2023-05-08 22:53 ` [Intel-xe] [PATCH v2 4/4] drm/xe: Fix indent in xe_hw_engine_print_state() Lucas De Marchi
@ 2023-05-08 22:56 ` Patchwork
  2023-05-09 20:01 ` [Intel-xe] [PATCH v2 0/4] Convert xe_mmio to struct xe_reg Lucas De Marchi
  5 siblings, 0 replies; 12+ messages in thread
From: Patchwork @ 2023-05-08 22:56 UTC (permalink / raw)
  To: Lucas De Marchi; +Cc: intel-xe

== Series Details ==

Series: Convert xe_mmio to struct xe_reg (rev2)
URL   : https://patchwork.freedesktop.org/series/117138/
State : success

== Summary ==

=== Applying kernel patches on branch 'drm-xe-next' with base: ===
Base commit: 6d1561872 drm/xe: Print GT info on TLB inv failure
=== git am output follows ===
Applying: drm/xe/mmio: Use struct xe_reg
Applying: fixup! drm/xe/display: Implement display support
Applying: drm/xe: Rename reg field to addr
Applying: drm/xe: Fix indent in xe_hw_engine_print_state()




* Re: [Intel-xe] [PATCH v2 1/4] drm/xe/mmio: Use struct xe_reg
  2023-05-08 22:53 ` [Intel-xe] [PATCH v2 1/4] drm/xe/mmio: Use " Lucas De Marchi
@ 2023-05-09 15:24   ` Rodrigo Vivi
  0 siblings, 0 replies; 12+ messages in thread
From: Rodrigo Vivi @ 2023-05-09 15:24 UTC (permalink / raw)
  To: Lucas De Marchi; +Cc: intel-xe

On Mon, May 08, 2023 at 03:53:19PM -0700, Lucas De Marchi wrote:
> Convert all the callers to deal with xe_mmio_*() using struct xe_reg
> instead of plain u32. In a few places there was also a rename
> s/reg/reg_val/ when dealing with the value returned so it doesn't get
> mixed up with the register address.
> 
> Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
> Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

you can convert this to a

Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

> ---
>  drivers/gpu/drm/xe/xe_device.c           |   2 +-
>  drivers/gpu/drm/xe/xe_execlist.c         |  18 +--
>  drivers/gpu/drm/xe/xe_force_wake.c       |  25 ++--
>  drivers/gpu/drm/xe/xe_force_wake_types.h |   6 +-
>  drivers/gpu/drm/xe/xe_ggtt.c             |   6 +-
>  drivers/gpu/drm/xe/xe_gt.c               |   4 +-
>  drivers/gpu/drm/xe/xe_gt_clock.c         |   6 +-
>  drivers/gpu/drm/xe/xe_gt_mcr.c           |  37 +++---
>  drivers/gpu/drm/xe/xe_gt_topology.c      |  18 +--
>  drivers/gpu/drm/xe/xe_guc.c              |  61 +++++-----
>  drivers/gpu/drm/xe/xe_guc_ads.c          |   3 +-
>  drivers/gpu/drm/xe/xe_guc_pc.c           |  32 +++---
>  drivers/gpu/drm/xe/xe_guc_types.h        |   3 +-
>  drivers/gpu/drm/xe/xe_huc.c              |   4 +-
>  drivers/gpu/drm/xe/xe_hw_engine.c        |  85 +++++++-------
>  drivers/gpu/drm/xe/xe_irq.c              | 138 +++++++++++------------
>  drivers/gpu/drm/xe/xe_mmio.c             |  31 +++--
>  drivers/gpu/drm/xe/xe_mmio.h             |  55 ++++-----
>  drivers/gpu/drm/xe/xe_mocs.c             |   7 +-
>  drivers/gpu/drm/xe/xe_pat.c              |  14 ++-
>  drivers/gpu/drm/xe/xe_pcode.c            |  16 +--
>  drivers/gpu/drm/xe/xe_reg_sr.c           |  14 ++-
>  drivers/gpu/drm/xe/xe_ring_ops.c         |  11 +-
>  drivers/gpu/drm/xe/xe_ttm_stolen_mgr.c   |   4 +-
>  drivers/gpu/drm/xe/xe_uc_fw.c            |  16 +--
>  drivers/gpu/drm/xe/xe_wopcm.c            |  12 +-
>  26 files changed, 329 insertions(+), 299 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> index 00f1d9e386f1..342e3362b75f 100644
> --- a/drivers/gpu/drm/xe/xe_device.c
> +++ b/drivers/gpu/drm/xe/xe_device.c
> @@ -393,7 +393,7 @@ void xe_device_wmb(struct xe_device *xe)
>  
>  	wmb();
>  	if (IS_DGFX(xe))
> -		xe_mmio_write32(gt, SOFTWARE_FLAGS_SPR33.reg, 0);
> +		xe_mmio_write32(gt, SOFTWARE_FLAGS_SPR33, 0);
>  }
>  
>  u32 xe_device_ccs_bytes(struct xe_device *xe, u64 size)
> diff --git a/drivers/gpu/drm/xe/xe_execlist.c b/drivers/gpu/drm/xe/xe_execlist.c
> index de4f0044b211..5d2d26e361b9 100644
> --- a/drivers/gpu/drm/xe/xe_execlist.c
> +++ b/drivers/gpu/drm/xe/xe_execlist.c
> @@ -60,7 +60,7 @@ static void __start_lrc(struct xe_hw_engine *hwe, struct xe_lrc *lrc,
>  	}
>  
>  	if (hwe->class == XE_ENGINE_CLASS_COMPUTE)
> -		xe_mmio_write32(hwe->gt, RCU_MODE.reg,
> +		xe_mmio_write32(hwe->gt, RCU_MODE,
>  				_MASKED_BIT_ENABLE(RCU_MODE_CCS_ENABLE));
>  
>  	xe_lrc_write_ctx_reg(lrc, CTX_RING_TAIL, lrc->ring.tail);
> @@ -78,17 +78,17 @@ static void __start_lrc(struct xe_hw_engine *hwe, struct xe_lrc *lrc,
>  	 */
>  	wmb();
>  
> -	xe_mmio_write32(gt, RING_HWS_PGA(hwe->mmio_base).reg,
> +	xe_mmio_write32(gt, RING_HWS_PGA(hwe->mmio_base),
>  			xe_bo_ggtt_addr(hwe->hwsp));
> -	xe_mmio_read32(gt, RING_HWS_PGA(hwe->mmio_base).reg);
> -	xe_mmio_write32(gt, RING_MODE(hwe->mmio_base).reg,
> +	xe_mmio_read32(gt, RING_HWS_PGA(hwe->mmio_base));
> +	xe_mmio_write32(gt, RING_MODE(hwe->mmio_base),
>  			_MASKED_BIT_ENABLE(GFX_DISABLE_LEGACY_MODE));
>  
> -	xe_mmio_write32(gt, RING_EXECLIST_SQ_CONTENTS_LO(hwe->mmio_base).reg,
> +	xe_mmio_write32(gt, RING_EXECLIST_SQ_CONTENTS_LO(hwe->mmio_base),
>  			lower_32_bits(lrc_desc));
> -	xe_mmio_write32(gt, RING_EXECLIST_SQ_CONTENTS_HI(hwe->mmio_base).reg,
> +	xe_mmio_write32(gt, RING_EXECLIST_SQ_CONTENTS_HI(hwe->mmio_base),
>  			upper_32_bits(lrc_desc));
> -	xe_mmio_write32(gt, RING_EXECLIST_CONTROL(hwe->mmio_base).reg,
> +	xe_mmio_write32(gt, RING_EXECLIST_CONTROL(hwe->mmio_base),
>  			EL_CTRL_LOAD);
>  }
>  
> @@ -173,8 +173,8 @@ static u64 read_execlist_status(struct xe_hw_engine *hwe)
>  	struct xe_gt *gt = hwe->gt;
>  	u32 hi, lo;
>  
> -	lo = xe_mmio_read32(gt, RING_EXECLIST_STATUS_LO(hwe->mmio_base).reg);
> -	hi = xe_mmio_read32(gt, RING_EXECLIST_STATUS_HI(hwe->mmio_base).reg);
> +	lo = xe_mmio_read32(gt, RING_EXECLIST_STATUS_LO(hwe->mmio_base));
> +	hi = xe_mmio_read32(gt, RING_EXECLIST_STATUS_HI(hwe->mmio_base));
>  
>  	printk(KERN_INFO "EXECLIST_STATUS %d:%d = 0x%08x %08x\n", hwe->class,
>  	       hwe->instance, hi, lo);
> diff --git a/drivers/gpu/drm/xe/xe_force_wake.c b/drivers/gpu/drm/xe/xe_force_wake.c
> index 53d73f36a121..363b81c3d746 100644
> --- a/drivers/gpu/drm/xe/xe_force_wake.c
> +++ b/drivers/gpu/drm/xe/xe_force_wake.c
> @@ -8,6 +8,7 @@
>  #include <drm/drm_util.h>
>  
>  #include "regs/xe_gt_regs.h"
> +#include "regs/xe_reg_defs.h"
>  #include "xe_gt.h"
>  #include "xe_mmio.h"
>  
> @@ -27,7 +28,7 @@ fw_to_xe(struct xe_force_wake *fw)
>  
>  static void domain_init(struct xe_force_wake_domain *domain,
>  			enum xe_force_wake_domain_id id,
> -			u32 reg, u32 ack, u32 val, u32 mask)
> +			struct xe_reg reg, struct xe_reg ack, u32 val, u32 mask)
>  {
>  	domain->id = id;
>  	domain->reg_ctl = reg;
> @@ -49,14 +50,14 @@ void xe_force_wake_init_gt(struct xe_gt *gt, struct xe_force_wake *fw)
>  	if (xe->info.graphics_verx100 >= 1270) {
>  		domain_init(&fw->domains[XE_FW_DOMAIN_ID_GT],
>  			    XE_FW_DOMAIN_ID_GT,
> -			    FORCEWAKE_GT.reg,
> -			    FORCEWAKE_ACK_GT_MTL.reg,
> +			    FORCEWAKE_GT,
> +			    FORCEWAKE_ACK_GT_MTL,
>  			    BIT(0), BIT(16));
>  	} else {
>  		domain_init(&fw->domains[XE_FW_DOMAIN_ID_GT],
>  			    XE_FW_DOMAIN_ID_GT,
> -			    FORCEWAKE_GT.reg,
> -			    FORCEWAKE_ACK_GT.reg,
> +			    FORCEWAKE_GT,
> +			    FORCEWAKE_ACK_GT,
>  			    BIT(0), BIT(16));
>  	}
>  }
> @@ -71,8 +72,8 @@ void xe_force_wake_init_engines(struct xe_gt *gt, struct xe_force_wake *fw)
>  	if (!xe_gt_is_media_type(gt))
>  		domain_init(&fw->domains[XE_FW_DOMAIN_ID_RENDER],
>  			    XE_FW_DOMAIN_ID_RENDER,
> -			    FORCEWAKE_RENDER.reg,
> -			    FORCEWAKE_ACK_RENDER.reg,
> +			    FORCEWAKE_RENDER,
> +			    FORCEWAKE_ACK_RENDER,
>  			    BIT(0), BIT(16));
>  
>  	for (i = XE_HW_ENGINE_VCS0, j = 0; i <= XE_HW_ENGINE_VCS7; ++i, ++j) {
> @@ -81,8 +82,8 @@ void xe_force_wake_init_engines(struct xe_gt *gt, struct xe_force_wake *fw)
>  
>  		domain_init(&fw->domains[XE_FW_DOMAIN_ID_MEDIA_VDBOX0 + j],
>  			    XE_FW_DOMAIN_ID_MEDIA_VDBOX0 + j,
> -			    FORCEWAKE_MEDIA_VDBOX(j).reg,
> -			    FORCEWAKE_ACK_MEDIA_VDBOX(j).reg,
> +			    FORCEWAKE_MEDIA_VDBOX(j),
> +			    FORCEWAKE_ACK_MEDIA_VDBOX(j),
>  			    BIT(0), BIT(16));
>  	}
>  
> @@ -92,8 +93,8 @@ void xe_force_wake_init_engines(struct xe_gt *gt, struct xe_force_wake *fw)
>  
>  		domain_init(&fw->domains[XE_FW_DOMAIN_ID_MEDIA_VEBOX0 + j],
>  			    XE_FW_DOMAIN_ID_MEDIA_VEBOX0 + j,
> -			    FORCEWAKE_MEDIA_VEBOX(j).reg,
> -			    FORCEWAKE_ACK_MEDIA_VEBOX(j).reg,
> +			    FORCEWAKE_MEDIA_VEBOX(j),
> +			    FORCEWAKE_ACK_MEDIA_VEBOX(j),
>  			    BIT(0), BIT(16));
>  	}
>  }
> @@ -128,7 +129,7 @@ static int domain_sleep_wait(struct xe_gt *gt,
>  	for (tmp__ = (mask__); tmp__; tmp__ &= ~BIT(ffs(tmp__) - 1)) \
>  		for_each_if((domain__ = ((fw__)->domains + \
>  					 (ffs(tmp__) - 1))) && \
> -					 domain__->reg_ctl)
> +					 domain__->reg_ctl.reg)
>  
>  int xe_force_wake_get(struct xe_force_wake *fw,
>  		      enum xe_force_wake_domains domains)
> diff --git a/drivers/gpu/drm/xe/xe_force_wake_types.h b/drivers/gpu/drm/xe/xe_force_wake_types.h
> index 208dd629d7b1..cb782696855b 100644
> --- a/drivers/gpu/drm/xe/xe_force_wake_types.h
> +++ b/drivers/gpu/drm/xe/xe_force_wake_types.h
> @@ -9,6 +9,8 @@
>  #include <linux/mutex.h>
>  #include <linux/types.h>
>  
> +#include "regs/xe_reg_defs.h"
> +
>  enum xe_force_wake_domain_id {
>  	XE_FW_DOMAIN_ID_GT = 0,
>  	XE_FW_DOMAIN_ID_RENDER,
> @@ -56,9 +58,9 @@ struct xe_force_wake_domain {
>  	/** @id: domain force wake id */
>  	enum xe_force_wake_domain_id id;
>  	/** @reg_ctl: domain wake control register address */
> -	u32 reg_ctl;
> +	struct xe_reg reg_ctl;
>  	/** @reg_ack: domain ack register address */
> -	u32 reg_ack;
> +	struct xe_reg reg_ack;
>  	/** @val: domain wake write value */
>  	u32 val;
>  	/** @mask: domain mask */
> diff --git a/drivers/gpu/drm/xe/xe_ggtt.c b/drivers/gpu/drm/xe/xe_ggtt.c
> index 9c08031c9350..546240261e0a 100644
> --- a/drivers/gpu/drm/xe/xe_ggtt.c
> +++ b/drivers/gpu/drm/xe/xe_ggtt.c
> @@ -207,12 +207,12 @@ void xe_ggtt_invalidate(struct xe_gt *gt)
>  		struct xe_device *xe = gt_to_xe(gt);
>  
>  		if (xe->info.platform == XE_PVC) {
> -			xe_mmio_write32(gt, PVC_GUC_TLB_INV_DESC1.reg,
> +			xe_mmio_write32(gt, PVC_GUC_TLB_INV_DESC1,
>  					PVC_GUC_TLB_INV_DESC1_INVALIDATE);
> -			xe_mmio_write32(gt, PVC_GUC_TLB_INV_DESC0.reg,
> +			xe_mmio_write32(gt, PVC_GUC_TLB_INV_DESC0,
>  					PVC_GUC_TLB_INV_DESC0_VALID);
>  		} else
> -			xe_mmio_write32(gt, GUC_TLB_INV_CR.reg,
> +			xe_mmio_write32(gt, GUC_TLB_INV_CR,
>  					GUC_TLB_INV_CR_INVALIDATE);
>  	}
>  }
> diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
> index 3afca3dd9657..cbe063a40aca 100644
> --- a/drivers/gpu/drm/xe/xe_gt.c
> +++ b/drivers/gpu/drm/xe/xe_gt.c
> @@ -544,8 +544,8 @@ static int do_gt_reset(struct xe_gt *gt)
>  	struct xe_device *xe = gt_to_xe(gt);
>  	int err;
>  
> -	xe_mmio_write32(gt, GDRST.reg, GRDOM_FULL);
> -	err = xe_mmio_wait32(gt, GDRST.reg, 0, GRDOM_FULL, 5000,
> +	xe_mmio_write32(gt, GDRST, GRDOM_FULL);
> +	err = xe_mmio_wait32(gt, GDRST, 0, GRDOM_FULL, 5000,
>  			     NULL, false);
>  	if (err)
>  		drm_err(&xe->drm,
> diff --git a/drivers/gpu/drm/xe/xe_gt_clock.c b/drivers/gpu/drm/xe/xe_gt_clock.c
> index 49625d49bdcc..7cf11078ff57 100644
> --- a/drivers/gpu/drm/xe/xe_gt_clock.c
> +++ b/drivers/gpu/drm/xe/xe_gt_clock.c
> @@ -14,7 +14,7 @@
>  
>  static u32 read_reference_ts_freq(struct xe_gt *gt)
>  {
> -	u32 ts_override = xe_mmio_read32(gt, TIMESTAMP_OVERRIDE.reg);
> +	u32 ts_override = xe_mmio_read32(gt, TIMESTAMP_OVERRIDE);
>  	u32 base_freq, frac_freq;
>  
>  	base_freq = REG_FIELD_GET(TIMESTAMP_OVERRIDE_US_COUNTER_DIVIDER_MASK,
> @@ -54,7 +54,7 @@ static u32 get_crystal_clock_freq(u32 rpm_config_reg)
>  
>  int xe_gt_clock_init(struct xe_gt *gt)
>  {
> -	u32 ctc_reg = xe_mmio_read32(gt, CTC_MODE.reg);
> +	u32 ctc_reg = xe_mmio_read32(gt, CTC_MODE);
>  	u32 freq = 0;
>  
>  	/* Assuming gen11+ so assert this assumption is correct */
> @@ -63,7 +63,7 @@ int xe_gt_clock_init(struct xe_gt *gt)
>  	if (ctc_reg & CTC_SOURCE_DIVIDE_LOGIC) {
>  		freq = read_reference_ts_freq(gt);
>  	} else {
> -		u32 c0 = xe_mmio_read32(gt, RPM_CONFIG0.reg);
> +		u32 c0 = xe_mmio_read32(gt, RPM_CONFIG0);
>  
>  		freq = get_crystal_clock_freq(c0);
>  
> diff --git a/drivers/gpu/drm/xe/xe_gt_mcr.c b/drivers/gpu/drm/xe/xe_gt_mcr.c
> index 125c63bdc9b5..c6b9e9869fee 100644
> --- a/drivers/gpu/drm/xe/xe_gt_mcr.c
> +++ b/drivers/gpu/drm/xe/xe_gt_mcr.c
> @@ -40,6 +40,8 @@
>   * non-terminated instance.
>   */
>  
> +#define STEER_SEMAPHORE		XE_REG(0xFD0)
> +
>  static inline struct xe_reg to_xe_reg(struct xe_reg_mcr reg_mcr)
>  {
>  	return reg_mcr.__reg;
> @@ -183,9 +185,9 @@ static void init_steering_l3bank(struct xe_gt *gt)
>  {
>  	if (GRAPHICS_VERx100(gt_to_xe(gt)) >= 1270) {
>  		u32 mslice_mask = REG_FIELD_GET(MEML3_EN_MASK,
> -						xe_mmio_read32(gt, MIRROR_FUSE3.reg));
> +						xe_mmio_read32(gt, MIRROR_FUSE3));
>  		u32 bank_mask = REG_FIELD_GET(GT_L3_EXC_MASK,
> -					      xe_mmio_read32(gt, XEHP_FUSE4.reg));
> +					      xe_mmio_read32(gt, XEHP_FUSE4));
>  
>  		/*
>  		 * Group selects mslice, instance selects bank within mslice.
> @@ -196,7 +198,7 @@ static void init_steering_l3bank(struct xe_gt *gt)
>  			bank_mask & BIT(0) ? 0 : 2;
>  	} else if (gt_to_xe(gt)->info.platform == XE_DG2) {
>  		u32 mslice_mask = REG_FIELD_GET(MEML3_EN_MASK,
> -						xe_mmio_read32(gt, MIRROR_FUSE3.reg));
> +						xe_mmio_read32(gt, MIRROR_FUSE3));
>  		u32 bank = __ffs(mslice_mask) * 8;
>  
>  		/*
> @@ -208,7 +210,7 @@ static void init_steering_l3bank(struct xe_gt *gt)
>  		gt->steering[L3BANK].instance_target = bank & 0x3;
>  	} else {
>  		u32 fuse = REG_FIELD_GET(L3BANK_MASK,
> -					 ~xe_mmio_read32(gt, MIRROR_FUSE3.reg));
> +					 ~xe_mmio_read32(gt, MIRROR_FUSE3));
>  
>  		gt->steering[L3BANK].group_target = 0;	/* unused */
>  		gt->steering[L3BANK].instance_target = __ffs(fuse);
> @@ -218,7 +220,7 @@ static void init_steering_l3bank(struct xe_gt *gt)
>  static void init_steering_mslice(struct xe_gt *gt)
>  {
>  	u32 mask = REG_FIELD_GET(MEML3_EN_MASK,
> -				 xe_mmio_read32(gt, MIRROR_FUSE3.reg));
> +				 xe_mmio_read32(gt, MIRROR_FUSE3));
>  
>  	/*
>  	 * mslice registers are valid (not terminated) if either the meml3
> @@ -337,8 +339,8 @@ void xe_gt_mcr_set_implicit_defaults(struct xe_gt *gt)
>  		u32 steer_val = REG_FIELD_PREP(MCR_SLICE_MASK, 0) |
>  			REG_FIELD_PREP(MCR_SUBSLICE_MASK, 2);
>  
> -		xe_mmio_write32(gt, MCFG_MCR_SELECTOR.reg, steer_val);
> -		xe_mmio_write32(gt, SF_MCR_SELECTOR.reg, steer_val);
> +		xe_mmio_write32(gt, MCFG_MCR_SELECTOR, steer_val);
> +		xe_mmio_write32(gt, SF_MCR_SELECTOR, steer_val);
>  		/*
>  		 * For GAM registers, all reads should be directed to instance 1
>  		 * (unicast reads against other instances are not allowed),
> @@ -376,7 +378,7 @@ static bool xe_gt_mcr_get_nonterminated_steering(struct xe_gt *gt,
>  			continue;
>  
>  		for (int i = 0; gt->steering[type].ranges[i].end > 0; i++) {
> -			if (xe_mmio_in_range(&gt->steering[type].ranges[i], reg.reg)) {
> +			if (xe_mmio_in_range(&gt->steering[type].ranges[i], reg)) {
>  				*group = gt->steering[type].group_target;
>  				*instance = gt->steering[type].instance_target;
>  				return true;
> @@ -387,7 +389,7 @@ static bool xe_gt_mcr_get_nonterminated_steering(struct xe_gt *gt,
>  	implicit_ranges = gt->steering[IMPLICIT_STEERING].ranges;
>  	if (implicit_ranges)
>  		for (int i = 0; implicit_ranges[i].end > 0; i++)
> -			if (xe_mmio_in_range(&implicit_ranges[i], reg.reg))
> +			if (xe_mmio_in_range(&implicit_ranges[i], reg))
>  				return false;
>  
>  	/*
> @@ -403,8 +405,6 @@ static bool xe_gt_mcr_get_nonterminated_steering(struct xe_gt *gt,
>  	return true;
>  }
>  
> -#define STEER_SEMAPHORE		0xFD0
> -
>  /*
>   * Obtain exclusive access to MCR steering.  On MTL and beyond we also need
>   * to synchronize with external clients (e.g., firmware), so a semaphore
> @@ -446,16 +446,17 @@ static u32 rw_with_mcr_steering(struct xe_gt *gt, struct xe_reg_mcr reg_mcr,
>  				u8 rw_flag, int group, int instance, u32 value)
>  {
>  	const struct xe_reg reg = to_xe_reg(reg_mcr);
> -	u32 steer_reg, steer_val, val = 0;
> +	struct xe_reg steer_reg;
> +	u32 steer_val, val = 0;
>  
>  	lockdep_assert_held(&gt->mcr_lock);
>  
>  	if (GRAPHICS_VERx100(gt_to_xe(gt)) >= 1270) {
> -		steer_reg = MTL_MCR_SELECTOR.reg;
> +		steer_reg = MTL_MCR_SELECTOR;
>  		steer_val = REG_FIELD_PREP(MTL_MCR_GROUPID, group) |
>  			REG_FIELD_PREP(MTL_MCR_INSTANCEID, instance);
>  	} else {
> -		steer_reg = MCR_SELECTOR.reg;
> +		steer_reg = MCR_SELECTOR;
>  		steer_val = REG_FIELD_PREP(MCR_SLICE_MASK, group) |
>  			REG_FIELD_PREP(MCR_SUBSLICE_MASK, instance);
>  	}
> @@ -480,9 +481,9 @@ static u32 rw_with_mcr_steering(struct xe_gt *gt, struct xe_reg_mcr reg_mcr,
>  	xe_mmio_write32(gt, steer_reg, steer_val);
>  
>  	if (rw_flag == MCR_OP_READ)
> -		val = xe_mmio_read32(gt, reg.reg);
> +		val = xe_mmio_read32(gt, reg);
>  	else
> -		xe_mmio_write32(gt, reg.reg, value);
> +		xe_mmio_write32(gt, reg, value);
>  
>  	/*
>  	 * If we turned off the multicast bit (during a write) we're required
> @@ -524,7 +525,7 @@ u32 xe_gt_mcr_unicast_read_any(struct xe_gt *gt, struct xe_reg_mcr reg_mcr)
>  					   group, instance, 0);
>  		mcr_unlock(gt);
>  	} else {
> -		val = xe_mmio_read32(gt, reg.reg);
> +		val = xe_mmio_read32(gt, reg);
>  	}
>  
>  	return val;
> @@ -591,7 +592,7 @@ void xe_gt_mcr_multicast_write(struct xe_gt *gt, struct xe_reg_mcr reg_mcr,
>  	 * to touch the steering register.
>  	 */
>  	mcr_lock(gt);
> -	xe_mmio_write32(gt, reg.reg, value);
> +	xe_mmio_write32(gt, reg, value);
>  	mcr_unlock(gt);
>  }
>  
> diff --git a/drivers/gpu/drm/xe/xe_gt_topology.c b/drivers/gpu/drm/xe/xe_gt_topology.c
> index 14cf135fd648..7c3e347e4d74 100644
> --- a/drivers/gpu/drm/xe/xe_gt_topology.c
> +++ b/drivers/gpu/drm/xe/xe_gt_topology.c
> @@ -26,7 +26,7 @@ load_dss_mask(struct xe_gt *gt, xe_dss_mask_t mask, int numregs, ...)
>  
>  	va_start(argp, numregs);
>  	for (i = 0; i < numregs; i++)
> -		fuse_val[i] = xe_mmio_read32(gt, va_arg(argp, u32));
> +		fuse_val[i] = xe_mmio_read32(gt, va_arg(argp, struct xe_reg));
>  	va_end(argp);
>  
>  	bitmap_from_arr32(mask, fuse_val, numregs * 32);
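
Passing struct xe_reg through va_arg() works here because it is a
small struct handed around by value, and fetching a struct with
va_arg() is perfectly legal C. The flip side is that varargs give up
all type checking: a caller that keeps passing a bare u32 offset is
undefined behavior rather than a compile error. A tiny illustration of
the rule (hypothetical sketch, kernel types assumed, not driver code):

	#include <stdarg.h>

	struct xe_reg { u32 reg; };

	static u32 first_reg_offset(int numregs, ...)
	{
		va_list ap;
		u32 offset;

		va_start(ap, numregs);
		/* structs come out of va_arg() just like scalars */
		offset = va_arg(ap, struct xe_reg).reg;
		va_end(ap);

		return offset;
	}

So dropping the .reg suffixes at the xe_gt_topology_init() call sites
below is required for correctness, not just cosmetics.
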
> @@ -36,7 +36,7 @@ static void
>  load_eu_mask(struct xe_gt *gt, xe_eu_mask_t mask)
>  {
>  	struct xe_device *xe = gt_to_xe(gt);
> -	u32 reg = xe_mmio_read32(gt, XELP_EU_ENABLE.reg);
> +	u32 reg_val = xe_mmio_read32(gt, XELP_EU_ENABLE);
>  	u32 val = 0;
>  	int i;
>  
> @@ -47,15 +47,15 @@ load_eu_mask(struct xe_gt *gt, xe_eu_mask_t mask)
>  	 * of enable).
>  	 */
>  	if (GRAPHICS_VERx100(xe) < 1250)
> -		reg = ~reg & XELP_EU_MASK;
> +		reg_val = ~reg_val & XELP_EU_MASK;
>  
>  	/* On PVC, one bit = one EU */
>  	if (GRAPHICS_VERx100(xe) == 1260) {
> -		val = reg;
> +		val = reg_val;
>  	} else {
>  		/* All other platforms, one bit = 2 EU */
> -		for (i = 0; i < fls(reg); i++)
> -			if (reg & BIT(i))
> +		for (i = 0; i < fls(reg_val); i++)
> +			if (reg_val & BIT(i))
>  				val |= 0x3 << 2 * i;
>  	}
>  
> @@ -95,10 +95,10 @@ xe_gt_topology_init(struct xe_gt *gt)
>  
>  	load_dss_mask(gt, gt->fuse_topo.g_dss_mask,
>  		      num_geometry_regs,
> -		      XELP_GT_GEOMETRY_DSS_ENABLE.reg);
> +		      XELP_GT_GEOMETRY_DSS_ENABLE);
>  	load_dss_mask(gt, gt->fuse_topo.c_dss_mask, num_compute_regs,
> -		      XEHP_GT_COMPUTE_DSS_ENABLE.reg,
> -		      XEHPC_GT_COMPUTE_DSS_ENABLE_EXT.reg);
> +		      XEHP_GT_COMPUTE_DSS_ENABLE,
> +		      XEHPC_GT_COMPUTE_DSS_ENABLE_EXT);
>  	load_eu_mask(gt, gt->fuse_topo.eu_mask_per_dss);
>  
>  	xe_gt_topology_dump(gt, &p);
> diff --git a/drivers/gpu/drm/xe/xe_guc.c b/drivers/gpu/drm/xe/xe_guc.c
> index 62b4fcf84acf..e8a126ad400f 100644
> --- a/drivers/gpu/drm/xe/xe_guc.c
> +++ b/drivers/gpu/drm/xe/xe_guc.c
> @@ -232,10 +232,10 @@ static void guc_write_params(struct xe_guc *guc)
>  
>  	xe_force_wake_assert_held(gt_to_fw(gt), XE_FW_GT);
>  
> -	xe_mmio_write32(gt, SOFT_SCRATCH(0).reg, 0);
> +	xe_mmio_write32(gt, SOFT_SCRATCH(0), 0);
>  
>  	for (i = 0; i < GUC_CTL_MAX_DWORDS; i++)
> -		xe_mmio_write32(gt, SOFT_SCRATCH(1 + i).reg, guc->params[i]);
> +		xe_mmio_write32(gt, SOFT_SCRATCH(1 + i), guc->params[i]);
>  }
>  
>  int xe_guc_init(struct xe_guc *guc)
> @@ -268,9 +268,9 @@ int xe_guc_init(struct xe_guc *guc)
>  	guc_init_params(guc);
>  
>  	if (xe_gt_is_media_type(gt))
> -		guc->notify_reg = MEDIA_GUC_HOST_INTERRUPT.reg;
> +		guc->notify_reg = MEDIA_GUC_HOST_INTERRUPT;
>  	else
> -		guc->notify_reg = GUC_HOST_INTERRUPT.reg;
> +		guc->notify_reg = GUC_HOST_INTERRUPT;
>  
>  	xe_uc_fw_change_status(&guc->fw, XE_UC_FIRMWARE_LOADABLE);
>  
> @@ -309,9 +309,9 @@ int xe_guc_reset(struct xe_guc *guc)
>  
>  	xe_force_wake_assert_held(gt_to_fw(gt), XE_FW_GT);
>  
> -	xe_mmio_write32(gt, GDRST.reg, GRDOM_GUC);
> +	xe_mmio_write32(gt, GDRST, GRDOM_GUC);
>  
> -	ret = xe_mmio_wait32(gt, GDRST.reg, 0, GRDOM_GUC, 5000,
> +	ret = xe_mmio_wait32(gt, GDRST, 0, GRDOM_GUC, 5000,
>  			     &gdrst, false);
>  	if (ret) {
>  		drm_err(&xe->drm, "GuC reset timed out, GEN6_GDRST=0x%8x\n",
> @@ -319,7 +319,7 @@ int xe_guc_reset(struct xe_guc *guc)
>  		goto err_out;
>  	}
>  
> -	guc_status = xe_mmio_read32(gt, GUC_STATUS.reg);
> +	guc_status = xe_mmio_read32(gt, GUC_STATUS);
>  	if (!(guc_status & GS_MIA_IN_RESET)) {
>  		drm_err(&xe->drm,
>  			"GuC status: 0x%x, MIA core expected to be in reset\n",
> @@ -352,9 +352,9 @@ static void guc_prepare_xfer(struct xe_guc *guc)
>  		shim_flags |= PVC_GUC_MOCS_INDEX(PVC_GUC_MOCS_UC_INDEX);
>  
>  	/* Must program this register before loading the ucode with DMA */
> -	xe_mmio_write32(gt, GUC_SHIM_CONTROL.reg, shim_flags);
> +	xe_mmio_write32(gt, GUC_SHIM_CONTROL, shim_flags);
>  
> -	xe_mmio_write32(gt, GT_PM_CONFIG.reg, GT_DOORBELL_ENABLE);
> +	xe_mmio_write32(gt, GT_PM_CONFIG, GT_DOORBELL_ENABLE);
>  }
>  
>  /*
> @@ -370,7 +370,7 @@ static int guc_xfer_rsa(struct xe_guc *guc)
>  	if (guc->fw.rsa_size > 256) {
>  		u32 rsa_ggtt_addr = xe_bo_ggtt_addr(guc->fw.bo) +
>  				    xe_uc_fw_rsa_offset(&guc->fw);
> -		xe_mmio_write32(gt, UOS_RSA_SCRATCH(0).reg, rsa_ggtt_addr);
> +		xe_mmio_write32(gt, UOS_RSA_SCRATCH(0), rsa_ggtt_addr);
>  		return 0;
>  	}
>  
> @@ -379,7 +379,7 @@ static int guc_xfer_rsa(struct xe_guc *guc)
>  		return -ENOMEM;
>  
>  	for (i = 0; i < UOS_RSA_SCRATCH_COUNT; i++)
> -		xe_mmio_write32(gt, UOS_RSA_SCRATCH(i).reg, rsa[i]);
> +		xe_mmio_write32(gt, UOS_RSA_SCRATCH(i), rsa[i]);
>  
>  	return 0;
>  }
> @@ -407,7 +407,7 @@ static int guc_wait_ucode(struct xe_guc *guc)
>  	 * 200ms. Even at slowest clock, this should be sufficient. And
>  	 * in the working case, a larger timeout makes no difference.
>  	 */
> -	ret = xe_mmio_wait32(guc_to_gt(guc), GUC_STATUS.reg,
> +	ret = xe_mmio_wait32(guc_to_gt(guc), GUC_STATUS,
>  			     FIELD_PREP(GS_UKERNEL_MASK,
>  					XE_GUC_LOAD_STATUS_READY),
>  			     GS_UKERNEL_MASK, 200000, &status, false);
> @@ -435,7 +435,7 @@ static int guc_wait_ucode(struct xe_guc *guc)
>  		    XE_GUC_LOAD_STATUS_EXCEPTION) {
>  			drm_info(drm, "GuC firmware exception. EIP: %#x\n",
>  				 xe_mmio_read32(guc_to_gt(guc),
> -						SOFT_SCRATCH(13).reg));
> +						SOFT_SCRATCH(13)));
>  			ret = -ENXIO;
>  		}
>  
> @@ -532,10 +532,10 @@ static void guc_handle_mmio_msg(struct xe_guc *guc)
>  
>  	xe_force_wake_assert_held(gt_to_fw(gt), XE_FW_GT);
>  
> -	msg = xe_mmio_read32(gt, SOFT_SCRATCH(15).reg);
> +	msg = xe_mmio_read32(gt, SOFT_SCRATCH(15));
>  	msg &= XE_GUC_RECV_MSG_EXCEPTION |
>  		XE_GUC_RECV_MSG_CRASH_DUMP_POSTED;
> -	xe_mmio_write32(gt, SOFT_SCRATCH(15).reg, 0);
> +	xe_mmio_write32(gt, SOFT_SCRATCH(15), 0);
>  
>  	if (msg & XE_GUC_RECV_MSG_CRASH_DUMP_POSTED)
>  		drm_err(&guc_to_xe(guc)->drm,
> @@ -553,12 +553,12 @@ static void guc_enable_irq(struct xe_guc *guc)
>  		REG_FIELD_PREP(ENGINE0_MASK, GUC_INTR_GUC2HOST)  :
>  		REG_FIELD_PREP(ENGINE1_MASK, GUC_INTR_GUC2HOST);
>  
> -	xe_mmio_write32(gt, GUC_SG_INTR_ENABLE.reg,
> +	xe_mmio_write32(gt, GUC_SG_INTR_ENABLE,
>  			REG_FIELD_PREP(ENGINE1_MASK, GUC_INTR_GUC2HOST));
>  	if (xe_gt_is_media_type(gt))
> -		xe_mmio_rmw32(gt, GUC_SG_INTR_MASK.reg, events, 0);
> +		xe_mmio_rmw32(gt, GUC_SG_INTR_MASK, events, 0);
>  	else
> -		xe_mmio_write32(gt, GUC_SG_INTR_MASK.reg, ~events);
> +		xe_mmio_write32(gt, GUC_SG_INTR_MASK, ~events);
>  }
>  
>  int xe_guc_enable_communication(struct xe_guc *guc)
> @@ -567,7 +567,7 @@ int xe_guc_enable_communication(struct xe_guc *guc)
>  
>  	guc_enable_irq(guc);
>  
> -	xe_mmio_rmw32(guc_to_gt(guc), PMINTRMSK.reg,
> +	xe_mmio_rmw32(guc_to_gt(guc), PMINTRMSK,
>  		      ARAT_EXPIRED_INTRMSK, 0);
>  
>  	err = xe_guc_ct_enable(&guc->ct);
> @@ -620,8 +620,8 @@ int xe_guc_mmio_send_recv(struct xe_guc *guc, const u32 *request,
>  	struct xe_device *xe = guc_to_xe(guc);
>  	struct xe_gt *gt = guc_to_gt(guc);
>  	u32 header, reply;
> -	u32 reply_reg = xe_gt_is_media_type(gt) ?
> -		MED_VF_SW_FLAG(0).reg : VF_SW_FLAG(0).reg;
> +	struct xe_reg reply_reg = xe_gt_is_media_type(gt) ?
> +		MED_VF_SW_FLAG(0) : VF_SW_FLAG(0);
>  	const u32 LAST_INDEX = VF_SW_FLAG_COUNT;
>  	int ret;
>  	int i;
> @@ -641,14 +641,14 @@ int xe_guc_mmio_send_recv(struct xe_guc *guc, const u32 *request,
>  	/* Not in critical data-path, just do if else for GT type */
>  	if (xe_gt_is_media_type(gt)) {
>  		for (i = 0; i < len; ++i)
> -			xe_mmio_write32(gt, MED_VF_SW_FLAG(i).reg,
> +			xe_mmio_write32(gt, MED_VF_SW_FLAG(i),
>  					request[i]);
> -		xe_mmio_read32(gt, MED_VF_SW_FLAG(LAST_INDEX).reg);
> +		xe_mmio_read32(gt, MED_VF_SW_FLAG(LAST_INDEX));
>  	} else {
>  		for (i = 0; i < len; ++i)
> -			xe_mmio_write32(gt, VF_SW_FLAG(i).reg,
> +			xe_mmio_write32(gt, VF_SW_FLAG(i),
>  					request[i]);
> -		xe_mmio_read32(gt, VF_SW_FLAG(LAST_INDEX).reg);
> +		xe_mmio_read32(gt, VF_SW_FLAG(LAST_INDEX));
>  	}
>  
>  	xe_guc_notify(guc);
> @@ -712,9 +712,10 @@ int xe_guc_mmio_send_recv(struct xe_guc *guc, const u32 *request,
>  	if (response_buf) {
>  		response_buf[0] = header;
>  
> -		for (i = 1; i < VF_SW_FLAG_COUNT; i++)
> -			response_buf[i] =
> -				xe_mmio_read32(gt, reply_reg + i * sizeof(u32));
> +		for (i = 1; i < VF_SW_FLAG_COUNT; i++) {
> +			reply_reg.reg += sizeof(u32);
> +			response_buf[i] = xe_mmio_read32(gt, reply_reg);
> +		}
>  	}
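
One subtle point in the hunk above: the loop starts at i = 1 and
reply_reg is a local copy, so the offset must advance by exactly one
dword per iteration (reply_reg.reg += sizeof(u32)) to match the old
reply_reg + i * sizeof(u32) arithmetic. Accumulating i * sizeof(u32)
on every pass instead would read flags 1, 3, 6, ... and fill
response_buf with the wrong registers.
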
>  
>  	/* Use data from the GuC response as our return value */
> @@ -836,7 +837,7 @@ void xe_guc_print_info(struct xe_guc *guc, struct drm_printer *p)
>  	if (err)
>  		return;
>  
> -	status = xe_mmio_read32(gt, GUC_STATUS.reg);
> +	status = xe_mmio_read32(gt, GUC_STATUS);
>  
>  	drm_printf(p, "\nGuC status 0x%08x:\n", status);
>  	drm_printf(p, "\tBootrom status = 0x%x\n",
> @@ -851,7 +852,7 @@ void xe_guc_print_info(struct xe_guc *guc, struct drm_printer *p)
>  	drm_puts(p, "\nScratch registers:\n");
>  	for (i = 0; i < SOFT_SCRATCH_COUNT; i++) {
>  		drm_printf(p, "\t%2d: \t0x%x\n",
> -			   i, xe_mmio_read32(gt, SOFT_SCRATCH(i).reg));
> +			   i, xe_mmio_read32(gt, SOFT_SCRATCH(i)));
>  	}
>  
>  	xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
> diff --git a/drivers/gpu/drm/xe/xe_guc_ads.c b/drivers/gpu/drm/xe/xe_guc_ads.c
> index 84c2d7c624c6..683f2df09c49 100644
> --- a/drivers/gpu/drm/xe/xe_guc_ads.c
> +++ b/drivers/gpu/drm/xe/xe_guc_ads.c
> @@ -428,7 +428,6 @@ static void guc_mmio_regset_write_one(struct xe_guc_ads *ads,
>  	struct guc_mmio_reg entry = {
>  		.offset = reg.reg,
>  		.flags = reg.masked ? GUC_REGSET_MASKED : 0,
> -		/* TODO: steering */
>  	};
>  
>  	xe_map_memcpy_to(ads_to_xe(ads), regset_map, n_entry * sizeof(entry),
> @@ -551,7 +550,7 @@ static void guc_doorbell_init(struct xe_guc_ads *ads)
>  
>  	if (GRAPHICS_VER(xe) >= 12 && !IS_DGFX(xe)) {
>  		u32 distdbreg =
> -			xe_mmio_read32(gt, DIST_DBS_POPULATED.reg);
> +			xe_mmio_read32(gt, DIST_DBS_POPULATED);
>  
>  		ads_blob_write(ads,
>  			       system_info.generic_gt_sysinfo[GUC_GENERIC_GT_SYSINFO_DOORBELL_COUNT_PER_SQIDI],
> diff --git a/drivers/gpu/drm/xe/xe_guc_pc.c b/drivers/gpu/drm/xe/xe_guc_pc.c
> index 72d460d5323b..e799faa1c6b8 100644
> --- a/drivers/gpu/drm/xe/xe_guc_pc.c
> +++ b/drivers/gpu/drm/xe/xe_guc_pc.c
> @@ -317,9 +317,9 @@ static void mtl_update_rpe_value(struct xe_guc_pc *pc)
>  	u32 reg;
>  
>  	if (xe_gt_is_media_type(gt))
> -		reg = xe_mmio_read32(gt, MTL_MPE_FREQUENCY.reg);
> +		reg = xe_mmio_read32(gt, MTL_MPE_FREQUENCY);
>  	else
> -		reg = xe_mmio_read32(gt, MTL_GT_RPE_FREQUENCY.reg);
> +		reg = xe_mmio_read32(gt, MTL_GT_RPE_FREQUENCY);
>  
>  	pc->rpe_freq = REG_FIELD_GET(MTL_RPE_MASK, reg) * GT_FREQUENCY_MULTIPLIER;
>  }
> @@ -336,9 +336,9 @@ static void tgl_update_rpe_value(struct xe_guc_pc *pc)
>  	 * PCODE at a different register
>  	 */
>  	if (xe->info.platform == XE_PVC)
> -		reg = xe_mmio_read32(gt, PVC_RP_STATE_CAP.reg);
> +		reg = xe_mmio_read32(gt, PVC_RP_STATE_CAP);
>  	else
> -		reg = xe_mmio_read32(gt, GEN10_FREQ_INFO_REC.reg);
> +		reg = xe_mmio_read32(gt, GEN10_FREQ_INFO_REC);
>  
>  	pc->rpe_freq = REG_FIELD_GET(RPE_MASK, reg) * GT_FREQUENCY_MULTIPLIER;
>  }
> @@ -380,10 +380,10 @@ static ssize_t freq_act_show(struct device *dev,
>  		goto out;
>  
>  	if (xe->info.platform == XE_METEORLAKE) {
> -		freq = xe_mmio_read32(gt, MTL_MIRROR_TARGET_WP1.reg);
> +		freq = xe_mmio_read32(gt, MTL_MIRROR_TARGET_WP1);
>  		freq = REG_FIELD_GET(MTL_CAGF_MASK, freq);
>  	} else {
> -		freq = xe_mmio_read32(gt, GEN12_RPSTAT1.reg);
> +		freq = xe_mmio_read32(gt, GEN12_RPSTAT1);
>  		freq = REG_FIELD_GET(GEN12_CAGF_MASK, freq);
>  	}
>  
> @@ -413,7 +413,7 @@ static ssize_t freq_cur_show(struct device *dev,
>  	if (ret)
>  		goto out;
>  
> -	freq = xe_mmio_read32(gt, RPNSWREQ.reg);
> +	freq = xe_mmio_read32(gt, RPNSWREQ);
>  
>  	freq = REG_FIELD_GET(REQ_RATIO_MASK, freq);
>  	ret = sysfs_emit(buf, "%d\n", decode_freq(freq));
> @@ -588,7 +588,7 @@ static ssize_t rc_status_show(struct device *dev,
>  	u32 reg;
>  
>  	xe_device_mem_access_get(gt_to_xe(gt));
> -	reg = xe_mmio_read32(gt, GT_CORE_STATUS.reg);
> +	reg = xe_mmio_read32(gt, GT_CORE_STATUS);
>  	xe_device_mem_access_put(gt_to_xe(gt));
>  
>  	switch (REG_FIELD_GET(RCN_MASK, reg)) {
> @@ -615,7 +615,7 @@ static ssize_t rc6_residency_show(struct device *dev,
>  	if (ret)
>  		goto out;
>  
> -	reg = xe_mmio_read32(gt, GT_GFX_RC6.reg);
> +	reg = xe_mmio_read32(gt, GT_GFX_RC6);
>  	ret = sysfs_emit(buff, "%u\n", reg);
>  
>  	XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL));
> @@ -646,9 +646,9 @@ static void mtl_init_fused_rp_values(struct xe_guc_pc *pc)
>  	xe_device_assert_mem_access(pc_to_xe(pc));
>  
>  	if (xe_gt_is_media_type(gt))
> -		reg = xe_mmio_read32(gt, MTL_MEDIAP_STATE_CAP.reg);
> +		reg = xe_mmio_read32(gt, MTL_MEDIAP_STATE_CAP);
>  	else
> -		reg = xe_mmio_read32(gt, MTL_RP_STATE_CAP.reg);
> +		reg = xe_mmio_read32(gt, MTL_RP_STATE_CAP);
>  	pc->rp0_freq = REG_FIELD_GET(MTL_RP0_CAP_MASK, reg) *
>  		GT_FREQUENCY_MULTIPLIER;
>  	pc->rpn_freq = REG_FIELD_GET(MTL_RPN_CAP_MASK, reg) *
> @@ -664,9 +664,9 @@ static void tgl_init_fused_rp_values(struct xe_guc_pc *pc)
>  	xe_device_assert_mem_access(pc_to_xe(pc));
>  
>  	if (xe->info.platform == XE_PVC)
> -		reg = xe_mmio_read32(gt, PVC_RP_STATE_CAP.reg);
> +		reg = xe_mmio_read32(gt, PVC_RP_STATE_CAP);
>  	else
> -		reg = xe_mmio_read32(gt, GEN6_RP_STATE_CAP.reg);
> +		reg = xe_mmio_read32(gt, GEN6_RP_STATE_CAP);
>  	pc->rp0_freq = REG_FIELD_GET(RP0_MASK, reg) * GT_FREQUENCY_MULTIPLIER;
>  	pc->rpn_freq = REG_FIELD_GET(RPN_MASK, reg) * GT_FREQUENCY_MULTIPLIER;
>  }
> @@ -745,9 +745,9 @@ static int pc_gucrc_disable(struct xe_guc_pc *pc)
>  	if (ret)
>  		return ret;
>  
> -	xe_mmio_write32(gt, PG_ENABLE.reg, 0);
> -	xe_mmio_write32(gt, RC_CONTROL.reg, 0);
> -	xe_mmio_write32(gt, RC_STATE.reg, 0);
> +	xe_mmio_write32(gt, PG_ENABLE, 0);
> +	xe_mmio_write32(gt, RC_CONTROL, 0);
> +	xe_mmio_write32(gt, RC_STATE, 0);
>  
>  	XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL));
>  	return 0;
> diff --git a/drivers/gpu/drm/xe/xe_guc_types.h b/drivers/gpu/drm/xe/xe_guc_types.h
> index ac7eec28934d..a304dce4e9f4 100644
> --- a/drivers/gpu/drm/xe/xe_guc_types.h
> +++ b/drivers/gpu/drm/xe/xe_guc_types.h
> @@ -9,6 +9,7 @@
>  #include <linux/idr.h>
>  #include <linux/xarray.h>
>  
> +#include "regs/xe_reg_defs.h"
>  #include "xe_guc_ads_types.h"
>  #include "xe_guc_ct_types.h"
>  #include "xe_guc_fwif.h"
> @@ -74,7 +75,7 @@ struct xe_guc {
>  	/**
>  	 * @notify_reg: Register which is written to notify GuC of H2G messages
>  	 */
> -	u32 notify_reg;
> +	struct xe_reg notify_reg;
>  	/** @params: Control params for fw initialization */
>  	u32 params[GUC_CTL_MAX_DWORDS];
>  };
> diff --git a/drivers/gpu/drm/xe/xe_huc.c b/drivers/gpu/drm/xe/xe_huc.c
> index 55dcaab34ea4..e0377083d1f2 100644
> --- a/drivers/gpu/drm/xe/xe_huc.c
> +++ b/drivers/gpu/drm/xe/xe_huc.c
> @@ -84,7 +84,7 @@ int xe_huc_auth(struct xe_huc *huc)
>  		goto fail;
>  	}
>  
> -	ret = xe_mmio_wait32(gt, HUC_KERNEL_LOAD_INFO.reg,
> +	ret = xe_mmio_wait32(gt, HUC_KERNEL_LOAD_INFO,
>  			     HUC_LOAD_SUCCESSFUL,
>  			     HUC_LOAD_SUCCESSFUL, 100000, NULL, false);
>  	if (ret) {
> @@ -126,7 +126,7 @@ void xe_huc_print_info(struct xe_huc *huc, struct drm_printer *p)
>  		return;
>  
>  	drm_printf(p, "\nHuC status: 0x%08x\n",
> -		   xe_mmio_read32(gt, HUC_KERNEL_LOAD_INFO.reg));
> +		   xe_mmio_read32(gt, HUC_KERNEL_LOAD_INFO));
>  
>  	xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
>  }
> diff --git a/drivers/gpu/drm/xe/xe_hw_engine.c b/drivers/gpu/drm/xe/xe_hw_engine.c
> index a9adac0624f6..5e275aff8974 100644
> --- a/drivers/gpu/drm/xe/xe_hw_engine.c
> +++ b/drivers/gpu/drm/xe/xe_hw_engine.c
> @@ -233,20 +233,25 @@ static void hw_engine_fini(struct drm_device *drm, void *arg)
>  	hwe->gt = NULL;
>  }
>  
> -static void hw_engine_mmio_write32(struct xe_hw_engine *hwe, u32 reg, u32 val)
> +static void hw_engine_mmio_write32(struct xe_hw_engine *hwe, struct xe_reg reg,
> +				   u32 val)
>  {
> -	XE_BUG_ON(reg & hwe->mmio_base);
> +	XE_BUG_ON(reg.reg & hwe->mmio_base);
>  	xe_force_wake_assert_held(gt_to_fw(hwe->gt), hwe->domain);
>  
> -	xe_mmio_write32(hwe->gt, reg + hwe->mmio_base, val);
> +	reg.reg += hwe->mmio_base;
> +
> +	xe_mmio_write32(hwe->gt, reg, val);
>  }
>  
> -static u32 hw_engine_mmio_read32(struct xe_hw_engine *hwe, u32 reg)
> +static u32 hw_engine_mmio_read32(struct xe_hw_engine *hwe, struct xe_reg reg)
>  {
> -	XE_BUG_ON(reg & hwe->mmio_base);
> +	XE_BUG_ON(reg.reg & hwe->mmio_base);
>  	xe_force_wake_assert_held(gt_to_fw(hwe->gt), hwe->domain);
>  
> -	return xe_mmio_read32(hwe->gt, reg + hwe->mmio_base);
> +	reg.reg += hwe->mmio_base;
> +
> +	return xe_mmio_read32(hwe->gt, reg);
>  }
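
Worth stating once, since several helpers in this patch lean on it:
struct xe_reg is passed by value, so the reg.reg += hwe->mmio_base
above only touches the callee's copy and the caller's register
definition is never corrupted. A minimal sketch of the idiom
(standalone example, kernel types assumed, not driver code):

	struct xe_reg { u32 reg; };

	static u32 apply_base(struct xe_reg reg, u32 mmio_base)
	{
		/* mutates the local copy only; caller is unaffected */
		reg.reg += mmio_base;

		return reg.reg;
	}

The XE_BUG_ON(reg.reg & hwe->mmio_base) checks also stay meaningful
because they run before the base is folded in.
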
>  
>  void xe_hw_engine_enable_ring(struct xe_hw_engine *hwe)
> @@ -255,17 +260,17 @@ void xe_hw_engine_enable_ring(struct xe_hw_engine *hwe)
>  		xe_hw_engine_mask_per_class(hwe->gt, XE_ENGINE_CLASS_COMPUTE);
>  
>  	if (hwe->class == XE_ENGINE_CLASS_COMPUTE && ccs_mask)
> -		xe_mmio_write32(hwe->gt, RCU_MODE.reg,
> +		xe_mmio_write32(hwe->gt, RCU_MODE,
>  				_MASKED_BIT_ENABLE(RCU_MODE_CCS_ENABLE));
>  
> -	hw_engine_mmio_write32(hwe, RING_HWSTAM(0).reg, ~0x0);
> -	hw_engine_mmio_write32(hwe, RING_HWS_PGA(0).reg,
> +	hw_engine_mmio_write32(hwe, RING_HWSTAM(0), ~0x0);
> +	hw_engine_mmio_write32(hwe, RING_HWS_PGA(0),
>  			       xe_bo_ggtt_addr(hwe->hwsp));
> -	hw_engine_mmio_write32(hwe, RING_MODE(0).reg,
> +	hw_engine_mmio_write32(hwe, RING_MODE(0),
>  			       _MASKED_BIT_ENABLE(GFX_DISABLE_LEGACY_MODE));
> -	hw_engine_mmio_write32(hwe, RING_MI_MODE(0).reg,
> +	hw_engine_mmio_write32(hwe, RING_MI_MODE(0),
>  			       _MASKED_BIT_DISABLE(STOP_RING));
> -	hw_engine_mmio_read32(hwe, RING_MI_MODE(0).reg);
> +	hw_engine_mmio_read32(hwe, RING_MI_MODE(0));
>  }
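
Not part of the mechanical conversion, but since this function is
being touched anyway: the trailing hw_engine_mmio_read32() of
RING_MI_MODE(0) looks like a posting read to flush the masked-bit
writes above, so it could use a /* Posting read */ comment like the
ones in xe_irq.c further down.
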
>  
>  void
> @@ -443,7 +448,7 @@ static void read_media_fuses(struct xe_gt *gt)
>  
>  	xe_force_wake_assert_held(gt_to_fw(gt), XE_FW_GT);
>  
> -	media_fuse = xe_mmio_read32(gt, GT_VEBOX_VDBOX_DISABLE.reg);
> +	media_fuse = xe_mmio_read32(gt, GT_VEBOX_VDBOX_DISABLE);
>  
>  	/*
>  	 * Pre-Xe_HP platforms had register bits representing absent engines,
> @@ -485,7 +490,7 @@ static void read_copy_fuses(struct xe_gt *gt)
>  
>  	xe_force_wake_assert_held(gt_to_fw(gt), XE_FW_GT);
>  
> -	bcs_mask = xe_mmio_read32(gt, MIRROR_FUSE3.reg);
> +	bcs_mask = xe_mmio_read32(gt, MIRROR_FUSE3);
>  	bcs_mask = REG_FIELD_GET(MEML3_EN_MASK, bcs_mask);
>  
>  	/* BCS0 is always present; only BCS1-BCS8 may be fused off */
> @@ -582,63 +587,63 @@ void xe_hw_engine_print_state(struct xe_hw_engine *hwe, struct drm_printer *p)
>  	drm_printf(p, "\tMMIO base: 0x%08x\n", hwe->mmio_base);
>  
>  	drm_printf(p, "\tHWSTAM: 0x%08x\n",
> -		hw_engine_mmio_read32(hwe, RING_HWSTAM(0).reg));
> +		hw_engine_mmio_read32(hwe, RING_HWSTAM(0)));
>  	drm_printf(p, "\tRING_HWS_PGA: 0x%08x\n",
> -		hw_engine_mmio_read32(hwe, RING_HWS_PGA(0).reg));
> +		hw_engine_mmio_read32(hwe, RING_HWS_PGA(0)));
>  
>  	drm_printf(p, "\tRING_EXECLIST_STATUS_LO: 0x%08x\n",
> -		hw_engine_mmio_read32(hwe, RING_EXECLIST_STATUS_LO(0).reg));
> +		hw_engine_mmio_read32(hwe, RING_EXECLIST_STATUS_LO(0)));
>  	drm_printf(p, "\tRING_EXECLIST_STATUS_HI: 0x%08x\n",
> -		hw_engine_mmio_read32(hwe, RING_EXECLIST_STATUS_HI(0).reg));
> +		hw_engine_mmio_read32(hwe, RING_EXECLIST_STATUS_HI(0)));
>  	drm_printf(p, "\tRING_EXECLIST_SQ_CONTENTS_LO: 0x%08x\n",
>  		hw_engine_mmio_read32(hwe,
> -					 RING_EXECLIST_SQ_CONTENTS_LO(0).reg));
> +					 RING_EXECLIST_SQ_CONTENTS_LO(0)));
>  	drm_printf(p, "\tRING_EXECLIST_SQ_CONTENTS_HI: 0x%08x\n",
>  		hw_engine_mmio_read32(hwe,
> -					 RING_EXECLIST_SQ_CONTENTS_HI(0).reg));
> +					 RING_EXECLIST_SQ_CONTENTS_HI(0)));
>  	drm_printf(p, "\tRING_EXECLIST_CONTROL: 0x%08x\n",
> -		hw_engine_mmio_read32(hwe, RING_EXECLIST_CONTROL(0).reg));
> +		hw_engine_mmio_read32(hwe, RING_EXECLIST_CONTROL(0)));
>  
>  	drm_printf(p, "\tRING_START: 0x%08x\n",
> -		hw_engine_mmio_read32(hwe, RING_START(0).reg));
> +		hw_engine_mmio_read32(hwe, RING_START(0)));
>  	drm_printf(p, "\tRING_HEAD:  0x%08x\n",
> -		hw_engine_mmio_read32(hwe, RING_HEAD(0).reg) & HEAD_ADDR);
> +		hw_engine_mmio_read32(hwe, RING_HEAD(0)) & HEAD_ADDR);
>  	drm_printf(p, "\tRING_TAIL:  0x%08x\n",
> -		hw_engine_mmio_read32(hwe, RING_TAIL(0).reg) & TAIL_ADDR);
> +		hw_engine_mmio_read32(hwe, RING_TAIL(0)) & TAIL_ADDR);
>  	drm_printf(p, "\tRING_CTL: 0x%08x\n",
> -		hw_engine_mmio_read32(hwe, RING_CTL(0).reg));
> +		hw_engine_mmio_read32(hwe, RING_CTL(0)));
>  	drm_printf(p, "\tRING_MODE: 0x%08x\n",
> -		hw_engine_mmio_read32(hwe, RING_MI_MODE(0).reg));
> +		hw_engine_mmio_read32(hwe, RING_MI_MODE(0)));
>  	drm_printf(p, "\tRING_MODE_GEN7: 0x%08x\n",
> -		hw_engine_mmio_read32(hwe, RING_MODE(0).reg));
> +		hw_engine_mmio_read32(hwe, RING_MODE(0)));
>  
>  	drm_printf(p, "\tRING_IMR:   0x%08x\n",
> -		hw_engine_mmio_read32(hwe, RING_IMR(0).reg));
> +		hw_engine_mmio_read32(hwe, RING_IMR(0)));
>  	drm_printf(p, "\tRING_ESR:   0x%08x\n",
> -		hw_engine_mmio_read32(hwe, RING_ESR(0).reg));
> +		hw_engine_mmio_read32(hwe, RING_ESR(0)));
>  	drm_printf(p, "\tRING_EMR:   0x%08x\n",
> -		hw_engine_mmio_read32(hwe, RING_EMR(0).reg));
> +		hw_engine_mmio_read32(hwe, RING_EMR(0)));
>  	drm_printf(p, "\tRING_EIR:   0x%08x\n",
> -		hw_engine_mmio_read32(hwe, RING_EIR(0).reg));
> +		hw_engine_mmio_read32(hwe, RING_EIR(0)));
>  
>          drm_printf(p, "\tACTHD:  0x%08x_%08x\n",
> -		hw_engine_mmio_read32(hwe, RING_ACTHD_UDW(0).reg),
> -		hw_engine_mmio_read32(hwe, RING_ACTHD(0).reg));
> +		hw_engine_mmio_read32(hwe, RING_ACTHD_UDW(0)),
> +		hw_engine_mmio_read32(hwe, RING_ACTHD(0)));
>          drm_printf(p, "\tBBADDR: 0x%08x_%08x\n",
> -		hw_engine_mmio_read32(hwe, RING_BBADDR_UDW(0).reg),
> -		hw_engine_mmio_read32(hwe, RING_BBADDR(0).reg));
> +		hw_engine_mmio_read32(hwe, RING_BBADDR_UDW(0)),
> +		hw_engine_mmio_read32(hwe, RING_BBADDR(0)));
>          drm_printf(p, "\tDMA_FADDR: 0x%08x_%08x\n",
> -		hw_engine_mmio_read32(hwe, RING_DMA_FADD_UDW(0).reg),
> -		hw_engine_mmio_read32(hwe, RING_DMA_FADD(0).reg));
> +		hw_engine_mmio_read32(hwe, RING_DMA_FADD_UDW(0)),
> +		hw_engine_mmio_read32(hwe, RING_DMA_FADD(0)));
>  
>  	drm_printf(p, "\tIPEIR: 0x%08x\n",
> -		hw_engine_mmio_read32(hwe, IPEIR(0).reg));
> +		hw_engine_mmio_read32(hwe, IPEIR(0)));
>  	drm_printf(p, "\tIPEHR: 0x%08x\n\n",
> -		hw_engine_mmio_read32(hwe, IPEHR(0).reg));
> +		hw_engine_mmio_read32(hwe, IPEHR(0)));
>  
>  	if (hwe->class == XE_ENGINE_CLASS_COMPUTE)
>  		drm_printf(p, "\tRCU_MODE: 0x%08x\n",
> -			xe_mmio_read32(hwe->gt, RCU_MODE.reg));
> +			xe_mmio_read32(hwe->gt, RCU_MODE));
>  
>  }
>  
> diff --git a/drivers/gpu/drm/xe/xe_irq.c b/drivers/gpu/drm/xe/xe_irq.c
> index ac72c1a38e5c..7aa245792927 100644
> --- a/drivers/gpu/drm/xe/xe_irq.c
> +++ b/drivers/gpu/drm/xe/xe_irq.c
> @@ -29,7 +29,7 @@
>  
>  static void assert_iir_is_zero(struct xe_gt *gt, struct xe_reg reg)
>  {
> -	u32 val = xe_mmio_read32(gt, reg.reg);
> +	u32 val = xe_mmio_read32(gt, reg);
>  
>  	if (val == 0)
>  		return;
> @@ -37,10 +37,10 @@ static void assert_iir_is_zero(struct xe_gt *gt, struct xe_reg reg)
>  	drm_WARN(&gt_to_xe(gt)->drm, 1,
>  		 "Interrupt register 0x%x is not zero: 0x%08x\n",
>  		 reg.reg, val);
> -	xe_mmio_write32(gt, reg.reg, 0xffffffff);
> -	xe_mmio_read32(gt, reg.reg);
> -	xe_mmio_write32(gt, reg.reg, 0xffffffff);
> -	xe_mmio_read32(gt, reg.reg);
> +	xe_mmio_write32(gt, reg, 0xffffffff);
> +	xe_mmio_read32(gt, reg);
> +	xe_mmio_write32(gt, reg, 0xffffffff);
> +	xe_mmio_read32(gt, reg);
>  }
>  
>  /*
> @@ -55,32 +55,32 @@ static void unmask_and_enable(struct xe_gt *gt, u32 irqregs, u32 bits)
>  	 */
>  	assert_iir_is_zero(gt, IIR(irqregs));
>  
> -	xe_mmio_write32(gt, IER(irqregs).reg, bits);
> -	xe_mmio_write32(gt, IMR(irqregs).reg, ~bits);
> +	xe_mmio_write32(gt, IER(irqregs), bits);
> +	xe_mmio_write32(gt, IMR(irqregs), ~bits);
>  
>  	/* Posting read */
> -	xe_mmio_read32(gt, IMR(irqregs).reg);
> +	xe_mmio_read32(gt, IMR(irqregs));
>  }
>  
>  /* Mask and disable all interrupts. */
>  static void mask_and_disable(struct xe_gt *gt, u32 irqregs)
>  {
> -	xe_mmio_write32(gt, IMR(irqregs).reg, ~0);
> +	xe_mmio_write32(gt, IMR(irqregs), ~0);
>  	/* Posting read */
> -	xe_mmio_read32(gt, IMR(irqregs).reg);
> +	xe_mmio_read32(gt, IMR(irqregs));
>  
> -	xe_mmio_write32(gt, IER(irqregs).reg, 0);
> +	xe_mmio_write32(gt, IER(irqregs), 0);
>  
>  	/* IIR can theoretically queue up two events. Be paranoid. */
> -	xe_mmio_write32(gt, IIR(irqregs).reg, ~0);
> -	xe_mmio_read32(gt, IIR(irqregs).reg);
> -	xe_mmio_write32(gt, IIR(irqregs).reg, ~0);
> -	xe_mmio_read32(gt, IIR(irqregs).reg);
> +	xe_mmio_write32(gt, IIR(irqregs), ~0);
> +	xe_mmio_read32(gt, IIR(irqregs));
> +	xe_mmio_write32(gt, IIR(irqregs), ~0);
> +	xe_mmio_read32(gt, IIR(irqregs));
>  }
>  
>  static u32 xelp_intr_disable(struct xe_gt *gt)
>  {
> -	xe_mmio_write32(gt, GFX_MSTR_IRQ.reg, 0);
> +	xe_mmio_write32(gt, GFX_MSTR_IRQ, 0);
>  
>  	/*
>  	 * Now with master disabled, get a sample of level indications
> @@ -88,7 +88,7 @@ static u32 xelp_intr_disable(struct xe_gt *gt)
>  	 * New indications can and will light up during processing,
>  	 * and will generate new interrupt after enabling master.
>  	 */
> -	return xe_mmio_read32(gt, GFX_MSTR_IRQ.reg);
> +	return xe_mmio_read32(gt, GFX_MSTR_IRQ);
>  }
>  
>  static u32
> @@ -99,18 +99,18 @@ gu_misc_irq_ack(struct xe_gt *gt, const u32 master_ctl)
>  	if (!(master_ctl & GU_MISC_IRQ))
>  		return 0;
>  
> -	iir = xe_mmio_read32(gt, IIR(GU_MISC_IRQ_OFFSET).reg);
> +	iir = xe_mmio_read32(gt, IIR(GU_MISC_IRQ_OFFSET));
>  	if (likely(iir))
> -		xe_mmio_write32(gt, IIR(GU_MISC_IRQ_OFFSET).reg, iir);
> +		xe_mmio_write32(gt, IIR(GU_MISC_IRQ_OFFSET), iir);
>  
>  	return iir;
>  }
>  
>  static inline void xelp_intr_enable(struct xe_gt *gt, bool stall)
>  {
> -	xe_mmio_write32(gt, GFX_MSTR_IRQ.reg, MASTER_IRQ);
> +	xe_mmio_write32(gt, GFX_MSTR_IRQ, MASTER_IRQ);
>  	if (stall)
> -		xe_mmio_read32(gt, GFX_MSTR_IRQ.reg);
> +		xe_mmio_read32(gt, GFX_MSTR_IRQ);
>  }
>  
>  static void gt_irq_postinstall(struct xe_device *xe, struct xe_gt *gt)
> @@ -133,41 +133,41 @@ static void gt_irq_postinstall(struct xe_device *xe, struct xe_gt *gt)
>  	smask = irqs << 16;
>  
>  	/* Enable RCS, BCS, VCS and VECS class interrupts. */
> -	xe_mmio_write32(gt, RENDER_COPY_INTR_ENABLE.reg, dmask);
> -	xe_mmio_write32(gt, VCS_VECS_INTR_ENABLE.reg, dmask);
> +	xe_mmio_write32(gt, RENDER_COPY_INTR_ENABLE, dmask);
> +	xe_mmio_write32(gt, VCS_VECS_INTR_ENABLE, dmask);
>  	if (ccs_mask)
> -		xe_mmio_write32(gt, CCS_RSVD_INTR_ENABLE.reg, smask);
> +		xe_mmio_write32(gt, CCS_RSVD_INTR_ENABLE, smask);
>  
>  	/* Unmask irqs on RCS, BCS, VCS and VECS engines. */
> -	xe_mmio_write32(gt, RCS0_RSVD_INTR_MASK.reg, ~smask);
> -	xe_mmio_write32(gt, BCS_RSVD_INTR_MASK.reg, ~smask);
> +	xe_mmio_write32(gt, RCS0_RSVD_INTR_MASK, ~smask);
> +	xe_mmio_write32(gt, BCS_RSVD_INTR_MASK, ~smask);
>  	if (bcs_mask & (BIT(1)|BIT(2)))
> -		xe_mmio_write32(gt, XEHPC_BCS1_BCS2_INTR_MASK.reg, ~dmask);
> +		xe_mmio_write32(gt, XEHPC_BCS1_BCS2_INTR_MASK, ~dmask);
>  	if (bcs_mask & (BIT(3)|BIT(4)))
> -		xe_mmio_write32(gt, XEHPC_BCS3_BCS4_INTR_MASK.reg, ~dmask);
> +		xe_mmio_write32(gt, XEHPC_BCS3_BCS4_INTR_MASK, ~dmask);
>  	if (bcs_mask & (BIT(5)|BIT(6)))
> -		xe_mmio_write32(gt, XEHPC_BCS5_BCS6_INTR_MASK.reg, ~dmask);
> +		xe_mmio_write32(gt, XEHPC_BCS5_BCS6_INTR_MASK, ~dmask);
>  	if (bcs_mask & (BIT(7)|BIT(8)))
> -		xe_mmio_write32(gt, XEHPC_BCS7_BCS8_INTR_MASK.reg, ~dmask);
> -	xe_mmio_write32(gt, VCS0_VCS1_INTR_MASK.reg, ~dmask);
> -	xe_mmio_write32(gt, VCS2_VCS3_INTR_MASK.reg, ~dmask);
> -	xe_mmio_write32(gt, VECS0_VECS1_INTR_MASK.reg, ~dmask);
> +		xe_mmio_write32(gt, XEHPC_BCS7_BCS8_INTR_MASK, ~dmask);
> +	xe_mmio_write32(gt, VCS0_VCS1_INTR_MASK, ~dmask);
> +	xe_mmio_write32(gt, VCS2_VCS3_INTR_MASK, ~dmask);
> +	xe_mmio_write32(gt, VECS0_VECS1_INTR_MASK, ~dmask);
>  	if (ccs_mask & (BIT(0)|BIT(1)))
> -		xe_mmio_write32(gt, CCS0_CCS1_INTR_MASK.reg, ~dmask);
> +		xe_mmio_write32(gt, CCS0_CCS1_INTR_MASK, ~dmask);
>  	if (ccs_mask & (BIT(2)|BIT(3)))
> -		xe_mmio_write32(gt,  CCS2_CCS3_INTR_MASK.reg, ~dmask);
> +		xe_mmio_write32(gt,  CCS2_CCS3_INTR_MASK, ~dmask);
>  
>  	/*
>  	 * RPS interrupts will get enabled/disabled on demand when RPS itself
>  	 * is enabled/disabled.
>  	 */
>  	/* TODO: gt->pm_ier, gt->pm_imr */
> -	xe_mmio_write32(gt, GPM_WGBOXPERF_INTR_ENABLE.reg, 0);
> -	xe_mmio_write32(gt, GPM_WGBOXPERF_INTR_MASK.reg,  ~0);
> +	xe_mmio_write32(gt, GPM_WGBOXPERF_INTR_ENABLE, 0);
> +	xe_mmio_write32(gt, GPM_WGBOXPERF_INTR_MASK,  ~0);
>  
>  	/* Same thing for GuC interrupts */
> -	xe_mmio_write32(gt, GUC_SG_INTR_ENABLE.reg, 0);
> -	xe_mmio_write32(gt, GUC_SG_INTR_MASK.reg,  ~0);
> +	xe_mmio_write32(gt, GUC_SG_INTR_ENABLE, 0);
> +	xe_mmio_write32(gt, GUC_SG_INTR_MASK,  ~0);
>  }
>  
>  static void xelp_irq_postinstall(struct xe_device *xe, struct xe_gt *gt)
> @@ -192,7 +192,7 @@ gt_engine_identity(struct xe_device *xe,
>  
>  	lockdep_assert_held(&xe->irq.lock);
>  
> -	xe_mmio_write32(gt, IIR_REG_SELECTOR(bank).reg, BIT(bit));
> +	xe_mmio_write32(gt, IIR_REG_SELECTOR(bank), BIT(bit));
>  
>  	/*
>  	 * NB: Specs do not specify how long to spin wait,
> @@ -200,7 +200,7 @@ gt_engine_identity(struct xe_device *xe,
>  	 */
>  	timeout_ts = (local_clock() >> 10) + 100;
>  	do {
> -		ident = xe_mmio_read32(gt, INTR_IDENTITY_REG(bank).reg);
> +		ident = xe_mmio_read32(gt, INTR_IDENTITY_REG(bank));
>  	} while (!(ident & INTR_DATA_VALID) &&
>  		 !time_after32(local_clock() >> 10, timeout_ts));
>  
> @@ -210,7 +210,7 @@ gt_engine_identity(struct xe_device *xe,
>  		return 0;
>  	}
>  
> -	xe_mmio_write32(gt, INTR_IDENTITY_REG(bank).reg, INTR_DATA_VALID);
> +	xe_mmio_write32(gt, INTR_IDENTITY_REG(bank), INTR_DATA_VALID);
>  
>  	return ident;
>  }
> @@ -249,11 +249,11 @@ static void gt_irq_handler(struct xe_device *xe, struct xe_gt *gt,
>  
>  		if (!xe_gt_is_media_type(gt)) {
>  			intr_dw[bank] =
> -				xe_mmio_read32(gt, GT_INTR_DW(bank).reg);
> +				xe_mmio_read32(gt, GT_INTR_DW(bank));
>  			for_each_set_bit(bit, intr_dw + bank, 32)
>  				identity[bit] = gt_engine_identity(xe, gt,
>  								   bank, bit);
> -			xe_mmio_write32(gt, GT_INTR_DW(bank).reg,
> +			xe_mmio_write32(gt, GT_INTR_DW(bank),
>  					intr_dw[bank]);
>  		}
>  
> @@ -315,14 +315,14 @@ static u32 dg1_intr_disable(struct xe_device *xe)
>  	u32 val;
>  
>  	/* First disable interrupts */
> -	xe_mmio_write32(gt, DG1_MSTR_TILE_INTR.reg, 0);
> +	xe_mmio_write32(gt, DG1_MSTR_TILE_INTR, 0);
>  
>  	/* Get the indication levels and ack the master unit */
> -	val = xe_mmio_read32(gt, DG1_MSTR_TILE_INTR.reg);
> +	val = xe_mmio_read32(gt, DG1_MSTR_TILE_INTR);
>  	if (unlikely(!val))
>  		return 0;
>  
> -	xe_mmio_write32(gt, DG1_MSTR_TILE_INTR.reg, val);
> +	xe_mmio_write32(gt, DG1_MSTR_TILE_INTR, val);
>  
>  	return val;
>  }
> @@ -331,9 +331,9 @@ static void dg1_intr_enable(struct xe_device *xe, bool stall)
>  {
>  	struct xe_gt *gt = xe_device_get_gt(xe, 0);
>  
> -	xe_mmio_write32(gt, DG1_MSTR_TILE_INTR.reg, DG1_MSTR_IRQ);
> +	xe_mmio_write32(gt, DG1_MSTR_TILE_INTR, DG1_MSTR_IRQ);
>  	if (stall)
> -		xe_mmio_read32(gt, DG1_MSTR_TILE_INTR.reg);
> +		xe_mmio_read32(gt, DG1_MSTR_TILE_INTR);
>  }
>  
>  static void dg1_irq_postinstall(struct xe_device *xe, struct xe_gt *gt)
> @@ -373,7 +373,7 @@ static irqreturn_t dg1_irq_handler(int irq, void *arg)
>  			continue;
>  
>  		if (!xe_gt_is_media_type(gt))
> -			master_ctl = xe_mmio_read32(gt, GFX_MSTR_IRQ.reg);
> +			master_ctl = xe_mmio_read32(gt, GFX_MSTR_IRQ);
>  
>  		/*
>  		 * We might be in irq handler just when PCIe DPC is initiated
> @@ -387,7 +387,7 @@ static irqreturn_t dg1_irq_handler(int irq, void *arg)
>  		}
>  
>  		if (!xe_gt_is_media_type(gt))
> -			xe_mmio_write32(gt, GFX_MSTR_IRQ.reg, master_ctl);
> +			xe_mmio_write32(gt, GFX_MSTR_IRQ, master_ctl);
>  		gt_irq_handler(xe, gt, master_ctl, intr_dw, identity);
>  
>  		/*
> @@ -416,34 +416,34 @@ static void gt_irq_reset(struct xe_gt *gt)
>  	u32 bcs_mask = xe_hw_engine_mask_per_class(gt, XE_ENGINE_CLASS_COPY);
>  
>  	/* Disable RCS, BCS, VCS and VECS class engines. */
> -	xe_mmio_write32(gt, RENDER_COPY_INTR_ENABLE.reg,	 0);
> -	xe_mmio_write32(gt, VCS_VECS_INTR_ENABLE.reg,	 0);
> +	xe_mmio_write32(gt, RENDER_COPY_INTR_ENABLE,	 0);
> +	xe_mmio_write32(gt, VCS_VECS_INTR_ENABLE,	 0);
>  	if (ccs_mask)
> -		xe_mmio_write32(gt, CCS_RSVD_INTR_ENABLE.reg, 0);
> +		xe_mmio_write32(gt, CCS_RSVD_INTR_ENABLE, 0);
>  
>  	/* Restore masks irqs on RCS, BCS, VCS and VECS engines. */
> -	xe_mmio_write32(gt, RCS0_RSVD_INTR_MASK.reg,	~0);
> -	xe_mmio_write32(gt, BCS_RSVD_INTR_MASK.reg,	~0);
> +	xe_mmio_write32(gt, RCS0_RSVD_INTR_MASK,	~0);
> +	xe_mmio_write32(gt, BCS_RSVD_INTR_MASK,	~0);
>  	if (bcs_mask & (BIT(1)|BIT(2)))
> -		xe_mmio_write32(gt, XEHPC_BCS1_BCS2_INTR_MASK.reg, ~0);
> +		xe_mmio_write32(gt, XEHPC_BCS1_BCS2_INTR_MASK, ~0);
>  	if (bcs_mask & (BIT(3)|BIT(4)))
> -		xe_mmio_write32(gt, XEHPC_BCS3_BCS4_INTR_MASK.reg, ~0);
> +		xe_mmio_write32(gt, XEHPC_BCS3_BCS4_INTR_MASK, ~0);
>  	if (bcs_mask & (BIT(5)|BIT(6)))
> -		xe_mmio_write32(gt, XEHPC_BCS5_BCS6_INTR_MASK.reg, ~0);
> +		xe_mmio_write32(gt, XEHPC_BCS5_BCS6_INTR_MASK, ~0);
>  	if (bcs_mask & (BIT(7)|BIT(8)))
> -		xe_mmio_write32(gt, XEHPC_BCS7_BCS8_INTR_MASK.reg, ~0);
> -	xe_mmio_write32(gt, VCS0_VCS1_INTR_MASK.reg,	~0);
> -	xe_mmio_write32(gt, VCS2_VCS3_INTR_MASK.reg,	~0);
> -	xe_mmio_write32(gt, VECS0_VECS1_INTR_MASK.reg,	~0);
> +		xe_mmio_write32(gt, XEHPC_BCS7_BCS8_INTR_MASK, ~0);
> +	xe_mmio_write32(gt, VCS0_VCS1_INTR_MASK,	~0);
> +	xe_mmio_write32(gt, VCS2_VCS3_INTR_MASK,	~0);
> +	xe_mmio_write32(gt, VECS0_VECS1_INTR_MASK,	~0);
>  	if (ccs_mask & (BIT(0)|BIT(1)))
> -		xe_mmio_write32(gt, CCS0_CCS1_INTR_MASK.reg, ~0);
> +		xe_mmio_write32(gt, CCS0_CCS1_INTR_MASK, ~0);
>  	if (ccs_mask & (BIT(2)|BIT(3)))
> -		xe_mmio_write32(gt,  CCS2_CCS3_INTR_MASK.reg, ~0);
> +		xe_mmio_write32(gt,  CCS2_CCS3_INTR_MASK, ~0);
>  
> -	xe_mmio_write32(gt, GPM_WGBOXPERF_INTR_ENABLE.reg, 0);
> -	xe_mmio_write32(gt, GPM_WGBOXPERF_INTR_MASK.reg,  ~0);
> -	xe_mmio_write32(gt, GUC_SG_INTR_ENABLE.reg,	 0);
> -	xe_mmio_write32(gt, GUC_SG_INTR_MASK.reg,		~0);
> +	xe_mmio_write32(gt, GPM_WGBOXPERF_INTR_ENABLE, 0);
> +	xe_mmio_write32(gt, GPM_WGBOXPERF_INTR_MASK,  ~0);
> +	xe_mmio_write32(gt, GUC_SG_INTR_ENABLE,	 0);
> +	xe_mmio_write32(gt, GUC_SG_INTR_MASK,		~0);
>  }
>  
>  static void xelp_irq_reset(struct xe_gt *gt)
> diff --git a/drivers/gpu/drm/xe/xe_mmio.c b/drivers/gpu/drm/xe/xe_mmio.c
> index 3b719c774efa..0e91004fa06d 100644
> --- a/drivers/gpu/drm/xe/xe_mmio.c
> +++ b/drivers/gpu/drm/xe/xe_mmio.c
> @@ -153,13 +153,13 @@ int xe_mmio_total_vram_size(struct xe_device *xe, u64 *vram_size, u64 *usable_si
>  	struct xe_gt *gt = xe_device_get_gt(xe, 0);
>  	struct pci_dev *pdev = to_pci_dev(xe->drm.dev);
>  	int err;
> -	u32 reg;
> +	u32 reg_val;
>  
>  	if (!xe->info.has_flat_ccs)  {
>  		*vram_size = pci_resource_len(pdev, GEN12_LMEM_BAR);
>  		if (usable_size)
>  			*usable_size = min(*vram_size,
> -					   xe_mmio_read64(gt, GSMBASE.reg));
> +					   xe_mmio_read64(gt, GSMBASE));
>  		return 0;
>  	}
>  
> @@ -167,11 +167,11 @@ int xe_mmio_total_vram_size(struct xe_device *xe, u64 *vram_size, u64 *usable_si
>  	if (err)
>  		return err;
>  
> -	reg = xe_gt_mcr_unicast_read_any(gt, XEHP_TILE0_ADDR_RANGE);
> -	*vram_size = (u64)REG_FIELD_GET(GENMASK(14, 8), reg) * SZ_1G;
> +	reg_val = xe_gt_mcr_unicast_read_any(gt, XEHP_TILE0_ADDR_RANGE);
> +	*vram_size = (u64)REG_FIELD_GET(GENMASK(14, 8), reg_val) * SZ_1G;
>  	if (usable_size) {
> -		reg = xe_gt_mcr_unicast_read_any(gt, XEHP_FLAT_CCS_BASE_ADDR);
> -		*usable_size = (u64)REG_FIELD_GET(GENMASK(31, 8), reg) * SZ_64K;
> +		reg_val = xe_gt_mcr_unicast_read_any(gt, XEHP_FLAT_CCS_BASE_ADDR);
> +		*usable_size = (u64)REG_FIELD_GET(GENMASK(31, 8), reg_val) * SZ_64K;
>  		drm_info(&xe->drm, "vram_size: 0x%llx usable_size: 0x%llx\n",
>  			 *vram_size, *usable_size);
>  	}
> @@ -298,7 +298,7 @@ static void xe_mmio_probe_tiles(struct xe_device *xe)
>  	if (xe->info.tile_count == 1)
>  		return;
>  
> -	mtcfg = xe_mmio_read64(gt, XEHP_MTCFG_ADDR.reg);
> +	mtcfg = xe_mmio_read64(gt, XEHP_MTCFG_ADDR);
>  	adj_tile_count = xe->info.tile_count =
>  		REG_FIELD_GET(TILE_COUNT, mtcfg) + 1;
>  	if (xe->info.media_verx100 >= 1300)
> @@ -374,7 +374,7 @@ int xe_mmio_init(struct xe_device *xe)
>  	 * keep the GT powered down; we won't be able to communicate with it
>  	 * and we should not continue with driver initialization.
>  	 */
> -	if (IS_DGFX(xe) && !(xe_mmio_read32(gt, GU_CNTL.reg) & LMEM_INIT)) {
> +	if (IS_DGFX(xe) && !(xe_mmio_read32(gt, GU_CNTL) & LMEM_INIT)) {
>  		drm_err(&xe->drm, "VRAM not initialized by firmware\n");
>  		return -ENODEV;
>  	}
> @@ -403,6 +403,7 @@ int xe_mmio_ioctl(struct drm_device *dev, void *data,
>  	struct xe_device *xe = to_xe_device(dev);
>  	struct drm_xe_mmio *args = data;
>  	unsigned int bits_flag, bytes;
> +	struct xe_reg reg;
>  	bool allowed;
>  	int ret = 0;
>  
> @@ -435,6 +436,12 @@ int xe_mmio_ioctl(struct drm_device *dev, void *data,
>  	if (XE_IOCTL_ERR(xe, args->addr + bytes > xe->mmio.size))
>  		return -EINVAL;
>  
> +	/*
> +	 * TODO: migrate to xe_gt_mcr to lookup the mmio range and handle
> +	 * multicast registers. Steering would need uapi extension.
> +	 */
> +	reg = XE_REG(args->addr);
> +
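
Side note: this is a spot where XE_REG() wraps a fully runtime,
user-supplied offset. Assuming XE_REG() stays a plain compound-literal
initializer (as in xe_reg_defs.h today), that is fine; the line above
is effectively

	struct xe_reg reg = (struct xe_reg){ .reg = args->addr };

with no compile-time-constant requirement hiding anywhere.
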
>  	xe_force_wake_get(gt_to_fw(&xe->gt[0]), XE_FORCEWAKE_ALL);
>  
>  	if (args->flags & DRM_XE_MMIO_WRITE) {
> @@ -444,10 +451,10 @@ int xe_mmio_ioctl(struct drm_device *dev, void *data,
>  				ret = -EINVAL;
>  				goto exit;
>  			}
> -			xe_mmio_write32(to_gt(xe), args->addr, args->value);
> +			xe_mmio_write32(to_gt(xe), reg, args->value);
>  			break;
>  		case DRM_XE_MMIO_64BIT:
> -			xe_mmio_write64(to_gt(xe), args->addr, args->value);
> +			xe_mmio_write64(to_gt(xe), reg, args->value);
>  			break;
>  		default:
>  			drm_dbg(&xe->drm, "Invalid MMIO bit size");
> @@ -462,10 +469,10 @@ int xe_mmio_ioctl(struct drm_device *dev, void *data,
>  	if (args->flags & DRM_XE_MMIO_READ) {
>  		switch (bits_flag) {
>  		case DRM_XE_MMIO_32BIT:
> -			args->value = xe_mmio_read32(to_gt(xe), args->addr);
> +			args->value = xe_mmio_read32(to_gt(xe), reg);
>  			break;
>  		case DRM_XE_MMIO_64BIT:
> -			args->value = xe_mmio_read64(to_gt(xe), args->addr);
> +			args->value = xe_mmio_read64(to_gt(xe), reg);
>  			break;
>  		default:
>  			drm_dbg(&xe->drm, "Invalid MMIO bit size");
> diff --git a/drivers/gpu/drm/xe/xe_mmio.h b/drivers/gpu/drm/xe/xe_mmio.h
> index b72a0a75259f..821701f8ada6 100644
> --- a/drivers/gpu/drm/xe/xe_mmio.h
> +++ b/drivers/gpu/drm/xe/xe_mmio.h
> @@ -9,6 +9,7 @@
>  #include <linux/delay.h>
>  #include <linux/io-64-nonatomic-lo-hi.h>
>  
> +#include "regs/xe_reg_defs.h"
>  #include "xe_gt_types.h"
>  
>  struct drm_device;
> @@ -17,32 +18,32 @@ struct xe_device;
>  
>  int xe_mmio_init(struct xe_device *xe);
>  
> -static inline u8 xe_mmio_read8(struct xe_gt *gt, u32 reg)
> +static inline u8 xe_mmio_read8(struct xe_gt *gt, struct xe_reg reg)
>  {
> -	if (reg < gt->mmio.adj_limit)
> -		reg += gt->mmio.adj_offset;
> +	if (reg.reg < gt->mmio.adj_limit)
> +		reg.reg += gt->mmio.adj_offset;
>  
> -	return readb(gt->mmio.regs + reg);
> +	return readb(gt->mmio.regs + reg.reg);
>  }
>  
>  static inline void xe_mmio_write32(struct xe_gt *gt,
> -				   u32 reg, u32 val)
> +				   struct xe_reg reg, u32 val)
>  {
> -	if (reg < gt->mmio.adj_limit)
> -		reg += gt->mmio.adj_offset;
> +	if (reg.reg < gt->mmio.adj_limit)
> +		reg.reg += gt->mmio.adj_offset;
>  
> -	writel(val, gt->mmio.regs + reg);
> +	writel(val, gt->mmio.regs + reg.reg);
>  }
>  
> -static inline u32 xe_mmio_read32(struct xe_gt *gt, u32 reg)
> +static inline u32 xe_mmio_read32(struct xe_gt *gt, struct xe_reg reg)
>  {
> -	if (reg < gt->mmio.adj_limit)
> -		reg += gt->mmio.adj_offset;
> +	if (reg.reg < gt->mmio.adj_limit)
> +		reg.reg += gt->mmio.adj_offset;
>  
> -	return readl(gt->mmio.regs + reg);
> +	return readl(gt->mmio.regs + reg.reg);
>  }
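
For anyone skimming only the header changes: the net effect across the
series is that call sites move from raw offsets to the typed register,
roughly

	u32 val;

	val = xe_mmio_read32(gt, GDRST.reg);	/* old */
	val = xe_mmio_read32(gt, GDRST);	/* new */

so mixing up an offset and a struct xe_reg becomes a compile error
instead of a silently wrong MMIO access.
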
>  
> -static inline u32 xe_mmio_rmw32(struct xe_gt *gt, u32 reg, u32 clr,
> +static inline u32 xe_mmio_rmw32(struct xe_gt *gt, struct xe_reg reg, u32 clr,
>  				 u32 set)
>  {
>  	u32 old, reg_val;
> @@ -55,24 +56,24 @@ static inline u32 xe_mmio_rmw32(struct xe_gt *gt, u32 reg, u32 clr,
>  }
>  
>  static inline void xe_mmio_write64(struct xe_gt *gt,
> -				   u32 reg, u64 val)
> +				   struct xe_reg reg, u64 val)
>  {
> -	if (reg < gt->mmio.adj_limit)
> -		reg += gt->mmio.adj_offset;
> +	if (reg.reg < gt->mmio.adj_limit)
> +		reg.reg += gt->mmio.adj_offset;
>  
> -	writeq(val, gt->mmio.regs + reg);
> +	writeq(val, gt->mmio.regs + reg.reg);
>  }
>  
> -static inline u64 xe_mmio_read64(struct xe_gt *gt, u32 reg)
> +static inline u64 xe_mmio_read64(struct xe_gt *gt, struct xe_reg reg)
>  {
> -	if (reg < gt->mmio.adj_limit)
> -		reg += gt->mmio.adj_offset;
> +	if (reg.reg < gt->mmio.adj_limit)
> +		reg.reg += gt->mmio.adj_offset;
>  
> -	return readq(gt->mmio.regs + reg);
> +	return readq(gt->mmio.regs + reg.reg);
>  }
>  
>  static inline int xe_mmio_write32_and_verify(struct xe_gt *gt,
> -					     u32 reg, u32 val,
> +					     struct xe_reg reg, u32 val,
>  					     u32 mask, u32 eval)
>  {
>  	u32 reg_val;
> @@ -83,8 +84,9 @@ static inline int xe_mmio_write32_and_verify(struct xe_gt *gt,
>  	return (reg_val & mask) != eval ? -EINVAL : 0;
>  }
>  
> -static inline int xe_mmio_wait32(struct xe_gt *gt, u32 reg, u32 val, u32 mask,
> -				 u32 timeout_us, u32 *out_val, bool atomic)
> +static inline int xe_mmio_wait32(struct xe_gt *gt, struct xe_reg reg, u32 val,
> +				 u32 mask, u32 timeout_us, u32 *out_val,
> +				 bool atomic)
>  {
>  	ktime_t cur = ktime_get_raw();
>  	const ktime_t end = ktime_add_us(cur, timeout_us);
> @@ -122,9 +124,10 @@ static inline int xe_mmio_wait32(struct xe_gt *gt, u32 reg, u32 val, u32 mask,
>  int xe_mmio_ioctl(struct drm_device *dev, void *data,
>  		  struct drm_file *file);
>  
> -static inline bool xe_mmio_in_range(const struct xe_mmio_range *range, u32 reg)
> +static inline bool xe_mmio_in_range(const struct xe_mmio_range *range,
> +				    struct xe_reg reg)
>  {
> -	return range && reg >= range->start && reg <= range->end;
> +	return range && reg.reg >= range->start && reg.reg <= range->end;
>  }
>  
>  int xe_mmio_probe_vram(struct xe_device *xe);
> diff --git a/drivers/gpu/drm/xe/xe_mocs.c b/drivers/gpu/drm/xe/xe_mocs.c
> index 0d07811a573f..1175dec5d90b 100644
> --- a/drivers/gpu/drm/xe/xe_mocs.c
> +++ b/drivers/gpu/drm/xe/xe_mocs.c
> @@ -477,8 +477,9 @@ static void __init_mocs_table(struct xe_gt *gt,
>  	for (i = 0;
>  	     i < info->n_entries ? (mocs = get_entry_control(info, i)), 1 : 0;
>  	     i++) {
> -		mocs_dbg(&gt->xe->drm, "%d 0x%x 0x%x\n", i, XE_REG(addr + i * 4).reg, mocs);
> -		xe_mmio_write32(gt, XE_REG(addr + i * 4).reg, mocs);
> +		struct xe_reg reg = XE_REG(addr + i * 4);
> +		mocs_dbg(&gt->xe->drm, "%d 0x%x 0x%x\n", i, reg.reg, mocs);
> +		xe_mmio_write32(gt, reg, mocs);
>  	}
>  }
>  
> @@ -514,7 +515,7 @@ static void init_l3cc_table(struct xe_gt *gt,
>  	     i++) {
>  		mocs_dbg(&gt->xe->drm, "%d 0x%x 0x%x\n", i, LNCFCMOCS(i).reg,
>  			 l3cc);
> -		xe_mmio_write32(gt, LNCFCMOCS(i).reg, l3cc);
> +		xe_mmio_write32(gt, LNCFCMOCS(i), l3cc);
>  	}
>  }
>  
> diff --git a/drivers/gpu/drm/xe/xe_pat.c b/drivers/gpu/drm/xe/xe_pat.c
> index abee41fa3cb9..b56a65779d26 100644
> --- a/drivers/gpu/drm/xe/xe_pat.c
> +++ b/drivers/gpu/drm/xe/xe_pat.c
> @@ -64,14 +64,20 @@ static const u32 mtl_pat_table[] = {
>  
>  static void program_pat(struct xe_gt *gt, const u32 table[], int n_entries)
>  {
> -	for (int i = 0; i < n_entries; i++)
> -		xe_mmio_write32(gt, _PAT_INDEX(i), table[i]);
> +	for (int i = 0; i < n_entries; i++) {
> +		struct xe_reg reg = XE_REG(_PAT_INDEX(i));
> +
> +		xe_mmio_write32(gt, reg, table[i]);
> +	}
>  }
>  
>  static void program_pat_mcr(struct xe_gt *gt, const u32 table[], int n_entries)
>  {
> -	for (int i = 0; i < n_entries; i++)
> -		xe_gt_mcr_multicast_write(gt, XE_REG_MCR(_PAT_INDEX(i)), table[i]);
> +	for (int i = 0; i < n_entries; i++) {
> +		struct xe_reg_mcr reg_mcr = XE_REG_MCR(_PAT_INDEX(i));
> +
> +		xe_gt_mcr_multicast_write(gt, reg_mcr, table[i]);
> +	}
>  }
>  
>  void xe_pat_init(struct xe_gt *gt)
> diff --git a/drivers/gpu/drm/xe/xe_pcode.c b/drivers/gpu/drm/xe/xe_pcode.c
> index 99bb730684ed..7ab70a83f88d 100644
> --- a/drivers/gpu/drm/xe/xe_pcode.c
> +++ b/drivers/gpu/drm/xe/xe_pcode.c
> @@ -43,7 +43,7 @@ static int pcode_mailbox_status(struct xe_gt *gt)
>  
>  	lockdep_assert_held(&gt->pcode.lock);
>  
> -	err = xe_mmio_read32(gt, PCODE_MAILBOX.reg) & PCODE_ERROR_MASK;
> +	err = xe_mmio_read32(gt, PCODE_MAILBOX) & PCODE_ERROR_MASK;
>  	if (err) {
>  		drm_err(&gt_to_xe(gt)->drm, "PCODE Mailbox failed: %d %s", err,
>  			err_decode[err].str ?: "Unknown");
> @@ -60,22 +60,22 @@ static int pcode_mailbox_rw(struct xe_gt *gt, u32 mbox, u32 *data0, u32 *data1,
>  	int err;
>  	lockdep_assert_held(&gt->pcode.lock);
>  
> -	if ((xe_mmio_read32(gt, PCODE_MAILBOX.reg) & PCODE_READY) != 0)
> +	if ((xe_mmio_read32(gt, PCODE_MAILBOX) & PCODE_READY) != 0)
>  		return -EAGAIN;
>  
> -	xe_mmio_write32(gt, PCODE_DATA0.reg, *data0);
> -	xe_mmio_write32(gt, PCODE_DATA1.reg, data1 ? *data1 : 0);
> -	xe_mmio_write32(gt, PCODE_MAILBOX.reg, PCODE_READY | mbox);
> +	xe_mmio_write32(gt, PCODE_DATA0, *data0);
> +	xe_mmio_write32(gt, PCODE_DATA1, data1 ? *data1 : 0);
> +	xe_mmio_write32(gt, PCODE_MAILBOX, PCODE_READY | mbox);
>  
> -	err = xe_mmio_wait32(gt, PCODE_MAILBOX.reg, 0, PCODE_READY,
> +	err = xe_mmio_wait32(gt, PCODE_MAILBOX, 0, PCODE_READY,
>  			     timeout_ms * 1000, NULL, atomic);
>  	if (err)
>  		return err;
>  
>  	if (return_data) {
> -		*data0 = xe_mmio_read32(gt, PCODE_DATA0.reg);
> +		*data0 = xe_mmio_read32(gt, PCODE_DATA0);
>  		if (data1)
> -			*data1 = xe_mmio_read32(gt, PCODE_DATA1.reg);
> +			*data1 = xe_mmio_read32(gt, PCODE_DATA1);
>  	}
>  
>  	return pcode_mailbox_status(gt);
> diff --git a/drivers/gpu/drm/xe/xe_reg_sr.c b/drivers/gpu/drm/xe/xe_reg_sr.c
> index 801f211fb733..51a40a9e532d 100644
> --- a/drivers/gpu/drm/xe/xe_reg_sr.c
> +++ b/drivers/gpu/drm/xe/xe_reg_sr.c
> @@ -163,7 +163,7 @@ static void apply_one_mmio(struct xe_gt *gt, struct xe_reg_sr_entry *entry)
>  	else if (entry->clr_bits + 1)
>  		val = (reg.mcr ?
>  		       xe_gt_mcr_unicast_read_any(gt, reg_mcr) :
> -		       xe_mmio_read32(gt, reg.reg)) & (~entry->clr_bits);
> +		       xe_mmio_read32(gt, reg)) & (~entry->clr_bits);
>  	else
>  		val = 0;
>  
> @@ -179,7 +179,7 @@ static void apply_one_mmio(struct xe_gt *gt, struct xe_reg_sr_entry *entry)
>  	if (entry->reg.mcr)
>  		xe_gt_mcr_multicast_write(gt, reg_mcr, val);
>  	else
> -		xe_mmio_write32(gt, reg.reg, val);
> +		xe_mmio_write32(gt, reg, val);
>  }
>  
>  void xe_reg_sr_apply_mmio(struct xe_reg_sr *sr, struct xe_gt *gt)
> @@ -232,15 +232,17 @@ void xe_reg_sr_apply_whitelist(struct xe_reg_sr *sr, u32 mmio_base,
>  	p = drm_debug_printer(KBUILD_MODNAME);
>  	xa_for_each(&sr->xa, reg, entry) {
>  		xe_reg_whitelist_print_entry(&p, 0, reg, entry);
> -		xe_mmio_write32(gt, RING_FORCE_TO_NONPRIV(mmio_base, slot).reg,
> +		xe_mmio_write32(gt, RING_FORCE_TO_NONPRIV(mmio_base, slot),
>  				reg | entry->set_bits);
>  		slot++;
>  	}
>  
>  	/* And clear the rest just in case of garbage */
> -	for (; slot < RING_MAX_NONPRIV_SLOTS; slot++)
> -		xe_mmio_write32(gt, RING_FORCE_TO_NONPRIV(mmio_base, slot).reg,
> -				RING_NOPID(mmio_base).reg);
> +	for (; slot < RING_MAX_NONPRIV_SLOTS; slot++) {
> +		u32 addr = RING_NOPID(mmio_base).reg;
> +
> +		xe_mmio_write32(gt, RING_FORCE_TO_NONPRIV(mmio_base, slot), addr);
> +	}
>  
>  	err = xe_force_wake_put(&gt->mmio.fw, XE_FORCEWAKE_ALL);
>  	XE_WARN_ON(err);
> diff --git a/drivers/gpu/drm/xe/xe_ring_ops.c b/drivers/gpu/drm/xe/xe_ring_ops.c
> index 75838b8bb9a8..733ed8a30c2e 100644
> --- a/drivers/gpu/drm/xe/xe_ring_ops.c
> +++ b/drivers/gpu/drm/xe/xe_ring_ops.c
> @@ -44,10 +44,11 @@ static u32 preparser_disable(bool state)
>  	return MI_ARB_CHECK | BIT(8) | state;
>  }
>  
> -static int emit_aux_table_inv(struct xe_gt *gt, u32 addr, u32 *dw, int i)
> +static int emit_aux_table_inv(struct xe_gt *gt, struct xe_reg reg,
> +			      u32 *dw, int i)
>  {
>  	dw[i++] = MI_LOAD_REGISTER_IMM(1) | MI_LRI_MMIO_REMAP_EN;
> -	dw[i++] = addr + gt->mmio.adj_offset;
> +	dw[i++] = reg.reg + gt->mmio.adj_offset;
>  	dw[i++] = AUX_INV;
>  	dw[i++] = MI_NOOP;
>  
> @@ -203,9 +204,9 @@ static void __emit_job_gen12_video(struct xe_sched_job *job, struct xe_lrc *lrc,
>  	/* hsdes: 1809175790 */
>  	if (!xe->info.has_flat_ccs) {
>  		if (decode)
> -			i = emit_aux_table_inv(gt, VD0_AUX_NV.reg, dw, i);
> +			i = emit_aux_table_inv(gt, VD0_AUX_NV, dw, i);
>  		else
> -			i = emit_aux_table_inv(gt, VE0_AUX_NV.reg, dw, i);
> +			i = emit_aux_table_inv(gt, VE0_AUX_NV, dw, i);
>  	}
>  	dw[i++] = preparser_disable(false);
>  
> @@ -248,7 +249,7 @@ static void __emit_job_gen12_render_compute(struct xe_sched_job *job,
>  
>  	/* hsdes: 1809175790 */
>  	if (!xe->info.has_flat_ccs)
> -		i = emit_aux_table_inv(gt, GFX_CCS_AUX_NV.reg, dw, i);
> +		i = emit_aux_table_inv(gt, GFX_CCS_AUX_NV, dw, i);
>  
>  	dw[i++] = preparser_disable(false);
>  
> diff --git a/drivers/gpu/drm/xe/xe_ttm_stolen_mgr.c b/drivers/gpu/drm/xe/xe_ttm_stolen_mgr.c
> index 9ce0a0585539..a3855870321f 100644
> --- a/drivers/gpu/drm/xe/xe_ttm_stolen_mgr.c
> +++ b/drivers/gpu/drm/xe/xe_ttm_stolen_mgr.c
> @@ -65,7 +65,7 @@ static s64 detect_bar2_dgfx(struct xe_device *xe, struct xe_ttm_stolen_mgr *mgr)
>  	}
>  
>  	/* Use DSM base address instead for stolen memory */
> -	mgr->stolen_base = xe_mmio_read64(gt, DSMBASE.reg) & BDSM_MASK;
> +	mgr->stolen_base = xe_mmio_read64(gt, DSMBASE) & BDSM_MASK;
>  	if (drm_WARN_ON(&xe->drm, vram_size < mgr->stolen_base))
>  		return 0;
>  
> @@ -88,7 +88,7 @@ static u32 detect_bar2_integrated(struct xe_device *xe, struct xe_ttm_stolen_mgr
>  	u32 stolen_size;
>  	u32 ggc, gms;
>  
> -	ggc = xe_mmio_read32(to_gt(xe), GGC.reg);
> +	ggc = xe_mmio_read32(to_gt(xe), GGC);
>  
>  	/* check GGMS, should be fixed 0x3 (8MB) */
>  	if (drm_WARN_ON(&xe->drm, (ggc & GGMS_MASK) != GGMS_MASK))
> diff --git a/drivers/gpu/drm/xe/xe_uc_fw.c b/drivers/gpu/drm/xe/xe_uc_fw.c
> index cd5433b5c970..5c3a571d2a29 100644
> --- a/drivers/gpu/drm/xe/xe_uc_fw.c
> +++ b/drivers/gpu/drm/xe/xe_uc_fw.c
> @@ -462,33 +462,33 @@ static int uc_fw_xfer(struct xe_uc_fw *uc_fw, u32 offset, u32 dma_flags)
>  
>  	/* Set the source address for the uCode */
>  	src_offset = uc_fw_ggtt_offset(uc_fw);
> -	xe_mmio_write32(gt, DMA_ADDR_0_LOW.reg, lower_32_bits(src_offset));
> -	xe_mmio_write32(gt, DMA_ADDR_0_HIGH.reg, upper_32_bits(src_offset));
> +	xe_mmio_write32(gt, DMA_ADDR_0_LOW, lower_32_bits(src_offset));
> +	xe_mmio_write32(gt, DMA_ADDR_0_HIGH, upper_32_bits(src_offset));
>  
>  	/* Set the DMA destination */
> -	xe_mmio_write32(gt, DMA_ADDR_1_LOW.reg, offset);
> -	xe_mmio_write32(gt, DMA_ADDR_1_HIGH.reg, DMA_ADDRESS_SPACE_WOPCM);
> +	xe_mmio_write32(gt, DMA_ADDR_1_LOW, offset);
> +	xe_mmio_write32(gt, DMA_ADDR_1_HIGH, DMA_ADDRESS_SPACE_WOPCM);
>  
>  	/*
>  	 * Set the transfer size. The header plus uCode will be copied to WOPCM
>  	 * via DMA, excluding any other components
>  	 */
> -	xe_mmio_write32(gt, DMA_COPY_SIZE.reg,
> +	xe_mmio_write32(gt, DMA_COPY_SIZE,
>  			sizeof(struct uc_css_header) + uc_fw->ucode_size);
>  
>  	/* Start the DMA */
> -	xe_mmio_write32(gt, DMA_CTRL.reg,
> +	xe_mmio_write32(gt, DMA_CTRL,
>  			_MASKED_BIT_ENABLE(dma_flags | START_DMA));
>  
>  	/* Wait for DMA to finish */
> -	ret = xe_mmio_wait32(gt, DMA_CTRL.reg, 0, START_DMA, 100000, &dma_ctrl,
> +	ret = xe_mmio_wait32(gt, DMA_CTRL, 0, START_DMA, 100000, &dma_ctrl,
>  			     false);
>  	if (ret)
>  		drm_err(&xe->drm, "DMA for %s fw failed, DMA_CTRL=%u\n",
>  			xe_uc_fw_type_repr(uc_fw->type), dma_ctrl);
>  
>  	/* Disable the bits once DMA is over */
> -	xe_mmio_write32(gt, DMA_CTRL.reg, _MASKED_BIT_DISABLE(dma_flags));
> +	xe_mmio_write32(gt, DMA_CTRL, _MASKED_BIT_DISABLE(dma_flags));
>  
>  	return ret;
>  }
> diff --git a/drivers/gpu/drm/xe/xe_wopcm.c b/drivers/gpu/drm/xe/xe_wopcm.c
> index 7b5014aea9c8..11eea970c207 100644
> --- a/drivers/gpu/drm/xe/xe_wopcm.c
> +++ b/drivers/gpu/drm/xe/xe_wopcm.c
> @@ -124,8 +124,8 @@ static bool __check_layout(struct xe_device *xe, u32 wopcm_size,
>  static bool __wopcm_regs_locked(struct xe_gt *gt,
>  				u32 *guc_wopcm_base, u32 *guc_wopcm_size)
>  {
> -	u32 reg_base = xe_mmio_read32(gt, DMA_GUC_WOPCM_OFFSET.reg);
> -	u32 reg_size = xe_mmio_read32(gt, GUC_WOPCM_SIZE.reg);
> +	u32 reg_base = xe_mmio_read32(gt, DMA_GUC_WOPCM_OFFSET);
> +	u32 reg_size = xe_mmio_read32(gt, GUC_WOPCM_SIZE);
>  
>  	if (!(reg_size & GUC_WOPCM_SIZE_LOCKED) ||
>  	    !(reg_base & GUC_WOPCM_OFFSET_VALID))
> @@ -152,13 +152,13 @@ static int __wopcm_init_regs(struct xe_device *xe, struct xe_gt *gt,
>  	XE_BUG_ON(size & ~GUC_WOPCM_SIZE_MASK);
>  
>  	mask = GUC_WOPCM_SIZE_MASK | GUC_WOPCM_SIZE_LOCKED;
> -	err = xe_mmio_write32_and_verify(gt, GUC_WOPCM_SIZE.reg, size, mask,
> +	err = xe_mmio_write32_and_verify(gt, GUC_WOPCM_SIZE, size, mask,
>  					 size | GUC_WOPCM_SIZE_LOCKED);
>  	if (err)
>  		goto err_out;
>  
>  	mask = GUC_WOPCM_OFFSET_MASK | GUC_WOPCM_OFFSET_VALID | huc_agent;
> -	err = xe_mmio_write32_and_verify(gt, DMA_GUC_WOPCM_OFFSET.reg,
> +	err = xe_mmio_write32_and_verify(gt, DMA_GUC_WOPCM_OFFSET,
>  					 base | huc_agent, mask,
>  					 base | huc_agent |
>  					 GUC_WOPCM_OFFSET_VALID);
> @@ -171,10 +171,10 @@ static int __wopcm_init_regs(struct xe_device *xe, struct xe_gt *gt,
>  	drm_notice(&xe->drm, "Failed to init uC WOPCM registers!\n");
>  	drm_notice(&xe->drm, "%s(%#x)=%#x\n", "DMA_GUC_WOPCM_OFFSET",
>  		   DMA_GUC_WOPCM_OFFSET.reg,
> -		   xe_mmio_read32(gt, DMA_GUC_WOPCM_OFFSET.reg));
> +		   xe_mmio_read32(gt, DMA_GUC_WOPCM_OFFSET));
>  	drm_notice(&xe->drm, "%s(%#x)=%#x\n", "GUC_WOPCM_SIZE",
>  		   GUC_WOPCM_SIZE.reg,
> -		   xe_mmio_read32(gt, GUC_WOPCM_SIZE.reg));
> +		   xe_mmio_read32(gt, GUC_WOPCM_SIZE));
>  
>  	return err;
>  }
> -- 
> 2.40.1
> 

^ permalink raw reply	[flat|nested] 12+ messages in thread
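
Stepping back from the hunks, the effect of the conversion on a call site is
easiest to see side by side. A minimal sketch, reusing DMA_CTRL from the hunk
above; the surrounding code is illustrative only, not taken from the driver:

	/* Before: the register offset is a bare u32, so an offset and a
	 * register value share a type and can be swapped at a call site
	 * without any diagnostic.
	 */
	u32 val = xe_mmio_read32(gt, DMA_CTRL.reg);
	xe_mmio_write32(gt, DMA_CTRL.reg, val);

	/* After: the register is a distinct struct xe_reg passed by value,
	 * so passing a plain u32 where a register is expected no longer
	 * compiles.
	 */
	u32 val = xe_mmio_read32(gt, DMA_CTRL);
	xe_mmio_write32(gt, DMA_CTRL, val);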

* Re: [Intel-xe] [PATCH v2 2/4] fixup! drm/xe/display: Implement display support
  2023-05-08 22:53 ` [Intel-xe] [PATCH v2 2/4] fixup! drm/xe/display: Implement display support Lucas De Marchi
@ 2023-05-09 15:26   ` Rodrigo Vivi
  2023-05-09 17:09     ` Lucas De Marchi
  0 siblings, 1 reply; 12+ messages in thread
From: Rodrigo Vivi @ 2023-05-09 15:26 UTC (permalink / raw)
  To: Lucas De Marchi; +Cc: intel-xe

On Mon, May 08, 2023 at 03:53:20PM -0700, Lucas De Marchi wrote:
> WARNING: This should only be squashed when the display implementation
> moves above commit "drm/xe/mmio: Use struct xe_reg".

I wonder if we should then try to move this patch under the display
instead of waiting for the next round of moving the display up...

Also, could we change the subject from fixup to future-fixup so that
a git autosquash doesn't try to move this before we are ready?

> 
> With the move of display above xe_reg conversion in xe_mmio,
> it should use the new types everywhere.
> 
> Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
> Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> ---
>  .../drm/xe/compat-i915-headers/intel_uncore.h | 103 +++++++++++++-----
>  1 file changed, 74 insertions(+), 29 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/compat-i915-headers/intel_uncore.h b/drivers/gpu/drm/xe/compat-i915-headers/intel_uncore.h
> index 90d79290a211..14f195fe275d 100644
> --- a/drivers/gpu/drm/xe/compat-i915-headers/intel_uncore.h
> +++ b/drivers/gpu/drm/xe/compat-i915-headers/intel_uncore.h
> @@ -17,82 +17,127 @@ static inline struct xe_gt *__fake_uncore_to_gt(struct fake_uncore *uncore)
>  	return to_gt(xe);
>  }
>  
> -static inline u32 intel_uncore_read(struct fake_uncore *uncore, i915_reg_t reg)
> +static inline u32 intel_uncore_read(struct fake_uncore *uncore,
> +				    i915_reg_t i915_reg)
>  {
> -	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg.reg);
> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> +
> +	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg);
>  }
>  
> -static inline u32 intel_uncore_read8(struct fake_uncore *uncore, i915_reg_t reg)
> +static inline u32 intel_uncore_read8(struct fake_uncore *uncore,
> +				     i915_reg_t i915_reg)
>  {
> -	return xe_mmio_read8(__fake_uncore_to_gt(uncore), reg.reg);
> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> +
> +	return xe_mmio_read8(__fake_uncore_to_gt(uncore), reg);
>  }
>  
> -static inline u64 intel_uncore_read64_2x32(struct fake_uncore *uncore, i915_reg_t lower_reg, i915_reg_t upper_reg)
> +static inline u64
> +intel_uncore_read64_2x32(struct fake_uncore *uncore,
> +			 i915_reg_t i915_lower_reg, i915_reg_t i915_upper_reg)
>  {
> +	struct xe_reg lower_reg = XE_REG(i915_mmio_reg_offset(i915_lower_reg));
> +	struct xe_reg upper_reg = XE_REG(i915_mmio_reg_offset(i915_upper_reg));
>  	u32 upper, lower, old_upper;
>  	int loop = 0;
>  
> -	upper = xe_mmio_read32(__fake_uncore_to_gt(uncore), upper_reg.reg);
> +	upper = xe_mmio_read32(__fake_uncore_to_gt(uncore), upper_reg);
>  	do {
>  		old_upper = upper;
> -		lower = xe_mmio_read32(__fake_uncore_to_gt(uncore), lower_reg.reg);
> -		upper = xe_mmio_read32(__fake_uncore_to_gt(uncore), upper_reg.reg);
> +		lower = xe_mmio_read32(__fake_uncore_to_gt(uncore), lower_reg);
> +		upper = xe_mmio_read32(__fake_uncore_to_gt(uncore), upper_reg);
>  	} while (upper != old_upper && loop++ < 2);
>  
>  	return (u64)upper << 32 | lower;
>  }
>  
> -static inline void intel_uncore_posting_read(struct fake_uncore *uncore, i915_reg_t reg)
> +static inline void intel_uncore_posting_read(struct fake_uncore *uncore,
> +					     i915_reg_t i915_reg)
>  {
> -	xe_mmio_read32(__fake_uncore_to_gt(uncore), reg.reg);
> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> +
> +	xe_mmio_read32(__fake_uncore_to_gt(uncore), reg);
>  }
>  
> -static inline void intel_uncore_write(struct fake_uncore *uncore, i915_reg_t reg, u32 val)
> +static inline void intel_uncore_write(struct fake_uncore *uncore,
> +				      i915_reg_t i915_reg, u32 val)
>  {
> -	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg.reg, val);
> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> +
> +	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg, val);
>  }
>  
> -static inline u32 intel_uncore_rmw(struct fake_uncore *uncore, i915_reg_t reg, u32 clear, u32 set)
> +static inline u32 intel_uncore_rmw(struct fake_uncore *uncore,
> +				   i915_reg_t i915_reg, u32 clear, u32 set)
>  {
> -	return xe_mmio_rmw32(__fake_uncore_to_gt(uncore), reg.reg, clear, set);
> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> +
> +	return xe_mmio_rmw32(__fake_uncore_to_gt(uncore), reg, clear, set);
>  }
>  
> -static inline int intel_wait_for_register(struct fake_uncore *uncore, i915_reg_t reg, u32 mask, u32 value, unsigned int timeout)
> +static inline int intel_wait_for_register(struct fake_uncore *uncore,
> +					  i915_reg_t i915_reg, u32 mask,
> +					  u32 value, unsigned int timeout)
>  {
> -	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg.reg, value, mask, timeout * USEC_PER_MSEC, NULL, false);
> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> +
> +	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg, value, mask,
> +			      timeout * USEC_PER_MSEC, NULL, false);
>  }
>  
> -static inline int intel_wait_for_register_fw(struct fake_uncore *uncore, i915_reg_t reg, u32 mask, u32 value, unsigned int timeout)
> +static inline int intel_wait_for_register_fw(struct fake_uncore *uncore,
> +					     i915_reg_t i915_reg, u32 mask,
> +					     u32 value, unsigned int timeout)
>  {
> -	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg.reg, value, mask, timeout * USEC_PER_MSEC, NULL, false);
> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> +
> +	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg, value, mask,
> +			      timeout * USEC_PER_MSEC, NULL, false);
>  }
>  
> -static inline int __intel_wait_for_register(struct fake_uncore *uncore, i915_reg_t reg, u32 mask, u32 value,
> -					    unsigned int fast_timeout_us, unsigned int slow_timeout_ms, u32 *out_value)
> +static inline int
> +__intel_wait_for_register(struct fake_uncore *uncore, i915_reg_t i915_reg,
> +			  u32 mask, u32 value, unsigned int fast_timeout_us,
> +			  unsigned int slow_timeout_ms, u32 *out_value)
>  {
> -	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg.reg, value, mask,
> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> +
> +	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg, value, mask,
>  			      fast_timeout_us + 1000 * slow_timeout_ms,
>  			      out_value, false);
>  }
>  
> -static inline u32 intel_uncore_read_fw(struct fake_uncore *uncore, i915_reg_t reg)
> +static inline u32 intel_uncore_read_fw(struct fake_uncore *uncore,
> +				       i915_reg_t i915_reg)
>  {
> -	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg.reg);
> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> +
> +	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg);
>  }
>  
> -static inline void intel_uncore_write_fw(struct fake_uncore *uncore, i915_reg_t reg, u32 val)
> +static inline void intel_uncore_write_fw(struct fake_uncore *uncore,
> +					 i915_reg_t i915_reg, u32 val)
>  {
> -	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg.reg, val);
> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> +
> +	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg, val);
>  }
>  
> -static inline u32 intel_uncore_read_notrace(struct fake_uncore *uncore, i915_reg_t reg)
> +static inline u32 intel_uncore_read_notrace(struct fake_uncore *uncore,
> +					    i915_reg_t i915_reg)
>  {
> -	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg.reg);
> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> +
> +	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg);
>  }
>  
> -static inline void intel_uncore_write_notrace(struct fake_uncore *uncore, i915_reg_t reg, u32 val)
> +static inline void intel_uncore_write_notrace(struct fake_uncore *uncore,
> +					      i915_reg_t i915_reg, u32 val)
>  {
> -	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg.reg, val);
> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> +
> +	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg, val);
>  }
>  
>  #endif /* __INTEL_UNCORE_H__ */
> -- 
> 2.40.1
> 

^ permalink raw reply	[flat|nested] 12+ messages in thread
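
The practical effect of the shim is that display call sites keep their i915
flavour and only the compat header translates the handle type. A minimal
usage sketch, with a hypothetical register address; _MMIO() and
i915_mmio_reg_offset() are the existing i915 helpers:

	/* Display-side code keeps passing i915-style handles... */
	i915_reg_t example_reg = _MMIO(0x45270);
	u32 val = intel_uncore_read(uncore, example_reg);

	/* ...while each wrapper rebuilds the xe handle on the fly:
	 *
	 *	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(example_reg));
	 *	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg);
	 */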

* Re: [Intel-xe] [PATCH v2 3/4] drm/xe: Rename reg field to addr
  2023-05-08 22:53 ` [Intel-xe] [PATCH v2 3/4] drm/xe: Rename reg field to addr Lucas De Marchi
@ 2023-05-09 15:27   ` Rodrigo Vivi
  0 siblings, 0 replies; 12+ messages in thread
From: Rodrigo Vivi @ 2023-05-09 15:27 UTC (permalink / raw)
  To: Lucas De Marchi; +Cc: intel-xe

On Mon, May 08, 2023 at 03:53:21PM -0700, Lucas De Marchi wrote:
> Rename the address field to "addr" rather than "reg" so it's easier to
> understand what it is.
> 
> Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
> Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

also feel free to convert this to

Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>


> ---
>  drivers/gpu/drm/xe/regs/xe_reg_defs.h  |  6 ++---
>  drivers/gpu/drm/xe/tests/xe_rtp_test.c |  2 +-
>  drivers/gpu/drm/xe/xe_force_wake.c     |  2 +-
>  drivers/gpu/drm/xe/xe_gt_mcr.c         |  2 +-
>  drivers/gpu/drm/xe/xe_guc.c            |  2 +-
>  drivers/gpu/drm/xe/xe_guc_ads.c        |  2 +-
>  drivers/gpu/drm/xe/xe_hw_engine.c      |  8 +++----
>  drivers/gpu/drm/xe/xe_irq.c            |  2 +-
>  drivers/gpu/drm/xe/xe_mmio.c           |  2 +-
>  drivers/gpu/drm/xe/xe_mmio.h           | 32 +++++++++++++-------------
>  drivers/gpu/drm/xe/xe_mocs.c           |  6 ++---
>  drivers/gpu/drm/xe/xe_pci.c            |  4 ++--
>  drivers/gpu/drm/xe/xe_reg_sr.c         |  6 ++---
>  drivers/gpu/drm/xe/xe_ring_ops.c       |  2 +-
>  drivers/gpu/drm/xe/xe_rtp.c            |  2 +-
>  drivers/gpu/drm/xe/xe_wopcm.c          |  4 ++--
>  16 files changed, 42 insertions(+), 42 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/regs/xe_reg_defs.h b/drivers/gpu/drm/xe/regs/xe_reg_defs.h
> index da781bc7bdc7..4554362ff4d9 100644
> --- a/drivers/gpu/drm/xe/regs/xe_reg_defs.h
> +++ b/drivers/gpu/drm/xe/regs/xe_reg_defs.h
> @@ -18,8 +18,8 @@
>  struct xe_reg {
>  	union {
>  		struct {
> -			/** @reg: address */
> -			u32 reg:22;
> +			/** @addr: address */
> +			u32 addr:22;
>  			/**
>  			 * @masked: register is "masked", with upper 16bits used
>  			 * to identify the bits that are updated on the lower
> @@ -71,7 +71,7 @@ struct xe_reg_mcr {
>   * object of the right type. However when initializing static const storage,
>   * where a compound statement is not allowed, this can be used instead.
>   */
> -#define XE_REG_INITIALIZER(r_, ...)    { .reg = r_, __VA_ARGS__ }
> +#define XE_REG_INITIALIZER(r_, ...)    { .addr = r_, __VA_ARGS__ }
>  
>  
>  /**
> diff --git a/drivers/gpu/drm/xe/tests/xe_rtp_test.c b/drivers/gpu/drm/xe/tests/xe_rtp_test.c
> index ad2fe8a39a78..4b2aac5ccf28 100644
> --- a/drivers/gpu/drm/xe/tests/xe_rtp_test.c
> +++ b/drivers/gpu/drm/xe/tests/xe_rtp_test.c
> @@ -244,7 +244,7 @@ static void xe_rtp_process_tests(struct kunit *test)
>  	xe_rtp_process(param->entries, reg_sr, &xe->gt[0], NULL);
>  
>  	xa_for_each(&reg_sr->xa, idx, sre) {
> -		if (idx == param->expected_reg.reg)
> +		if (idx == param->expected_reg.addr)
>  			sr_entry = sre;
>  
>  		count++;
> diff --git a/drivers/gpu/drm/xe/xe_force_wake.c b/drivers/gpu/drm/xe/xe_force_wake.c
> index 363b81c3d746..f0f0592fc598 100644
> --- a/drivers/gpu/drm/xe/xe_force_wake.c
> +++ b/drivers/gpu/drm/xe/xe_force_wake.c
> @@ -129,7 +129,7 @@ static int domain_sleep_wait(struct xe_gt *gt,
>  	for (tmp__ = (mask__); tmp__; tmp__ &= ~BIT(ffs(tmp__) - 1)) \
>  		for_each_if((domain__ = ((fw__)->domains + \
>  					 (ffs(tmp__) - 1))) && \
> -					 domain__->reg_ctl.reg)
> +					 domain__->reg_ctl.addr)
>  
>  int xe_force_wake_get(struct xe_force_wake *fw,
>  		      enum xe_force_wake_domains domains)
> diff --git a/drivers/gpu/drm/xe/xe_gt_mcr.c b/drivers/gpu/drm/xe/xe_gt_mcr.c
> index c6b9e9869fee..3db550c85e32 100644
> --- a/drivers/gpu/drm/xe/xe_gt_mcr.c
> +++ b/drivers/gpu/drm/xe/xe_gt_mcr.c
> @@ -398,7 +398,7 @@ static bool xe_gt_mcr_get_nonterminated_steering(struct xe_gt *gt,
>  	 */
>  	drm_WARN(&gt_to_xe(gt)->drm, true,
>  		 "Did not find MCR register %#x in any MCR steering table\n",
> -		 reg.reg);
> +		 reg.addr);
>  	*group = 0;
>  	*instance = 0;
>  
> diff --git a/drivers/gpu/drm/xe/xe_guc.c b/drivers/gpu/drm/xe/xe_guc.c
> index e8a126ad400f..eb4af4c71124 100644
> --- a/drivers/gpu/drm/xe/xe_guc.c
> +++ b/drivers/gpu/drm/xe/xe_guc.c
> @@ -713,7 +713,7 @@ int xe_guc_mmio_send_recv(struct xe_guc *guc, const u32 *request,
>  		response_buf[0] = header;
>  
>  		for (i = 1; i < VF_SW_FLAG_COUNT; i++) {
> -			reply_reg.reg += i * sizeof(u32);
> +			reply_reg.addr += i * sizeof(u32);
>  			response_buf[i] = xe_mmio_read32(gt, reply_reg);
>  		}
>  	}
> diff --git a/drivers/gpu/drm/xe/xe_guc_ads.c b/drivers/gpu/drm/xe/xe_guc_ads.c
> index 683f2df09c49..6d550d746909 100644
> --- a/drivers/gpu/drm/xe/xe_guc_ads.c
> +++ b/drivers/gpu/drm/xe/xe_guc_ads.c
> @@ -426,7 +426,7 @@ static void guc_mmio_regset_write_one(struct xe_guc_ads *ads,
>  				      unsigned int n_entry)
>  {
>  	struct guc_mmio_reg entry = {
> -		.offset = reg.reg,
> +		.offset = reg.addr,
>  		.flags = reg.masked ? GUC_REGSET_MASKED : 0,
>  	};
>  
> diff --git a/drivers/gpu/drm/xe/xe_hw_engine.c b/drivers/gpu/drm/xe/xe_hw_engine.c
> index 5e275aff8974..696b9d949163 100644
> --- a/drivers/gpu/drm/xe/xe_hw_engine.c
> +++ b/drivers/gpu/drm/xe/xe_hw_engine.c
> @@ -236,20 +236,20 @@ static void hw_engine_fini(struct drm_device *drm, void *arg)
>  static void hw_engine_mmio_write32(struct xe_hw_engine *hwe, struct xe_reg reg,
>  				   u32 val)
>  {
> -	XE_BUG_ON(reg.reg & hwe->mmio_base);
> +	XE_BUG_ON(reg.addr & hwe->mmio_base);
>  	xe_force_wake_assert_held(gt_to_fw(hwe->gt), hwe->domain);
>  
> -	reg.reg += hwe->mmio_base;
> +	reg.addr += hwe->mmio_base;
>  
>  	xe_mmio_write32(hwe->gt, reg, val);
>  }
>  
>  static u32 hw_engine_mmio_read32(struct xe_hw_engine *hwe, struct xe_reg reg)
>  {
> -	XE_BUG_ON(reg.reg & hwe->mmio_base);
> +	XE_BUG_ON(reg.addr & hwe->mmio_base);
>  	xe_force_wake_assert_held(gt_to_fw(hwe->gt), hwe->domain);
>  
> -	reg.reg += hwe->mmio_base;
> +	reg.addr += hwe->mmio_base;
>  
>  	return xe_mmio_read32(hwe->gt, reg);
>  }
> diff --git a/drivers/gpu/drm/xe/xe_irq.c b/drivers/gpu/drm/xe/xe_irq.c
> index 7aa245792927..5bf359c81cc5 100644
> --- a/drivers/gpu/drm/xe/xe_irq.c
> +++ b/drivers/gpu/drm/xe/xe_irq.c
> @@ -36,7 +36,7 @@ static void assert_iir_is_zero(struct xe_gt *gt, struct xe_reg reg)
>  
>  	drm_WARN(&gt_to_xe(gt)->drm, 1,
>  		 "Interrupt register 0x%x is not zero: 0x%08x\n",
> -		 reg.reg, val);
> +		 reg.addr, val);
>  	xe_mmio_write32(gt, reg, 0xffffffff);
>  	xe_mmio_read32(gt, reg);
>  	xe_mmio_write32(gt, reg, 0xffffffff);
> diff --git a/drivers/gpu/drm/xe/xe_mmio.c b/drivers/gpu/drm/xe/xe_mmio.c
> index 0e91004fa06d..c7fbb1cc1f64 100644
> --- a/drivers/gpu/drm/xe/xe_mmio.c
> +++ b/drivers/gpu/drm/xe/xe_mmio.c
> @@ -421,7 +421,7 @@ int xe_mmio_ioctl(struct drm_device *dev, void *data,
>  		unsigned int i;
>  
>  		for (i = 0; i < ARRAY_SIZE(mmio_read_whitelist); i++) {
> -			if (mmio_read_whitelist[i].reg == args->addr) {
> +			if (mmio_read_whitelist[i].addr == args->addr) {
>  				allowed = true;
>  				break;
>  			}
> diff --git a/drivers/gpu/drm/xe/xe_mmio.h b/drivers/gpu/drm/xe/xe_mmio.h
> index 821701f8ada6..01732ff7e4c6 100644
> --- a/drivers/gpu/drm/xe/xe_mmio.h
> +++ b/drivers/gpu/drm/xe/xe_mmio.h
> @@ -20,27 +20,27 @@ int xe_mmio_init(struct xe_device *xe);
>  
>  static inline u8 xe_mmio_read8(struct xe_gt *gt, struct xe_reg reg)
>  {
> -	if (reg.reg < gt->mmio.adj_limit)
> -		reg.reg += gt->mmio.adj_offset;
> +	if (reg.addr < gt->mmio.adj_limit)
> +		reg.addr += gt->mmio.adj_offset;
>  
> -	return readb(gt->mmio.regs + reg.reg);
> +	return readb(gt->mmio.regs + reg.addr);
>  }
>  
>  static inline void xe_mmio_write32(struct xe_gt *gt,
>  				   struct xe_reg reg, u32 val)
>  {
> -	if (reg.reg < gt->mmio.adj_limit)
> -		reg.reg += gt->mmio.adj_offset;
> +	if (reg.addr < gt->mmio.adj_limit)
> +		reg.addr += gt->mmio.adj_offset;
>  
> -	writel(val, gt->mmio.regs + reg.reg);
> +	writel(val, gt->mmio.regs + reg.addr);
>  }
>  
>  static inline u32 xe_mmio_read32(struct xe_gt *gt, struct xe_reg reg)
>  {
> -	if (reg.reg < gt->mmio.adj_limit)
> -		reg.reg += gt->mmio.adj_offset;
> +	if (reg.addr < gt->mmio.adj_limit)
> +		reg.addr += gt->mmio.adj_offset;
>  
> -	return readl(gt->mmio.regs + reg.reg);
> +	return readl(gt->mmio.regs + reg.addr);
>  }
>  
>  static inline u32 xe_mmio_rmw32(struct xe_gt *gt, struct xe_reg reg, u32 clr,
> @@ -58,18 +58,18 @@ static inline u32 xe_mmio_rmw32(struct xe_gt *gt, struct xe_reg reg, u32 clr,
>  static inline void xe_mmio_write64(struct xe_gt *gt,
>  				   struct xe_reg reg, u64 val)
>  {
> -	if (reg.reg < gt->mmio.adj_limit)
> -		reg.reg += gt->mmio.adj_offset;
> +	if (reg.addr < gt->mmio.adj_limit)
> +		reg.addr += gt->mmio.adj_offset;
>  
> -	writeq(val, gt->mmio.regs + reg.reg);
> +	writeq(val, gt->mmio.regs + reg.addr);
>  }
>  
>  static inline u64 xe_mmio_read64(struct xe_gt *gt, struct xe_reg reg)
>  {
> -	if (reg.reg < gt->mmio.adj_limit)
> -		reg.reg += gt->mmio.adj_offset;
> +	if (reg.addr < gt->mmio.adj_limit)
> +		reg.addr += gt->mmio.adj_offset;
>  
> -	return readq(gt->mmio.regs + reg.reg);
> +	return readq(gt->mmio.regs + reg.addr);
>  }
>  
>  static inline int xe_mmio_write32_and_verify(struct xe_gt *gt,
> @@ -127,7 +127,7 @@ int xe_mmio_ioctl(struct drm_device *dev, void *data,
>  static inline bool xe_mmio_in_range(const struct xe_mmio_range *range,
>  				    struct xe_reg reg)
>  {
> -	return range && reg.reg >= range->start && reg.reg <= range->end;
> +	return range && reg.addr >= range->start && reg.addr <= range->end;
>  }
>  
>  int xe_mmio_probe_vram(struct xe_device *xe);
> diff --git a/drivers/gpu/drm/xe/xe_mocs.c b/drivers/gpu/drm/xe/xe_mocs.c
> index 1175dec5d90b..5698df87aba7 100644
> --- a/drivers/gpu/drm/xe/xe_mocs.c
> +++ b/drivers/gpu/drm/xe/xe_mocs.c
> @@ -478,7 +478,7 @@ static void __init_mocs_table(struct xe_gt *gt,
>  	     i < info->n_entries ? (mocs = get_entry_control(info, i)), 1 : 0;
>  	     i++) {
>  		struct xe_reg reg = XE_REG(addr + i * 4);
> -		mocs_dbg(&gt->xe->drm, "%d 0x%x 0x%x\n", i, reg.reg, mocs);
> +		mocs_dbg(&gt->xe->drm, "%d 0x%x 0x%x\n", i, reg.addr, mocs);
>  		xe_mmio_write32(gt, reg, mocs);
>  	}
>  }
> @@ -513,7 +513,7 @@ static void init_l3cc_table(struct xe_gt *gt,
>  	     (l3cc = l3cc_combine(get_entry_l3cc(info, 2 * i),
>  				  get_entry_l3cc(info, 2 * i + 1))), 1 : 0;
>  	     i++) {
> -		mocs_dbg(&gt->xe->drm, "%d 0x%x 0x%x\n", i, LNCFCMOCS(i).reg,
> +		mocs_dbg(&gt->xe->drm, "%d 0x%x 0x%x\n", i, LNCFCMOCS(i).addr,
>  			 l3cc);
>  		xe_mmio_write32(gt, LNCFCMOCS(i), l3cc);
>  	}
> @@ -540,7 +540,7 @@ void xe_mocs_init(struct xe_gt *gt)
>  	mocs_dbg(&gt->xe->drm, "flag:0x%x\n", flags);
>  
>  	if (flags & HAS_GLOBAL_MOCS)
> -		__init_mocs_table(gt, &table, GLOBAL_MOCS(0).reg);
> +		__init_mocs_table(gt, &table, GLOBAL_MOCS(0).addr);
>  
>  	/*
>  	 * Initialize the L3CC table as part of mocs initalization to make
> diff --git a/drivers/gpu/drm/xe/xe_pci.c b/drivers/gpu/drm/xe/xe_pci.c
> index 855cf8557056..a6858fc7fe8d 100644
> --- a/drivers/gpu/drm/xe/xe_pci.c
> +++ b/drivers/gpu/drm/xe/xe_pci.c
> @@ -442,7 +442,7 @@ static void handle_gmdid(struct xe_device *xe,
>  {
>  	u32 ver;
>  
> -	ver = peek_gmdid(xe, GMD_ID.reg);
> +	ver = peek_gmdid(xe, GMD_ID.addr);
>  	for (int i = 0; i < ARRAY_SIZE(graphics_ip_map); i++) {
>  		if (ver == graphics_ip_map[i].ver) {
>  			xe->info.graphics_verx100 = ver;
> @@ -457,7 +457,7 @@ static void handle_gmdid(struct xe_device *xe,
>  			ver / 100, ver % 100);
>  	}
>  
> -	ver = peek_gmdid(xe, GMD_ID.reg + 0x380000);
> +	ver = peek_gmdid(xe, GMD_ID.addr + 0x380000);
>  	for (int i = 0; i < ARRAY_SIZE(media_ip_map); i++) {
>  		if (ver == media_ip_map[i].ver) {
>  			xe->info.media_verx100 = ver;
> diff --git a/drivers/gpu/drm/xe/xe_reg_sr.c b/drivers/gpu/drm/xe/xe_reg_sr.c
> index 51a40a9e532d..0312823101ad 100644
> --- a/drivers/gpu/drm/xe/xe_reg_sr.c
> +++ b/drivers/gpu/drm/xe/xe_reg_sr.c
> @@ -93,7 +93,7 @@ static void reg_sr_inc_error(struct xe_reg_sr *sr)
>  int xe_reg_sr_add(struct xe_reg_sr *sr,
>  		  const struct xe_reg_sr_entry *e)
>  {
> -	unsigned long idx = e->reg.reg;
> +	unsigned long idx = e->reg.addr;
>  	struct xe_reg_sr_entry *pentry = xa_load(&sr->xa, idx);
>  	int ret;
>  
> @@ -174,7 +174,7 @@ static void apply_one_mmio(struct xe_gt *gt, struct xe_reg_sr_entry *entry)
>  	 */
>  	val |= entry->set_bits;
>  
> -	drm_dbg(&xe->drm, "REG[0x%x] = 0x%08x", reg.reg, val);
> +	drm_dbg(&xe->drm, "REG[0x%x] = 0x%08x", reg.addr, val);
>  
>  	if (entry->reg.mcr)
>  		xe_gt_mcr_multicast_write(gt, reg_mcr, val);
> @@ -239,7 +239,7 @@ void xe_reg_sr_apply_whitelist(struct xe_reg_sr *sr, u32 mmio_base,
>  
>  	/* And clear the rest just in case of garbage */
>  	for (; slot < RING_MAX_NONPRIV_SLOTS; slot++) {
> -		u32 addr = RING_NOPID(mmio_base).reg;
> +		u32 addr = RING_NOPID(mmio_base).addr;
>  
>  		xe_mmio_write32(gt, RING_FORCE_TO_NONPRIV(mmio_base, slot), addr);
>  	}
> diff --git a/drivers/gpu/drm/xe/xe_ring_ops.c b/drivers/gpu/drm/xe/xe_ring_ops.c
> index 733ed8a30c2e..74c1b5dfbaee 100644
> --- a/drivers/gpu/drm/xe/xe_ring_ops.c
> +++ b/drivers/gpu/drm/xe/xe_ring_ops.c
> @@ -48,7 +48,7 @@ static int emit_aux_table_inv(struct xe_gt *gt, struct xe_reg reg,
>  			      u32 *dw, int i)
>  {
>  	dw[i++] = MI_LOAD_REGISTER_IMM(1) | MI_LRI_MMIO_REMAP_EN;
> -	dw[i++] = reg.reg + gt->mmio.adj_offset;
> +	dw[i++] = reg.addr + gt->mmio.adj_offset;
>  	dw[i++] = AUX_INV;
>  	dw[i++] = MI_NOOP;
>  
> diff --git a/drivers/gpu/drm/xe/xe_rtp.c b/drivers/gpu/drm/xe/xe_rtp.c
> index f2a0e8eb4936..0c6a23e14a71 100644
> --- a/drivers/gpu/drm/xe/xe_rtp.c
> +++ b/drivers/gpu/drm/xe/xe_rtp.c
> @@ -101,7 +101,7 @@ static void rtp_add_sr_entry(const struct xe_rtp_action *action,
>  		.read_mask = action->read_mask,
>  	};
>  
> -	sr_entry.reg.reg += mmio_base;
> +	sr_entry.reg.addr += mmio_base;
>  	xe_reg_sr_add(sr, &sr_entry);
>  }
>  
> diff --git a/drivers/gpu/drm/xe/xe_wopcm.c b/drivers/gpu/drm/xe/xe_wopcm.c
> index 11eea970c207..35fde8965bca 100644
> --- a/drivers/gpu/drm/xe/xe_wopcm.c
> +++ b/drivers/gpu/drm/xe/xe_wopcm.c
> @@ -170,10 +170,10 @@ static int __wopcm_init_regs(struct xe_device *xe, struct xe_gt *gt,
>  err_out:
>  	drm_notice(&xe->drm, "Failed to init uC WOPCM registers!\n");
>  	drm_notice(&xe->drm, "%s(%#x)=%#x\n", "DMA_GUC_WOPCM_OFFSET",
> -		   DMA_GUC_WOPCM_OFFSET.reg,
> +		   DMA_GUC_WOPCM_OFFSET.addr,
>  		   xe_mmio_read32(gt, DMA_GUC_WOPCM_OFFSET));
>  	drm_notice(&xe->drm, "%s(%#x)=%#x\n", "GUC_WOPCM_SIZE",
> -		   GUC_WOPCM_SIZE.reg,
> +		   GUC_WOPCM_SIZE.addr,
>  		   xe_mmio_read32(gt, GUC_WOPCM_SIZE));
>  
>  	return err;
> -- 
> 2.40.1
> 

^ permalink raw reply	[flat|nested] 12+ messages in thread
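
One detail of the xe_reg_defs.h hunk worth spelling out is why
XE_REG_INITIALIZER() exists alongside XE_REG(). Per the comment in the hunk,
XE_REG() yields an object of the right type (presumably via a compound
literal), which cannot appear in a static const initializer, while the plain
brace form can. A minimal sketch of the distinction; the 0x1234 address is a
placeholder:

	/* Inside a function, the compound-literal form is fine: */
	struct xe_reg reg = XE_REG(0x1234);

	/* For static const storage only the brace initializer works,
	 * expanding to { .addr = 0x1234 }:
	 */
	static const struct xe_reg example_reg = XE_REG_INITIALIZER(0x1234);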

* Re: [Intel-xe] [PATCH v2 2/4] fixup! drm/xe/display: Implement display support
  2023-05-09 15:26   ` Rodrigo Vivi
@ 2023-05-09 17:09     ` Lucas De Marchi
  2023-05-09 17:16       ` Rodrigo Vivi
  0 siblings, 1 reply; 12+ messages in thread
From: Lucas De Marchi @ 2023-05-09 17:09 UTC (permalink / raw)
  To: Rodrigo Vivi; +Cc: intel-xe

On Tue, May 09, 2023 at 11:26:56AM -0400, Rodrigo Vivi wrote:
>On Mon, May 08, 2023 at 03:53:20PM -0700, Lucas De Marchi wrote:
>> WARNING: This should only be squashed when the display implementation
>> moves above commit "drm/xe/mmio: Use struct xe_reg".
>
>I wonder if we should then try to move this patch under the display

that is the v1 of the patch

>instead of waiting for the next round of moving the display up...

but that then means the build will be broken for all the commits between
where display currently sits and the previous commit, rather than just one
commit.

I think the next display move will be messy as there are commits in the
middle that depend on the display being down. My attempt to move it up
last week
(https://gitlab.freedesktop.org/demarchi/xe/-/tree/tip-display-rebase)
led to a bigger squash at the end because leaving some commits behind
didn't make sense and adding them on top didn't look good either.

Question: is display in an acceptable enough state now that we can stop
doing this and just leave it behind the build config? Maybe just move it
once more and stop doing that? Another option is to accept that the display
move is painful enough and maintain it with just a single commit on top.

Lucas De Marchi

>
>Also, could we change the subject from fixup to future-fixup so that
>a git autosquash doesn't try to move this before we are ready?
>
>>
>> With the move of display above xe_reg conversion in xe_mmio,
>> it should use the new types everywhere.
>>
>> Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
>> Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
>> ---
>>  .../drm/xe/compat-i915-headers/intel_uncore.h | 103 +++++++++++++-----
>>  1 file changed, 74 insertions(+), 29 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/compat-i915-headers/intel_uncore.h b/drivers/gpu/drm/xe/compat-i915-headers/intel_uncore.h
>> index 90d79290a211..14f195fe275d 100644
>> --- a/drivers/gpu/drm/xe/compat-i915-headers/intel_uncore.h
>> +++ b/drivers/gpu/drm/xe/compat-i915-headers/intel_uncore.h
>> @@ -17,82 +17,127 @@ static inline struct xe_gt *__fake_uncore_to_gt(struct fake_uncore *uncore)
>>  	return to_gt(xe);
>>  }
>>
>> -static inline u32 intel_uncore_read(struct fake_uncore *uncore, i915_reg_t reg)
>> +static inline u32 intel_uncore_read(struct fake_uncore *uncore,
>> +				    i915_reg_t i915_reg)
>>  {
>> -	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg.reg);
>> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
>> +
>> +	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg);
>>  }
>>
>> -static inline u32 intel_uncore_read8(struct fake_uncore *uncore, i915_reg_t reg)
>> +static inline u32 intel_uncore_read8(struct fake_uncore *uncore,
>> +				     i915_reg_t i915_reg)
>>  {
>> -	return xe_mmio_read8(__fake_uncore_to_gt(uncore), reg.reg);
>> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
>> +
>> +	return xe_mmio_read8(__fake_uncore_to_gt(uncore), reg);
>>  }
>>
>> -static inline u64 intel_uncore_read64_2x32(struct fake_uncore *uncore, i915_reg_t lower_reg, i915_reg_t upper_reg)
>> +static inline u64
>> +intel_uncore_read64_2x32(struct fake_uncore *uncore,
>> +			 i915_reg_t i915_lower_reg, i915_reg_t i915_upper_reg)
>>  {
>> +	struct xe_reg lower_reg = XE_REG(i915_mmio_reg_offset(i915_lower_reg));
>> +	struct xe_reg upper_reg = XE_REG(i915_mmio_reg_offset(i915_upper_reg));
>>  	u32 upper, lower, old_upper;
>>  	int loop = 0;
>>
>> -	upper = xe_mmio_read32(__fake_uncore_to_gt(uncore), upper_reg.reg);
>> +	upper = xe_mmio_read32(__fake_uncore_to_gt(uncore), upper_reg);
>>  	do {
>>  		old_upper = upper;
>> -		lower = xe_mmio_read32(__fake_uncore_to_gt(uncore), lower_reg.reg);
>> -		upper = xe_mmio_read32(__fake_uncore_to_gt(uncore), upper_reg.reg);
>> +		lower = xe_mmio_read32(__fake_uncore_to_gt(uncore), lower_reg);
>> +		upper = xe_mmio_read32(__fake_uncore_to_gt(uncore), upper_reg);
>>  	} while (upper != old_upper && loop++ < 2);
>>
>>  	return (u64)upper << 32 | lower;
>>  }
>>
>> -static inline void intel_uncore_posting_read(struct fake_uncore *uncore, i915_reg_t reg)
>> +static inline void intel_uncore_posting_read(struct fake_uncore *uncore,
>> +					     i915_reg_t i915_reg)
>>  {
>> -	xe_mmio_read32(__fake_uncore_to_gt(uncore), reg.reg);
>> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
>> +
>> +	xe_mmio_read32(__fake_uncore_to_gt(uncore), reg);
>>  }
>>
>> -static inline void intel_uncore_write(struct fake_uncore *uncore, i915_reg_t reg, u32 val)
>> +static inline void intel_uncore_write(struct fake_uncore *uncore,
>> +				      i915_reg_t i915_reg, u32 val)
>>  {
>> -	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg.reg, val);
>> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
>> +
>> +	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg, val);
>>  }
>>
>> -static inline u32 intel_uncore_rmw(struct fake_uncore *uncore, i915_reg_t reg, u32 clear, u32 set)
>> +static inline u32 intel_uncore_rmw(struct fake_uncore *uncore,
>> +				   i915_reg_t i915_reg, u32 clear, u32 set)
>>  {
>> -	return xe_mmio_rmw32(__fake_uncore_to_gt(uncore), reg.reg, clear, set);
>> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
>> +
>> +	return xe_mmio_rmw32(__fake_uncore_to_gt(uncore), reg, clear, set);
>>  }
>>
>> -static inline int intel_wait_for_register(struct fake_uncore *uncore, i915_reg_t reg, u32 mask, u32 value, unsigned int timeout)
>> +static inline int intel_wait_for_register(struct fake_uncore *uncore,
>> +					  i915_reg_t i915_reg, u32 mask,
>> +					  u32 value, unsigned int timeout)
>>  {
>> -	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg.reg, value, mask, timeout * USEC_PER_MSEC, NULL, false);
>> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
>> +
>> +	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg, value, mask,
>> +			      timeout * USEC_PER_MSEC, NULL, false);
>>  }
>>
>> -static inline int intel_wait_for_register_fw(struct fake_uncore *uncore, i915_reg_t reg, u32 mask, u32 value, unsigned int timeout)
>> +static inline int intel_wait_for_register_fw(struct fake_uncore *uncore,
>> +					     i915_reg_t i915_reg, u32 mask,
>> +					     u32 value, unsigned int timeout)
>>  {
>> -	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg.reg, value, mask, timeout * USEC_PER_MSEC, NULL, false);
>> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
>> +
>> +	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg, value, mask,
>> +			      timeout * USEC_PER_MSEC, NULL, false);
>>  }
>>
>> -static inline int __intel_wait_for_register(struct fake_uncore *uncore, i915_reg_t reg, u32 mask, u32 value,
>> -					    unsigned int fast_timeout_us, unsigned int slow_timeout_ms, u32 *out_value)
>> +static inline int
>> +__intel_wait_for_register(struct fake_uncore *uncore, i915_reg_t i915_reg,
>> +			  u32 mask, u32 value, unsigned int fast_timeout_us,
>> +			  unsigned int slow_timeout_ms, u32 *out_value)
>>  {
>> -	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg.reg, value, mask,
>> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
>> +
>> +	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg, value, mask,
>>  			      fast_timeout_us + 1000 * slow_timeout_ms,
>>  			      out_value, false);
>>  }
>>
>> -static inline u32 intel_uncore_read_fw(struct fake_uncore *uncore, i915_reg_t reg)
>> +static inline u32 intel_uncore_read_fw(struct fake_uncore *uncore,
>> +				       i915_reg_t i915_reg)
>>  {
>> -	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg.reg);
>> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
>> +
>> +	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg);
>>  }
>>
>> -static inline void intel_uncore_write_fw(struct fake_uncore *uncore, i915_reg_t reg, u32 val)
>> +static inline void intel_uncore_write_fw(struct fake_uncore *uncore,
>> +					 i915_reg_t i915_reg, u32 val)
>>  {
>> -	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg.reg, val);
>> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
>> +
>> +	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg, val);
>>  }
>>
>> -static inline u32 intel_uncore_read_notrace(struct fake_uncore *uncore, i915_reg_t reg)
>> +static inline u32 intel_uncore_read_notrace(struct fake_uncore *uncore,
>> +					    i915_reg_t i915_reg)
>>  {
>> -	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg.reg);
>> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
>> +
>> +	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg);
>>  }
>>
>> -static inline void intel_uncore_write_notrace(struct fake_uncore *uncore, i915_reg_t reg, u32 val)
>> +static inline void intel_uncore_write_notrace(struct fake_uncore *uncore,
>> +					      i915_reg_t i915_reg, u32 val)
>>  {
>> -	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg.reg, val);
>> +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
>> +
>> +	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg, val);
>>  }
>>
>>  #endif /* __INTEL_UNCORE_H__ */
>> --
>> 2.40.1
>>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [Intel-xe] [PATCH v2 2/4] fixup! drm/xe/display: Implement display support
  2023-05-09 17:09     ` Lucas De Marchi
@ 2023-05-09 17:16       ` Rodrigo Vivi
  0 siblings, 0 replies; 12+ messages in thread
From: Rodrigo Vivi @ 2023-05-09 17:16 UTC (permalink / raw)
  To: Lucas De Marchi, Jani Nikula; +Cc: intel-xe, Rodrigo Vivi

On Tue, May 09, 2023 at 10:09:08AM -0700, Lucas De Marchi wrote:
> On Tue, May 09, 2023 at 11:26:56AM -0400, Rodrigo Vivi wrote:
> > On Mon, May 08, 2023 at 03:53:20PM -0700, Lucas De Marchi wrote:
> > > WARNING: This should only be squashed when the display implementation
> > > moves above commit "drm/xe/mmio: Use struct xe_reg".
> > 
> > I wonder if we should then try to move this patch under the display
> 
> that is the v1 of the patch
> 
> > instead of waiting for the next round of moving the display up...
> 
> but that then means the build will be broken for all the commits between
> where display currently sits and the previous commit, rather than just one
> commit.
> 
> I think the next display move will be messy as there are commits in the
> middle that depend on the display being down. My attempt to move it up
> last week
> (https://gitlab.freedesktop.org/demarchi/xe/-/tree/tip-display-rebase)
> led to a bigger squash at the end because leaving some commits behind
> didn't make sense and adding them on top didn't look good either.
> 
> Question: is display in an acceptable enough state now that we can stop
> doing this and just leave it behind the build config? Maybe just move it
> once more and stop doing that? Another option is to accept that the display
> move is painful enough and maintain it with just a single commit on top.

Jani, what are your thoughts on this?

> 
> Lucas De Marchi
> 
> > 
> > Also, could we change the subject from fixup to future-fixup so that
> > a git autosquash doesn't try to move this before we are ready?
> > 
> > > 
> > > With the move of display above xe_reg conversion in xe_mmio,
> > > it should use the new types everywhere.
> > > 
> > > Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
> > > Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > > ---
> > >  .../drm/xe/compat-i915-headers/intel_uncore.h | 103 +++++++++++++-----
> > >  1 file changed, 74 insertions(+), 29 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/xe/compat-i915-headers/intel_uncore.h b/drivers/gpu/drm/xe/compat-i915-headers/intel_uncore.h
> > > index 90d79290a211..14f195fe275d 100644
> > > --- a/drivers/gpu/drm/xe/compat-i915-headers/intel_uncore.h
> > > +++ b/drivers/gpu/drm/xe/compat-i915-headers/intel_uncore.h
> > > @@ -17,82 +17,127 @@ static inline struct xe_gt *__fake_uncore_to_gt(struct fake_uncore *uncore)
> > >  	return to_gt(xe);
> > >  }
> > > 
> > > -static inline u32 intel_uncore_read(struct fake_uncore *uncore, i915_reg_t reg)
> > > +static inline u32 intel_uncore_read(struct fake_uncore *uncore,
> > > +				    i915_reg_t i915_reg)
> > >  {
> > > -	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg.reg);
> > > +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> > > +
> > > +	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg);
> > >  }
> > > 
> > > -static inline u32 intel_uncore_read8(struct fake_uncore *uncore, i915_reg_t reg)
> > > +static inline u32 intel_uncore_read8(struct fake_uncore *uncore,
> > > +				     i915_reg_t i915_reg)
> > >  {
> > > -	return xe_mmio_read8(__fake_uncore_to_gt(uncore), reg.reg);
> > > +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> > > +
> > > +	return xe_mmio_read8(__fake_uncore_to_gt(uncore), reg);
> > >  }
> > > 
> > > -static inline u64 intel_uncore_read64_2x32(struct fake_uncore *uncore, i915_reg_t lower_reg, i915_reg_t upper_reg)
> > > +static inline u64
> > > +intel_uncore_read64_2x32(struct fake_uncore *uncore,
> > > +			 i915_reg_t i915_lower_reg, i915_reg_t i915_upper_reg)
> > >  {
> > > +	struct xe_reg lower_reg = XE_REG(i915_mmio_reg_offset(i915_lower_reg));
> > > +	struct xe_reg upper_reg = XE_REG(i915_mmio_reg_offset(i915_upper_reg));
> > >  	u32 upper, lower, old_upper;
> > >  	int loop = 0;
> > > 
> > > -	upper = xe_mmio_read32(__fake_uncore_to_gt(uncore), upper_reg.reg);
> > > +	upper = xe_mmio_read32(__fake_uncore_to_gt(uncore), upper_reg);
> > >  	do {
> > >  		old_upper = upper;
> > > -		lower = xe_mmio_read32(__fake_uncore_to_gt(uncore), lower_reg.reg);
> > > -		upper = xe_mmio_read32(__fake_uncore_to_gt(uncore), upper_reg.reg);
> > > +		lower = xe_mmio_read32(__fake_uncore_to_gt(uncore), lower_reg);
> > > +		upper = xe_mmio_read32(__fake_uncore_to_gt(uncore), upper_reg);
> > >  	} while (upper != old_upper && loop++ < 2);
> > > 
> > >  	return (u64)upper << 32 | lower;
> > >  }
> > > 
> > > -static inline void intel_uncore_posting_read(struct fake_uncore *uncore, i915_reg_t reg)
> > > +static inline void intel_uncore_posting_read(struct fake_uncore *uncore,
> > > +					     i915_reg_t i915_reg)
> > >  {
> > > -	xe_mmio_read32(__fake_uncore_to_gt(uncore), reg.reg);
> > > +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> > > +
> > > +	xe_mmio_read32(__fake_uncore_to_gt(uncore), reg);
> > >  }
> > > 
> > > -static inline void intel_uncore_write(struct fake_uncore *uncore, i915_reg_t reg, u32 val)
> > > +static inline void intel_uncore_write(struct fake_uncore *uncore,
> > > +				      i915_reg_t i915_reg, u32 val)
> > >  {
> > > -	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg.reg, val);
> > > +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> > > +
> > > +	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg, val);
> > >  }
> > > 
> > > -static inline u32 intel_uncore_rmw(struct fake_uncore *uncore, i915_reg_t reg, u32 clear, u32 set)
> > > +static inline u32 intel_uncore_rmw(struct fake_uncore *uncore,
> > > +				   i915_reg_t i915_reg, u32 clear, u32 set)
> > >  {
> > > -	return xe_mmio_rmw32(__fake_uncore_to_gt(uncore), reg.reg, clear, set);
> > > +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> > > +
> > > +	return xe_mmio_rmw32(__fake_uncore_to_gt(uncore), reg, clear, set);
> > >  }
> > > 
> > > -static inline int intel_wait_for_register(struct fake_uncore *uncore, i915_reg_t reg, u32 mask, u32 value, unsigned int timeout)
> > > +static inline int intel_wait_for_register(struct fake_uncore *uncore,
> > > +					  i915_reg_t i915_reg, u32 mask,
> > > +					  u32 value, unsigned int timeout)
> > >  {
> > > -	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg.reg, value, mask, timeout * USEC_PER_MSEC, NULL, false);
> > > +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> > > +
> > > +	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg, value, mask,
> > > +			      timeout * USEC_PER_MSEC, NULL, false);
> > >  }
> > > 
> > > -static inline int intel_wait_for_register_fw(struct fake_uncore *uncore, i915_reg_t reg, u32 mask, u32 value, unsigned int timeout)
> > > +static inline int intel_wait_for_register_fw(struct fake_uncore *uncore,
> > > +					     i915_reg_t i915_reg, u32 mask,
> > > +					     u32 value, unsigned int timeout)
> > >  {
> > > -	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg.reg, value, mask, timeout * USEC_PER_MSEC, NULL, false);
> > > +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> > > +
> > > +	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg, value, mask,
> > > +			      timeout * USEC_PER_MSEC, NULL, false);
> > >  }
> > > 
> > > -static inline int __intel_wait_for_register(struct fake_uncore *uncore, i915_reg_t reg, u32 mask, u32 value,
> > > -					    unsigned int fast_timeout_us, unsigned int slow_timeout_ms, u32 *out_value)
> > > +static inline int
> > > +__intel_wait_for_register(struct fake_uncore *uncore, i915_reg_t i915_reg,
> > > +			  u32 mask, u32 value, unsigned int fast_timeout_us,
> > > +			  unsigned int slow_timeout_ms, u32 *out_value)
> > >  {
> > > -	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg.reg, value, mask,
> > > +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> > > +
> > > +	return xe_mmio_wait32(__fake_uncore_to_gt(uncore), reg, value, mask,
> > >  			      fast_timeout_us + 1000 * slow_timeout_ms,
> > >  			      out_value, false);
> > >  }
> > > 
> > > -static inline u32 intel_uncore_read_fw(struct fake_uncore *uncore, i915_reg_t reg)
> > > +static inline u32 intel_uncore_read_fw(struct fake_uncore *uncore,
> > > +				       i915_reg_t i915_reg)
> > >  {
> > > -	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg.reg);
> > > +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> > > +
> > > +	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg);
> > >  }
> > > 
> > > -static inline void intel_uncore_write_fw(struct fake_uncore *uncore, i915_reg_t reg, u32 val)
> > > +static inline void intel_uncore_write_fw(struct fake_uncore *uncore,
> > > +					 i915_reg_t i915_reg, u32 val)
> > >  {
> > > -	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg.reg, val);
> > > +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> > > +
> > > +	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg, val);
> > >  }
> > > 
> > > -static inline u32 intel_uncore_read_notrace(struct fake_uncore *uncore, i915_reg_t reg)
> > > +static inline u32 intel_uncore_read_notrace(struct fake_uncore *uncore,
> > > +					    i915_reg_t i915_reg)
> > >  {
> > > -	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg.reg);
> > > +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> > > +
> > > +	return xe_mmio_read32(__fake_uncore_to_gt(uncore), reg);
> > >  }
> > > 
> > > -static inline void intel_uncore_write_notrace(struct fake_uncore *uncore, i915_reg_t reg, u32 val)
> > > +static inline void intel_uncore_write_notrace(struct fake_uncore *uncore,
> > > +					      i915_reg_t i915_reg, u32 val)
> > >  {
> > > -	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg.reg, val);
> > > +	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));
> > > +
> > > +	xe_mmio_write32(__fake_uncore_to_gt(uncore), reg, val);
> > >  }
> > > 
> > >  #endif /* __INTEL_UNCORE_H__ */
> > > --
> > > 2.40.1
> > > 

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [Intel-xe] [PATCH v2 0/4] Convert xe_mmio to struct xe_reg
  2023-05-08 22:53 [Intel-xe] [PATCH v2 0/4] Convert xe_mmio to struct xe_reg Lucas De Marchi
                   ` (4 preceding siblings ...)
  2023-05-08 22:56 ` [Intel-xe] ✓ CI.Patch_applied: success for Convert xe_mmio to struct xe_reg (rev2) Patchwork
@ 2023-05-09 20:01 ` Lucas De Marchi
  5 siblings, 0 replies; 12+ messages in thread
From: Lucas De Marchi @ 2023-05-09 20:01 UTC (permalink / raw)
  To: intel-xe; +Cc: Rodrigo Vivi

On Mon, May 08, 2023 at 03:53:18PM -0700, Lucas De Marchi wrote:
>Now that struct xe_reg is in place, convert xe_mmio to use it so we
>avoid mistakes of passing the wrong argument.
>
>v2:
>  - First 2 patches from v1 already applied
>  - Drop controversial patch, "drm/xe: Use media base for GMD_ID access"
>  - Rebase on latest force pushes with display refactors
>
>Lucas De Marchi (4):
>  drm/xe/mmio: Use struct xe_reg
>  fixup! drm/xe/display: Implement display support
>  drm/xe: Rename reg field to addr
>  drm/xe: Fix indent in xe_hw_engine_print_state()


all patches are now applied. Let's figure out next how to handle the move of
the display to the top.

thanks

Lucas De Marchi

>
> .../drm/xe/compat-i915-headers/intel_uncore.h | 103 +++++++++----
> drivers/gpu/drm/xe/regs/xe_reg_defs.h         |   6 +-
> drivers/gpu/drm/xe/tests/xe_rtp_test.c        |   2 +-
> drivers/gpu/drm/xe/xe_device.c                |   2 +-
> drivers/gpu/drm/xe/xe_execlist.c              |  18 +--
> drivers/gpu/drm/xe/xe_force_wake.c            |  25 ++--
> drivers/gpu/drm/xe/xe_force_wake_types.h      |   6 +-
> drivers/gpu/drm/xe/xe_ggtt.c                  |   6 +-
> drivers/gpu/drm/xe/xe_gt.c                    |   4 +-
> drivers/gpu/drm/xe/xe_gt_clock.c              |   6 +-
> drivers/gpu/drm/xe/xe_gt_mcr.c                |  39 ++---
> drivers/gpu/drm/xe/xe_gt_topology.c           |  18 +--
> drivers/gpu/drm/xe/xe_guc.c                   |  61 ++++----
> drivers/gpu/drm/xe/xe_guc_ads.c               |   5 +-
> drivers/gpu/drm/xe/xe_guc_pc.c                |  32 ++--
> drivers/gpu/drm/xe/xe_guc_types.h             |   3 +-
> drivers/gpu/drm/xe/xe_huc.c                   |   4 +-
> drivers/gpu/drm/xe/xe_hw_engine.c             | 103 +++++++------
> drivers/gpu/drm/xe/xe_irq.c                   | 140 +++++++++---------
> drivers/gpu/drm/xe/xe_mmio.c                  |  33 +++--
> drivers/gpu/drm/xe/xe_mmio.h                  |  55 +++----
> drivers/gpu/drm/xe/xe_mocs.c                  |  11 +-
> drivers/gpu/drm/xe/xe_pat.c                   |  14 +-
> drivers/gpu/drm/xe/xe_pci.c                   |   4 +-
> drivers/gpu/drm/xe/xe_pcode.c                 |  16 +-
> drivers/gpu/drm/xe/xe_reg_sr.c                |  18 ++-
> drivers/gpu/drm/xe/xe_ring_ops.c              |  11 +-
> drivers/gpu/drm/xe/xe_rtp.c                   |   2 +-
> drivers/gpu/drm/xe/xe_ttm_stolen_mgr.c        |   4 +-
> drivers/gpu/drm/xe/xe_uc_fw.c                 |  16 +-
> drivers/gpu/drm/xe/xe_wopcm.c                 |  16 +-
> 31 files changed, 429 insertions(+), 354 deletions(-)
>
>-- 
>2.40.1
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread

Thread overview: 12+ messages
2023-05-08 22:53 [Intel-xe] [PATCH v2 0/4] Convert xe_mmio to struct xe_reg Lucas De Marchi
2023-05-08 22:53 ` [Intel-xe] [PATCH v2 1/4] drm/xe/mmio: Use " Lucas De Marchi
2023-05-09 15:24   ` Rodrigo Vivi
2023-05-08 22:53 ` [Intel-xe] [PATCH v2 2/4] fixup! drm/xe/display: Implement display support Lucas De Marchi
2023-05-09 15:26   ` Rodrigo Vivi
2023-05-09 17:09     ` Lucas De Marchi
2023-05-09 17:16       ` Rodrigo Vivi
2023-05-08 22:53 ` [Intel-xe] [PATCH v2 3/4] drm/xe: Rename reg field to addr Lucas De Marchi
2023-05-09 15:27   ` Rodrigo Vivi
2023-05-08 22:53 ` [Intel-xe] [PATCH v2 4/4] drm/xe: Fix indent in xe_hw_engine_print_state() Lucas De Marchi
2023-05-08 22:56 ` [Intel-xe] ✓ CI.Patch_applied: success for Convert xe_mmio to struct xe_reg (rev2) Patchwork
2023-05-09 20:01 ` [Intel-xe] [PATCH v2 0/4] Convert xe_mmio to struct xe_reg Lucas De Marchi
