* [PATCH 0/1] [rfc] Port UVD from radeon for SI
@ 2017-09-01 15:30 Trevor Davenport
       [not found] ` <20170901153008.22075-1-trevor.davenport-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
  0 siblings, 1 reply; 2+ messages in thread
From: Trevor Davenport @ 2017-09-01 15:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Trevor Davenport

Here is my attempt at porting UVD from radeon to amdgpu for SI devices.

First, a caveat: this does not work yet. I started with the UVD code for
CIK and worked backwards to arrive at this point.  I've gone back over
most of it against radeon and it looks correct as far as I can tell. It
currently loads the UVD firmware but fails the initial ring test.  It was
a bit unclear to me how/when CIK calls uvd_v4_2_start and I don't have
the hardware to test on CIK myself.  Radeon calls uvd_v1_0_start before
the ring test, but calling uvd_v3_1_start before the ring test here
results in it failing to start even after trying a reset.  Any ideas on
what could be wrong would be greatly appreciated.

This does require a new firmware file.  Since amdgpu only accepts
firmware files that carry the common header, I have added one to the
existing TAHITI_uvd.bin.  I verified that my header is correct by adding
the new firmware to radeon and confirming that UVD initializes correctly
with it.  I have uploaded a copy to my Google Drive if anyone wants it;
see the link below.

I'm testing on a Pitcairn device (R9 270X).  Any feedback is welcome.  I
don't know if I will be able to do much more, since there isn't any open
documentation for this besides the radeon code, but I'll likely keep
poking at it a bit longer.

You can find it on my GitHub branch amdgpu-si-uvd:
https://github.com/tdaven/linux-amdgpu

tahiti_uvd.bin (should be placed in /lib/firmware/radeon):
https://drive.google.com/file/d/0B9a9RNUDRbKFZU5Hd0VseUhKaGc/view?usp=sharing


Trevor Davenport (1):
  drm/amdgpu/uvd3: Initial port of UVD from radeon for SI.

 drivers/gpu/drm/amd/amdgpu/Makefile                |   3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c            |  14 +
 drivers/gpu/drm/amd/amdgpu/si.c                    | 256 ++++++-
 drivers/gpu/drm/amd/amdgpu/si_ih.c                 |   6 +
 drivers/gpu/drm/amd/amdgpu/sid.h                   |  45 +-
 drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c              | 832 +++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/uvd_v3_1.h              |  29 +
 .../gpu/drm/amd/include/asic_reg/uvd/uvd_3_1_d.h   |  91 +++
 .../drm/amd/include/asic_reg/uvd/uvd_3_1_sh_mask.h |  65 ++
 9 files changed, 1290 insertions(+), 51 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/uvd_v3_1.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/uvd/uvd_3_1_d.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/uvd/uvd_3_1_sh_mask.h

-- 
2.13.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


* [PATCH 1/1] drm/amdgpu/uvd3: Initial port of UVD from radeon for SI.
       [not found] ` <20170901153008.22075-1-trevor.davenport-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
@ 2017-09-01 15:30   ` Trevor Davenport
  0 siblings, 0 replies; 2+ messages in thread
From: Trevor Davenport @ 2017-09-01 15:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Trevor Davenport

---
 drivers/gpu/drm/amd/amdgpu/Makefile                |   3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c            |  14 +
 drivers/gpu/drm/amd/amdgpu/si.c                    | 256 ++++++-
 drivers/gpu/drm/amd/amdgpu/si_ih.c                 |   6 +
 drivers/gpu/drm/amd/amdgpu/sid.h                   |  45 +-
 drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c              | 832 +++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/uvd_v3_1.h              |  29 +
 .../gpu/drm/amd/include/asic_reg/uvd/uvd_3_1_d.h   |  91 +++
 .../drm/amd/include/asic_reg/uvd/uvd_3_1_sh_mask.h |  65 ++
 9 files changed, 1290 insertions(+), 51 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/uvd_v3_1.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/uvd/uvd_3_1_d.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/uvd/uvd_3_1_sh_mask.h

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index 658bac0cdc5e..be59d6a7bc28 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -32,7 +32,8 @@ amdgpu-$(CONFIG_DRM_AMDGPU_CIK)+= cik.o cik_ih.o kv_smc.o kv_dpm.o \
 	ci_smc.o ci_dpm.o dce_v8_0.o gfx_v7_0.o cik_sdma.o uvd_v4_2.o vce_v2_0.o \
 	amdgpu_amdkfd_gfx_v7.o
 
-amdgpu-$(CONFIG_DRM_AMDGPU_SI)+= si.o gmc_v6_0.o gfx_v6_0.o si_ih.o si_dma.o dce_v6_0.o si_dpm.o si_smc.o
+amdgpu-$(CONFIG_DRM_AMDGPU_SI)+= si.o gmc_v6_0.o gfx_v6_0.o uvd_v3_1.o si_ih.o \
+	si_dma.o dce_v6_0.o si_dpm.o si_smc.o
 
 amdgpu-y += \
 	vi.o mxgpu_vi.o nbio_v6_1.o soc15.o mxgpu_ai.o nbio_v7_0.o
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
index e19928dae8e3..dc2fec5d6fa7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
@@ -52,6 +52,9 @@
 #define FW_1_66_16	((1 << 24) | (66 << 16) | (16 << 8))
 
 /* Firmware Names */
+#ifdef CONFIG_DRM_AMDGPU_SI
+#define FIRMWARE_TAHITI	"radeon/tahiti_uvd.bin"
+#endif
 #ifdef CONFIG_DRM_AMDGPU_CIK
 #define FIRMWARE_BONAIRE	"radeon/bonaire_uvd.bin"
 #define FIRMWARE_KABINI	"radeon/kabini_uvd.bin"
@@ -94,6 +97,9 @@ struct amdgpu_uvd_cs_ctx {
 	unsigned *buf_sizes;
 };
 
+#ifdef CONFIG_DRM_AMDGPU_SI
+MODULE_FIRMWARE(FIRMWARE_TAHITI);
+#endif
 #ifdef CONFIG_DRM_AMDGPU_CIK
 MODULE_FIRMWARE(FIRMWARE_BONAIRE);
 MODULE_FIRMWARE(FIRMWARE_KABINI);
@@ -126,6 +132,14 @@ int amdgpu_uvd_sw_init(struct amdgpu_device *adev)
 	INIT_DELAYED_WORK(&adev->uvd.idle_work, amdgpu_uvd_idle_work_handler);
 
 	switch (adev->asic_type) {
+#ifdef CONFIG_DRM_AMDGPU_SI
+	case CHIP_TAHITI:
+	case CHIP_PITCAIRN:
+	case CHIP_VERDE:
+	case CHIP_OLAND:
+		fw_name = FIRMWARE_TAHITI;
+		break;
+#endif
 #ifdef CONFIG_DRM_AMDGPU_CIK
 	case CHIP_BONAIRE:
 		fw_name = FIRMWARE_BONAIRE;
diff --git a/drivers/gpu/drm/amd/amdgpu/si.c b/drivers/gpu/drm/amd/amdgpu/si.c
index 8284d5dbfc30..700cbe23fd6a 100644
--- a/drivers/gpu/drm/amd/amdgpu/si.c
+++ b/drivers/gpu/drm/amd/amdgpu/si.c
@@ -38,13 +38,15 @@
 #include "gmc_v6_0.h"
 #include "si_dma.h"
 #include "dce_v6_0.h"
+#include "uvd_v3_1.h"
 #include "si.h"
 #include "dce_virtual.h"
 #include "gca/gfx_6_0_d.h"
 #include "oss/oss_1_0_d.h"
+#include "oss/oss_1_0_sh_mask.h"
 #include "gmc/gmc_6_0_d.h"
 #include "dce/dce_6_0_d.h"
-#include "uvd/uvd_4_0_d.h"
+#include "uvd/uvd_3_1_d.h"
 #include "bif/bif_3_0_d.h"
 
 static const u32 tahiti_golden_registers[] =
@@ -970,6 +972,28 @@ static void si_smc_wreg(struct amdgpu_device *adev, u32 reg, u32 v)
 	spin_unlock_irqrestore(&adev->smc_idx_lock, flags);
 }
 
+static u32 si_uvd_ctx_rreg(struct amdgpu_device *adev, u32 reg)
+{
+	unsigned long flags;
+	u32 r;
+
+	spin_lock_irqsave(&adev->uvd_ctx_idx_lock, flags);
+	WREG32(mmUVD_CTX_INDEX, ((reg) & 0x1ff));
+	r = RREG32(mmUVD_CTX_DATA);
+	spin_unlock_irqrestore(&adev->uvd_ctx_idx_lock, flags);
+	return r;
+}
+
+static void si_uvd_ctx_wreg(struct amdgpu_device *adev, u32 reg, u32 v)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&adev->uvd_ctx_idx_lock, flags);
+	WREG32(mmUVD_CTX_INDEX, ((reg) & 0x1ff));
+	WREG32(mmUVD_CTX_DATA, (v));
+	spin_unlock_irqrestore(&adev->uvd_ctx_idx_lock, flags);
+}
+
 static struct amdgpu_allowed_register_entry si_allowed_read_registers[] = {
 	{GRBM_STATUS},
 	{GB_ADDR_CONFIG},
@@ -1218,10 +1242,230 @@ static u32 si_get_xclk(struct amdgpu_device *adev)
 	return reference_clock;
 }
 
-//xxx:not implemented
+static unsigned si_uvd_calc_upll_post_div(unsigned vco_freq,
+					  unsigned target_freq,
+					  unsigned pd_min,
+					  unsigned pd_even)
+{
+	unsigned post_div = vco_freq / target_freq;
+
+	/* adjust to post divider minimum value */
+	if (post_div < pd_min)
+		post_div = pd_min;
+
+	/* we always need a frequency less than or equal to the target */
+	if ((vco_freq / post_div) > target_freq)
+		post_div += 1;
+
+	/* post dividers above a certain value must be even */
+	if (post_div > pd_even && post_div % 2)
+		post_div += 1;
+
+	return post_div;
+}
+
+/**
+ * si_uvd_calc_upll_dividers - calc UPLL clock dividers
+ *
+ * @adev: amdgpu_device pointer
+ * @vclk: wanted VCLK
+ * @dclk: wanted DCLK
+ * @vco_min: minimum VCO frequency
+ * @vco_max: maximum VCO frequency
+ * @fb_factor: factor to multiply vco freq with
+ * @fb_mask: limit and bitmask for feedback divider
+ * @pd_min: post divider minimum
+ * @pd_max: post divider maximum
+ * @pd_even: post divider must be even above this value
+ * @optimal_fb_div: resulting feedback divider
+ * @optimal_vclk_div: resulting vclk post divider
+ * @optimal_dclk_div: resulting dclk post divider
+ *
+ * Calculate dividers for UVD's UPLL (R6xx-SI, except APUs).
+ * Returns zero on success, -EINVAL on error.
+ */
+static int si_uvd_calc_upll_dividers(struct amdgpu_device *adev,
+				     unsigned vclk, unsigned dclk,
+				     unsigned vco_min, unsigned vco_max,
+				     unsigned fb_factor, unsigned fb_mask,
+				     unsigned pd_min, unsigned pd_max,
+				     unsigned pd_even,
+				     unsigned *optimal_fb_div,
+				     unsigned *optimal_vclk_div,
+				     unsigned *optimal_dclk_div)
+{
+	unsigned vco_freq, ref_freq = adev->clock.spll.reference_freq;
+
+	/* start off with something large */
+	unsigned optimal_score = ~0;
+
+	/* loop through vco from low to high */
+	vco_min = max(max(vco_min, vclk), dclk);
+	for (vco_freq = vco_min; vco_freq <= vco_max; vco_freq += 100) {
+
+		uint64_t fb_div = (uint64_t)vco_freq * fb_factor;
+		unsigned vclk_div, dclk_div, score;
+
+		do_div(fb_div, ref_freq);
+
+		/* fb div out of range ? */
+		if (fb_div > fb_mask)
+			break; /* it can only get worse */
+
+		fb_div &= fb_mask;
+
+		/* calc vclk divider with current vco freq */
+		vclk_div = si_uvd_calc_upll_post_div(vco_freq, vclk,
+						     pd_min, pd_even);
+		if (vclk_div > pd_max)
+			break; /* vco is too big, it has to stop */
+
+		/* calc dclk divider with current vco freq */
+		dclk_div = si_uvd_calc_upll_post_div(vco_freq, dclk,
+						     pd_min, pd_even);
+		if (dclk_div > pd_max)
+			break; /* vco is too big, it has to stop */
+
+		/* calc score with current vco freq */
+		score = vclk - (vco_freq / vclk_div) + dclk - (vco_freq / dclk_div);
+
+		/* determine if this vco setting is better than current optimal settings */
+		if (score < optimal_score) {
+			*optimal_fb_div = fb_div;
+			*optimal_vclk_div = vclk_div;
+			*optimal_dclk_div = dclk_div;
+			optimal_score = score;
+			if (optimal_score == 0)
+				break; /* it can't get better than this */
+		}
+	}
+
+	/* did we find a valid setup ? */
+	if (optimal_score == ~0)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int si_uvd_send_upll_ctlreq(struct amdgpu_device *adev,
+				   unsigned cg_upll_func_cntl)
+{
+	unsigned i;
+
+	/* make sure UPLL_CTLREQ is deasserted */
+	WREG32_P(cg_upll_func_cntl, 0, ~UPLL_CTLREQ_MASK);
+
+	mdelay(10);
+
+	/* assert UPLL_CTLREQ */
+	WREG32_P(cg_upll_func_cntl, UPLL_CTLREQ_MASK, ~UPLL_CTLREQ_MASK);
+
+	/* wait for CTLACK and CTLACK2 to get asserted */
+	for (i = 0; i < 100; ++i) {
+		uint32_t mask = UPLL_CTLACK_MASK | UPLL_CTLACK2_MASK;
+		if ((RREG32(cg_upll_func_cntl) & mask) == mask)
+			break;
+		mdelay(10);
+	}
+
+	/* deassert UPLL_CTLREQ */
+	WREG32_P(cg_upll_func_cntl, 0, ~UPLL_CTLREQ_MASK);
+
+	if (i == 100) {
+		DRM_ERROR("Timeout setting UVD clocks!\n");
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
 static int si_set_uvd_clocks(struct amdgpu_device *adev, u32 vclk, u32 dclk)
 {
-	return 0;
+	unsigned fb_div = 0, vclk_div = 0, dclk_div = 0;
+	int r;
+
+	/* bypass vclk and dclk with bclk */
+	WREG32_P(CG_UPLL_FUNC_CNTL_2,
+		 VCLK_SRC_SEL(1) | DCLK_SRC_SEL(1),
+		 ~(VCLK_SRC_SEL_MASK | DCLK_SRC_SEL_MASK));
+
+	/* put PLL in bypass mode */
+	WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_BYPASS_EN_MASK, ~UPLL_BYPASS_EN_MASK);
+
+	if (!vclk || !dclk) {
+		/* keep the Bypass mode */
+		return 0;
+	}
+
+	r = si_uvd_calc_upll_dividers(adev, vclk, dclk, 125000, 250000,
+				      16384, 0x03FFFFFF, 0, 128, 5,
+				      &fb_div, &vclk_div, &dclk_div);
+	if (r)
+		return r;
+
+	/* set RESET_ANTI_MUX to 0 */
+	WREG32_P(CG_UPLL_FUNC_CNTL_5, 0, ~RESET_ANTI_MUX_MASK);
+
+	/* set VCO_MODE to 1 */
+	WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_VCO_MODE_MASK, ~UPLL_VCO_MODE_MASK);
+
+	/* disable sleep mode */
+	WREG32_P(CG_UPLL_FUNC_CNTL, 0, ~UPLL_SLEEP_MASK);
+
+	/* deassert UPLL_RESET */
+	WREG32_P(CG_UPLL_FUNC_CNTL, 0, ~UPLL_RESET_MASK);
+
+	mdelay(1);
+
+	r = si_uvd_send_upll_ctlreq(adev, CG_UPLL_FUNC_CNTL);
+	if (r)
+		return r;
+
+	/* assert UPLL_RESET again */
+	WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_RESET_MASK, ~UPLL_RESET_MASK);
+
+	/* disable spread spectrum. */
+	WREG32_P(CG_UPLL_SPREAD_SPECTRUM, 0, ~SSEN_MASK);
+
+	/* set feedback divider */
+	WREG32_P(CG_UPLL_FUNC_CNTL_3, UPLL_FB_DIV(fb_div), ~UPLL_FB_DIV_MASK);
+
+	/* set ref divider to 0 */
+	WREG32_P(CG_UPLL_FUNC_CNTL, 0, ~UPLL_REF_DIV_MASK);
+
+	if (fb_div < 307200)
+		WREG32_P(CG_UPLL_FUNC_CNTL_4, 0, ~UPLL_SPARE_ISPARE9);
+	else
+		WREG32_P(CG_UPLL_FUNC_CNTL_4, UPLL_SPARE_ISPARE9, ~UPLL_SPARE_ISPARE9);
+
+	/* set PDIV_A and PDIV_B */
+	WREG32_P(CG_UPLL_FUNC_CNTL_2,
+		 UPLL_PDIV_A(vclk_div) | UPLL_PDIV_B(dclk_div),
+		 ~(UPLL_PDIV_A_MASK | UPLL_PDIV_B_MASK));
+
+	/* give the PLL some time to settle */
+	mdelay(15);
+
+	/* deassert PLL_RESET */
+	WREG32_P(CG_UPLL_FUNC_CNTL, 0, ~UPLL_RESET_MASK);
+
+	mdelay(15);
+
+	/* switch from bypass mode to normal mode */
+	WREG32_P(CG_UPLL_FUNC_CNTL, 0, ~UPLL_BYPASS_EN_MASK);
+
+	r = si_uvd_send_upll_ctlreq(adev, CG_UPLL_FUNC_CNTL);
+	if (r)
+		return r;
+
+	/* switch VCLK and DCLK selection */
+	WREG32_P(CG_UPLL_FUNC_CNTL_2,
+		 VCLK_SRC_SEL(2) | DCLK_SRC_SEL(2),
+		 ~(VCLK_SRC_SEL_MASK | DCLK_SRC_SEL_MASK));
+
+	mdelay(100);
+
+	return 0;
 }
 
 static void si_detect_hw_virtualization(struct amdgpu_device *adev)
@@ -1259,8 +1503,8 @@ static int si_common_early_init(void *handle)
 	adev->pcie_wreg = &si_pcie_wreg;
 	adev->pciep_rreg = &si_pciep_rreg;
 	adev->pciep_wreg = &si_pciep_wreg;
-	adev->uvd_ctx_rreg = NULL;
-	adev->uvd_ctx_wreg = NULL;
+	adev->uvd_ctx_rreg = &si_uvd_ctx_rreg;
+	adev->uvd_ctx_wreg = &si_uvd_ctx_wreg;
 	adev->didt_rreg = NULL;
 	adev->didt_wreg = NULL;
 
@@ -1969,7 +2213,7 @@ int si_set_ip_blocks(struct amdgpu_device *adev)
 			amdgpu_ip_block_add(adev, &dce_v6_0_ip_block);
 		amdgpu_ip_block_add(adev, &gfx_v6_0_ip_block);
 		amdgpu_ip_block_add(adev, &si_dma_ip_block);
-		/* amdgpu_ip_block_add(adev, &uvd_v3_1_ip_block); */
+		amdgpu_ip_block_add(adev, &uvd_v3_1_ip_block);
 		/* amdgpu_ip_block_add(adev, &vce_v1_0_ip_block); */
 		break;
 	case CHIP_OLAND:
diff --git a/drivers/gpu/drm/amd/amdgpu/si_ih.c b/drivers/gpu/drm/amd/amdgpu/si_ih.c
index ce25e03a077d..32b629458e2d 100644
--- a/drivers/gpu/drm/amd/amdgpu/si_ih.c
+++ b/drivers/gpu/drm/amd/amdgpu/si_ih.c
@@ -26,6 +26,12 @@
 #include "sid.h"
 #include "si_ih.h"
 
+#include "bif/bif_3_0_d.h"
+#include "bif/bif_3_0_sh_mask.h"
+
+#include "oss/oss_1_0_d.h"
+#include "oss/oss_1_0_sh_mask.h"
+
 static void si_ih_set_interrupt_funcs(struct amdgpu_device *adev);
 
 static void si_ih_enable_interrupts(struct amdgpu_device *adev)
diff --git a/drivers/gpu/drm/amd/amdgpu/sid.h b/drivers/gpu/drm/amd/amdgpu/sid.h
index c57eff159374..cf329d7595eb 100644
--- a/drivers/gpu/drm/amd/amdgpu/sid.h
+++ b/drivers/gpu/drm/amd/amdgpu/sid.h
@@ -1619,33 +1619,9 @@
 #       define LC_SET_QUIESCE                             (1 << 13)
 
 /*
- * UVD
- */
-#define UVD_UDEC_ADDR_CONFIG				0x3bd3
-#define UVD_UDEC_DB_ADDR_CONFIG				0x3bd4
-#define UVD_UDEC_DBW_ADDR_CONFIG			0x3bd5
-#define UVD_RBC_RB_RPTR					0x3da4
-#define UVD_RBC_RB_WPTR					0x3da5
-#define UVD_STATUS					0x3daf
-
-#define	UVD_CGC_CTRL					0x3dc2
-#	define DCM					(1 << 0)
-#	define CG_DT(x)					((x) << 2)
-#	define CG_DT_MASK				(0xf << 2)
-#	define CLK_OD(x)				((x) << 6)
-#	define CLK_OD_MASK				(0x1f << 6)
-
- /* UVD CTX indirect */
-#define	UVD_CGC_MEM_CTRL				0xC0
-#define	UVD_CGC_CTRL2					0xC1
-#	define DYN_OR_EN				(1 << 0)
-#	define DYN_RR_EN				(1 << 1)
-#	define G_DIV_ID(x)				((x) << 2)
-#	define G_DIV_ID_MASK				(0x7 << 2)
-
-/*
  * PM4
  */
+#define RADEON_PACKET_TYPE0 0
 #define PACKET0(reg, n)	((RADEON_PACKET_TYPE0 << 30) |			\
 			 (((reg) >> 2) & 0xFFFF) |			\
 			 ((n) & 0x3FFF) << 16)
@@ -2320,11 +2296,6 @@
 #       define NI_INPUT_GAMMA_XVYCC_222                3
 #       define NI_OVL_INPUT_GAMMA_MODE(x)              (((x) & 0x3) << 4)
 
-#define IH_RB_WPTR__RB_OVERFLOW_MASK	0x1
-#define IH_RB_CNTL__WPTR_OVERFLOW_CLEAR_MASK 0x80000000
-#define SRBM_STATUS__IH_BUSY_MASK	0x20000
-#define SRBM_SOFT_RESET__SOFT_RESET_IH_MASK	0x400
-
 #define	BLACKOUT_MODE_MASK			0x00000007
 #define	VGA_RENDER_CONTROL			0xC0
 #define R_000300_VGA_RENDER_CONTROL             0xC0
@@ -2411,18 +2382,6 @@
 #define MC_SEQ_MISC0__MT__HBM    0x60000000
 #define MC_SEQ_MISC0__MT__DDR3   0xB0000000
 
-#define SRBM_STATUS__MCB_BUSY_MASK 0x200
-#define SRBM_STATUS__MCB_BUSY__SHIFT 0x9
-#define SRBM_STATUS__MCB_NON_DISPLAY_BUSY_MASK 0x400
-#define SRBM_STATUS__MCB_NON_DISPLAY_BUSY__SHIFT 0xa
-#define SRBM_STATUS__MCC_BUSY_MASK 0x800
-#define SRBM_STATUS__MCC_BUSY__SHIFT 0xb
-#define SRBM_STATUS__MCD_BUSY_MASK 0x1000
-#define SRBM_STATUS__MCD_BUSY__SHIFT 0xc
-#define SRBM_STATUS__VMC_BUSY_MASK 0x100
-#define SRBM_STATUS__VMC_BUSY__SHIFT 0x8
-
-
 #define GRBM_STATUS__GUI_ACTIVE_MASK 0x80000000
 #define CP_INT_CNTL_RING__TIME_STAMP_INT_ENABLE_MASK 0x4000000
 #define CP_INT_CNTL_RING0__PRIV_REG_INT_ENABLE_MASK 0x800000
@@ -2447,8 +2406,6 @@
 
 #define PCIE_BUS_CLK    10000
 #define TCLK            (PCIE_BUS_CLK / 10)
-#define CC_DRM_ID_STRAPS__ATI_REV_ID_MASK		0xf0000000
-#define CC_DRM_ID_STRAPS__ATI_REV_ID__SHIFT 0x1c
 #define	PCIE_PORT_INDEX					0xe
 #define	PCIE_PORT_DATA					0xf
 #define EVERGREEN_PIF_PHY0_INDEX                        0x8
diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c b/drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c
new file mode 100644
index 000000000000..713100a859d6
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c
@@ -0,0 +1,832 @@
+/*
+ * Copyright 2013 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Christian König <christian.koenig@amd.com>
+ */
+
+#include <linux/firmware.h>
+#include <drm/drmP.h>
+#include "amdgpu.h"
+#include "amdgpu_uvd.h"
+#include "sid.h"
+
+#include "uvd/uvd_3_1_d.h"
+#include "uvd/uvd_3_1_sh_mask.h"
+
+#include "oss/oss_1_0_d.h"
+#include "oss/oss_1_0_sh_mask.h"
+
+#include "bif/bif_3_0_d.h"
+
+#include "smu/smu_6_0_d.h"
+#include "smu/smu_6_0_sh_mask.h"
+
+
+static void uvd_v3_1_enable_mgcg(struct amdgpu_device *adev,
+				 bool enable);
+static void uvd_v3_1_mc_resume(struct amdgpu_device *adev);
+static void uvd_v3_1_set_ring_funcs(struct amdgpu_device *adev);
+static void uvd_v3_1_set_irq_funcs(struct amdgpu_device *adev);
+static int uvd_v3_1_start(struct amdgpu_device *adev);
+static void uvd_v3_1_stop(struct amdgpu_device *adev);
+static int uvd_v3_1_set_clockgating_state(void *handle,
+				enum amd_clockgating_state state);
+static void uvd_v3_1_set_dcm(struct amdgpu_device *adev,
+			     bool sw_mode);
+/**
+ * uvd_v3_1_ring_get_rptr - get read pointer
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Returns the current hardware read pointer
+ */
+static uint64_t uvd_v3_1_ring_get_rptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+
+	return RREG32(mmUVD_RBC_RB_RPTR);
+}
+
+/**
+ * uvd_v3_1_ring_get_wptr - get write pointer
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Returns the current hardware write pointer
+ */
+static uint64_t uvd_v3_1_ring_get_wptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+
+	return RREG32(mmUVD_RBC_RB_WPTR);
+}
+
+/**
+ * uvd_v3_1_ring_set_wptr - set write pointer
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Commits the write pointer to the hardware
+ */
+static void uvd_v3_1_ring_set_wptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+
+	WREG32(mmUVD_RBC_RB_WPTR, lower_32_bits(ring->wptr));
+}
+
+static int uvd_v3_1_early_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+
+	uvd_v3_1_set_ring_funcs(adev);
+	uvd_v3_1_set_irq_funcs(adev);
+
+	return 0;
+}
+
+static int uvd_v3_1_sw_init(void *handle)
+{
+	struct amdgpu_ring *ring;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	int r;
+
+
+	/* UVD TRAP */
+	r = amdgpu_irq_add_id(adev, AMDGPU_IH_CLIENTID_LEGACY, 124, &adev->uvd.irq);
+	if (r)
+		return r;
+
+	/* Loads UVD FW */
+	r = amdgpu_uvd_sw_init(adev);
+	if (r)
+		return r;
+
+	/* Copies UVD FW to BO */
+	r = amdgpu_uvd_resume(adev);
+	if (r)
+		return r;
+
+	/* Starts ring driver */
+	ring = &adev->uvd.ring;
+	sprintf(ring->name, "uvd");
+	r = amdgpu_ring_init(adev, ring, 512, &adev->uvd.irq, 0);
+
+	return r;
+}
+
+static int uvd_v3_1_sw_fini(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+
+	r = amdgpu_uvd_suspend(adev);
+	if (r)
+		return r;
+
+	return amdgpu_uvd_sw_fini(adev);
+}
+
+/**
+ * uvd_v3_1_hw_init - start and test UVD block
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Initialize the hardware, boot up the VCPU and do some testing
+ */
+static int uvd_v3_1_hw_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	struct amdgpu_ring *ring = &adev->uvd.ring;
+	uint32_t tmp;
+	int r;
+
+	uvd_v3_1_enable_mgcg(adev, true);
+	uvd_v3_1_set_dcm(adev, false);
+
+	amdgpu_asic_set_uvd_clocks(adev, 53300, 40000);
+
+	uvd_v3_1_start(adev);
+
+	ring->ready = true;
+	r = amdgpu_ring_test_ring(ring);
+	if (r) {
+		ring->ready = false;
+		goto done;
+	}
+
+	r = amdgpu_ring_alloc(ring, 10);
+	if (r) {
+		DRM_ERROR("amdgpu: ring failed to lock UVD ring (%d).\n", r);
+		goto done;
+	}
+
+	tmp = PACKET0(mmUVD_SEMA_WAIT_FAULT_TIMEOUT_CNTL, 0);
+	amdgpu_ring_write(ring, tmp);
+	amdgpu_ring_write(ring, 0xFFFFF);
+
+	tmp = PACKET0(mmUVD_SEMA_WAIT_INCOMPLETE_TIMEOUT_CNTL, 0);
+	amdgpu_ring_write(ring, tmp);
+	amdgpu_ring_write(ring, 0xFFFFF);
+
+	tmp = PACKET0(mmUVD_SEMA_SIGNAL_INCOMPLETE_TIMEOUT_CNTL, 0);
+	amdgpu_ring_write(ring, tmp);
+	amdgpu_ring_write(ring, 0xFFFFF);
+
+	/* Clear timeout status bits */
+	amdgpu_ring_write(ring, PACKET0(mmUVD_SEMA_TIMEOUT_STATUS, 0));
+	amdgpu_ring_write(ring, 0x8);
+
+	amdgpu_ring_write(ring, PACKET0(mmUVD_SEMA_CNTL, 0));
+	amdgpu_ring_write(ring, 3);
+
+	amdgpu_ring_commit(ring);
+
+done:
+	if (!r)
+		DRM_INFO("UVD initialized successfully.\n");
+
+	return r;
+}
+
+/**
+ * uvd_v3_1_hw_fini - stop the hardware block
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Stop the UVD block, mark ring as not ready any more
+ */
+static int uvd_v3_1_hw_fini(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	struct amdgpu_ring *ring = &adev->uvd.ring;
+
+	/* Stop if started. */
+	if (RREG32(mmUVD_STATUS) != 0)
+		uvd_v3_1_stop(adev);
+
+	/* SI always sets this to false */
+	uvd_v3_1_set_dcm(adev, false);
+
+	/* Disable clock gating. */
+	uvd_v3_1_enable_mgcg(adev, false);
+
+	ring->ready = false;
+
+	return 0;
+}
+
+static int uvd_v3_1_suspend(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+
+	r = uvd_v3_1_hw_fini(adev);
+	if (r)
+		return r;
+
+	return amdgpu_uvd_suspend(adev);
+}
+
+static int uvd_v3_1_resume(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+
+	r = amdgpu_uvd_resume(adev);
+	if (r)
+		return r;
+
+	return uvd_v3_1_hw_init(adev);
+}
+
+/**
+ * uvd_v3_1_start - start UVD block
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Setup and start the UVD block
+ */
+static int uvd_v3_1_start(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring = &adev->uvd.ring;
+	uint32_t rb_bufsz;
+	int i, j, r;
+
+	/* disable byte swapping */
+	u32 lmi_swap_cntl = 0;
+	u32 mp_swap_cntl = 0;
+
+	/* disable clock gating */
+	WREG32(mmUVD_CGC_GATE, 0);
+
+	/* disable interrupts */
+	WREG32_P(mmUVD_MASTINT_EN, 0, ~(1 << 1));
+
+	/* Stall UMC and register bus before resetting VCPU */
+	WREG32_P(mmUVD_LMI_CTRL2, 1 << 8, ~(1 << 8));
+	WREG32_P(mmUVD_RB_ARB_CTRL, 1 << 3, ~(1 << 3));
+	mdelay(1);
+
+	/* put LMI, VCPU, RBC etc... into reset */
+	WREG32(mmUVD_SOFT_RESET,
+	       UVD_SOFT_RESET__LMI_SOFT_RESET_MASK  |
+	       UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK |
+	       UVD_SOFT_RESET__LBSI_SOFT_RESET_MASK |
+	       UVD_SOFT_RESET__RBC_SOFT_RESET_MASK  |
+	       UVD_SOFT_RESET__CSM_SOFT_RESET_MASK  |
+	       UVD_SOFT_RESET__CXW_SOFT_RESET_MASK  |
+	       UVD_SOFT_RESET__TAP_SOFT_RESET_MASK  |
+	       UVD_SOFT_RESET__LMI_UMC_SOFT_RESET_MASK);
+	mdelay(5);
+
+	/* take UVD block out of reset */
+	WREG32_P(mmSRBM_SOFT_RESET, 0, ~SRBM_SOFT_RESET__SOFT_RESET_UVD_MASK);
+	mdelay(5);
+
+	/* initialize UVD memory controller */
+	WREG32(mmUVD_LMI_CTRL, 0x40 | (1 << 8) | (1 << 13) |
+	       (1 << 21) | (1 << 9) | (1 << 20));
+
+#ifdef __BIG_ENDIAN
+	/* swap (8 in 32) RB and IB */
+	lmi_swap_cntl = 0xa;
+	mp_swap_cntl = 0;
+#endif
+	WREG32(mmUVD_LMI_SWAP_CNTL, lmi_swap_cntl);
+	WREG32(mmUVD_MP_SWAP_CNTL, mp_swap_cntl);
+
+	WREG32(mmUVD_MPC_SET_MUXA0, 0x40c2040);
+	WREG32(mmUVD_MPC_SET_MUXA1, 0x0);
+	WREG32(mmUVD_MPC_SET_MUXB0, 0x40c2040);
+	WREG32(mmUVD_MPC_SET_MUXB1, 0x0);
+	WREG32(mmUVD_MPC_SET_ALU, 0);
+	WREG32(mmUVD_MPC_SET_MUX, 0x88);
+
+	/* program UVD HW to use the firmware */
+	uvd_v3_1_mc_resume(adev);
+
+	/* take all subblocks out of reset, except VCPU */
+	WREG32(mmUVD_SOFT_RESET, UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK);
+	mdelay(5);
+
+	/* enable VCPU clock */
+	WREG32(mmUVD_VCPU_CNTL, 1 << 9);
+
+	/* enable UMC */
+	WREG32_P(mmUVD_LMI_CTRL2, 0, ~(1 << 8));
+
+	WREG32_P(mmUVD_RB_ARB_CTRL, 0, ~(1 << 3));
+
+	/* boot up the VCPU */
+	WREG32(mmUVD_SOFT_RESET, 0);
+	mdelay(10);
+
+	for (i = 0; i < 10; ++i) {
+		uint32_t status;
+		for (j = 0; j < 100; ++j) {
+			status = RREG32(mmUVD_STATUS);
+			if (status & 2)
+				break;
+			mdelay(10);
+		}
+		r = 0;
+		if (status & 2)
+			break;
+
+		DRM_ERROR("UVD not responding, trying to reset the VCPU!!!\n");
+		WREG32_P(mmUVD_SOFT_RESET,
+			 UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK,
+			 ~UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK);
+		mdelay(10);
+		WREG32_P(mmUVD_SOFT_RESET, 0, ~UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK);
+		mdelay(10);
+		r = -1;
+	}
+
+	if (r) {
+		DRM_ERROR("UVD not responding, giving up!!!\n");
+		return r;
+	}
+
+	/* enable interrupts */
+	WREG32_P(mmUVD_MASTINT_EN, 3 << 1, ~(3 << 1));
+
+	/* force RBC into idle state */
+	WREG32(mmUVD_RBC_RB_CNTL, 0x11010101);
+
+	/* Set the write pointer delay */
+	WREG32(mmUVD_RBC_RB_WPTR_CNTL, 0);
+
+	/* program the 4GB memory segment for rptr and ring buffer */
+	WREG32(mmUVD_LMI_EXT40_ADDR, upper_32_bits(ring->gpu_addr) |
+	       (0x7 << 16) | (0x1 << 31));
+
+	/* Initialize the ring buffer's read and write pointers */
+	WREG32(mmUVD_RBC_RB_RPTR, 0x0);
+
+	ring->wptr = RREG32(mmUVD_RBC_RB_RPTR);
+	WREG32(mmUVD_RBC_RB_WPTR, ring->wptr);
+
+	/* set the ring address */
+	WREG32(mmUVD_RBC_RB_BASE, ring->gpu_addr);
+
+	/* Set ring buffer size */
+	rb_bufsz = order_base_2(ring->ring_size);
+	rb_bufsz = (0x1 << 8) | rb_bufsz;
+	WREG32_P(mmUVD_RBC_RB_CNTL, rb_bufsz, ~0x11f1f);
+
+	return 0;
+}
+
+/**
+ * uvd_v3_1_stop - stop UVD block
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * stop the UVD block
+ */
+static void uvd_v3_1_stop(struct amdgpu_device *adev)
+{
+	DRM_DEBUG("%s\n", __func__);
+
+	/* force RBC into idle state */
+	WREG32(mmUVD_RBC_RB_CNTL, 0x11010101);
+
+	/* Stall UMC and register bus before resetting VCPU */
+	WREG32_P(mmUVD_LMI_CTRL2, 1 << 8, ~(1 << 8));
+	WREG32_P(mmUVD_RB_ARB_CTRL, 1 << 3, ~(1 << 3));
+	mdelay(1);
+
+	/* put VCPU into reset */
+	WREG32(mmUVD_SOFT_RESET, UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK);
+	mdelay(5);
+
+	/* disable VCPU clock */
+	WREG32(mmUVD_VCPU_CNTL, 0x0);
+
+	/* Unstall UMC and register bus */
+	WREG32_P(mmUVD_LMI_CTRL2, 0, ~(1 << 8));
+	WREG32_P(mmUVD_RB_ARB_CTRL, 0, ~(1 << 3));
+}
+
+/**
+ * uvd_v3_1_ring_emit_fence - emit a fence & trap command
+ *
+ * @ring: amdgpu_ring pointer
+ * @fence: fence to emit
+ *
+ * Write a fence and a trap command to the ring.
+ */
+static void uvd_v3_1_ring_emit_fence(struct amdgpu_ring *ring, u64 addr, u64 seq,
+				     unsigned flags)
+{
+	WARN_ON(flags & AMDGPU_FENCE_FLAG_64BIT);
+
+	amdgpu_ring_write(ring, PACKET0(mmUVD_CONTEXT_ID, 0));
+	amdgpu_ring_write(ring, seq);
+	amdgpu_ring_write(ring, PACKET0(mmUVD_GPCOM_VCPU_DATA0, 0));
+	amdgpu_ring_write(ring, addr & 0xffffffff);
+	amdgpu_ring_write(ring, PACKET0(mmUVD_GPCOM_VCPU_DATA1, 0));
+	amdgpu_ring_write(ring, upper_32_bits(addr) & 0xff);
+	amdgpu_ring_write(ring, PACKET0(mmUVD_GPCOM_VCPU_CMD, 0));
+	amdgpu_ring_write(ring, 0);
+
+	amdgpu_ring_write(ring, PACKET0(mmUVD_GPCOM_VCPU_DATA0, 0));
+	amdgpu_ring_write(ring, 0);
+	amdgpu_ring_write(ring, PACKET0(mmUVD_GPCOM_VCPU_DATA1, 0));
+	amdgpu_ring_write(ring, 0);
+	amdgpu_ring_write(ring, PACKET0(mmUVD_GPCOM_VCPU_CMD, 0));
+	amdgpu_ring_write(ring, 2);
+}
+
+/**
+ * uvd_v3_1_ring_emit_hdp_flush - emit an hdp flush
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Emits an hdp flush.
+ */
+static void uvd_v3_1_ring_emit_hdp_flush(struct amdgpu_ring *ring)
+{
+	amdgpu_ring_write(ring, PACKET0(mmHDP_MEM_COHERENCY_FLUSH_CNTL, 0));
+	amdgpu_ring_write(ring, 0);
+}
+
+/**
+ * uvd_v3_1_ring_hdp_invalidate - emit an hdp invalidate
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Emits an hdp invalidate.
+ */
+static void uvd_v3_1_ring_emit_hdp_invalidate(struct amdgpu_ring *ring)
+{
+	amdgpu_ring_write(ring, PACKET0(mmHDP_DEBUG0, 0));
+	amdgpu_ring_write(ring, 1);
+}
+
+/**
+ * uvd_v3_1_ring_test_ring - register write test
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Test if we can successfully write to the context register
+ */
+static int uvd_v3_1_ring_test_ring(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	uint32_t tmp = 0;
+	unsigned i;
+	int r;
+
+	WREG32(mmUVD_CONTEXT_ID, 0xCAFEDEAD);
+	r = amdgpu_ring_alloc(ring, 3);
+	if (r) {
+		DRM_ERROR("amdgpu: cp failed to lock ring %d (%d).\n",
+			  ring->idx, r);
+		return r;
+	}
+	amdgpu_ring_write(ring, PACKET0(mmUVD_CONTEXT_ID, 0));
+	amdgpu_ring_write(ring, 0xDEADBEEF);
+	amdgpu_ring_commit(ring);
+	for (i = 0; i < adev->usec_timeout; i++) {
+		tmp = RREG32(mmUVD_CONTEXT_ID);
+		if (tmp == 0xDEADBEEF)
+			break;
+		DRM_UDELAY(1);
+	}
+
+	if (i < adev->usec_timeout) {
+		DRM_INFO("ring test on %d succeeded in %d usecs\n",
+			 ring->idx, i);
+	} else {
+		DRM_ERROR("amdgpu: ring %d test failed (0x%08X)\n",
+			  ring->idx, tmp);
+		r = -EINVAL;
+	}
+	return r;
+}
+
+/**
+ * uvd_v3_1_ring_emit_ib - execute indirect buffer
+ *
+ * @ring: amdgpu_ring pointer
+ * @ib: indirect buffer to execute
+ *
+ * Write ring commands to execute the indirect buffer
+ */
+static void uvd_v3_1_ring_emit_ib(struct amdgpu_ring *ring,
+				  struct amdgpu_ib *ib,
+				  unsigned vm_id, bool ctx_switch)
+{
+	amdgpu_ring_write(ring, PACKET0(mmUVD_RBC_IB_BASE, 0));
+	amdgpu_ring_write(ring, ib->gpu_addr);
+	amdgpu_ring_write(ring, PACKET0(mmUVD_RBC_IB_SIZE, 0));
+	amdgpu_ring_write(ring, ib->length_dw);
+}
+
+/**
+ * uvd_v3_1_mc_resume - memory controller programming
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Let the UVD memory controller know its offsets
+ */
+static void uvd_v3_1_mc_resume(struct amdgpu_device *adev)
+{
+	uint64_t addr;
+	uint32_t size;
+	uint32_t chip_id;
+
+	/* program the VCPU memory controller bits 0-27 */
+	addr = (adev->uvd.gpu_addr + AMDGPU_UVD_FIRMWARE_OFFSET) >> 3;
+	size = AMDGPU_GPU_PAGE_ALIGN(adev->uvd.fw->size + 4) >> 3;
+	WREG32(mmUVD_VCPU_CACHE_OFFSET0, addr);
+	WREG32(mmUVD_VCPU_CACHE_SIZE0, size);
+
+	addr += size;
+	size = AMDGPU_UVD_HEAP_SIZE >> 3;
+	WREG32(mmUVD_VCPU_CACHE_OFFSET1, addr);
+	WREG32(mmUVD_VCPU_CACHE_SIZE1, size);
+
+	addr += size;
+	size = (AMDGPU_UVD_STACK_SIZE +
+	       (AMDGPU_UVD_SESSION_SIZE * adev->uvd.max_handles)) >> 3;
+	WREG32(mmUVD_VCPU_CACHE_OFFSET2, addr);
+	WREG32(mmUVD_VCPU_CACHE_SIZE2, size);
+
+	/* bits 28-31 */
+	addr = (adev->uvd.gpu_addr >> 28) & 0xF;
+	WREG32(mmUVD_LMI_ADDR_EXT, (addr << 12) | (addr << 0));
+
+	/* bits 32-39 */
+	addr = (adev->uvd.gpu_addr >> 32) & 0xFF;
+	WREG32(mmUVD_LMI_EXT40_ADDR, addr | (0x9 << 16) | (0x1 << 31));
+
+	/* SI used to do this in a different location; CIK does it here. */
+	WREG32(mmUVD_UDEC_ADDR_CONFIG, adev->gfx.config.gb_addr_config);
+	WREG32(mmUVD_UDEC_DB_ADDR_CONFIG, adev->gfx.config.gb_addr_config);
+	WREG32(mmUVD_UDEC_DBW_ADDR_CONFIG, adev->gfx.config.gb_addr_config);
+
+	/* CIK no longer has this. */
+	/* tell firmware which hardware it is running on */
+	switch (adev->asic_type) {
+	default:
+		BUG();
+		break;
+	case CHIP_TAHITI:
+		chip_id = 0x01000014;
+		break;
+	case CHIP_VERDE:
+		chip_id = 0x01000015;
+		break;
+	case CHIP_PITCAIRN:
+	case CHIP_OLAND:
+		chip_id = 0x01000016;
+		break;
+	}
+	WREG32(mmUVD_VCPU_CHIP_ID, chip_id);
+}
+
+static void uvd_v3_1_enable_mgcg(struct amdgpu_device *adev,
+				 bool enable)
+{
+	u32 orig, data;
+
+	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_UVD_MGCG)) {
+		data = RREG32_UVD_CTX(ixUVD_CGC_MEM_CTRL);
+		/* SI used 0x3fff? */
+		data |= 0xfff;
+		WREG32_UVD_CTX(ixUVD_CGC_MEM_CTRL, data);
+
+		orig = data = RREG32(mmUVD_CGC_CTRL);
+		data |= UVD_CGC_CTRL__DYN_CLOCK_MODE_MASK;
+		if (orig != data)
+			WREG32(mmUVD_CGC_CTRL, data);
+
+		/* CIK no longer has these two lines */
+		WREG32_SMC(SMC_CG_IND_START + CG_CGTT_LOCAL_0, 0);
+		WREG32_SMC(SMC_CG_IND_START + CG_CGTT_LOCAL_1, 0);
+	} else {
+		data = RREG32_UVD_CTX(ixUVD_CGC_MEM_CTRL);
+		/* SI used 0x3fff? */
+		data &= ~0xfff;
+		WREG32_UVD_CTX(ixUVD_CGC_MEM_CTRL, data);
+
+		orig = data = RREG32(mmUVD_CGC_CTRL);
+		data &= ~UVD_CGC_CTRL__DYN_CLOCK_MODE_MASK;
+		if (orig != data)
+			WREG32(mmUVD_CGC_CTRL, data);
+
+		/* CIK no longer has these two lines */
+		WREG32_SMC(SMC_CG_IND_START + CG_CGTT_LOCAL_0, 0xffffffff);
+		WREG32_SMC(SMC_CG_IND_START + CG_CGTT_LOCAL_1, 0xffffffff);
+	}
+}
+
+static void uvd_v3_1_set_dcm(struct amdgpu_device *adev,
+			     bool sw_mode)
+{
+	u32 tmp, tmp2;
+
+	/* SI didn't seem to have this, but CIK does */
+	WREG32_FIELD(UVD_CGC_GATE, REGS, 0);
+
+	tmp = RREG32(mmUVD_CGC_CTRL);
+	tmp &= ~(UVD_CGC_CTRL__CLK_OFF_DELAY_MASK | UVD_CGC_CTRL__CLK_GATE_DLY_TIMER_MASK);
+	tmp |= UVD_CGC_CTRL__DYN_CLOCK_MODE_MASK |
+		(1 << UVD_CGC_CTRL__CLK_GATE_DLY_TIMER__SHIFT) |
+		(4 << UVD_CGC_CTRL__CLK_OFF_DELAY__SHIFT);
+
+	if (sw_mode) {
+		tmp &= ~0x7ffff800;
+		tmp2 = UVD_CGC_CTRL2__DYN_OCLK_RAMP_EN_MASK |
+			UVD_CGC_CTRL2__DYN_RCLK_RAMP_EN_MASK |
+			(7 << UVD_CGC_CTRL2__GATER_DIV_ID__SHIFT);
+	} else {
+		tmp |= 0x7ffff800;
+		tmp2 = 0;
+	}
+
+	WREG32(mmUVD_CGC_CTRL, tmp);
+	WREG32_UVD_CTX(ixUVD_CGC_CTRL2, tmp2);
+}
+
+static bool uvd_v3_1_is_idle(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	return !(RREG32(mmSRBM_STATUS) & SRBM_STATUS__UVD_BUSY_MASK);
+}
+
+static int uvd_v3_1_wait_for_idle(void *handle)
+{
+	unsigned i;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	for (i = 0; i < adev->usec_timeout; i++) {
+		if (!(RREG32(mmSRBM_STATUS) & SRBM_STATUS__UVD_BUSY_MASK))
+			return 0;
+	}
+	return -ETIMEDOUT;
+}
+
+static int uvd_v3_1_soft_reset(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	uvd_v3_1_stop(adev);
+
+	WREG32_P(mmSRBM_SOFT_RESET, SRBM_SOFT_RESET__SOFT_RESET_UVD_MASK,
+			~SRBM_SOFT_RESET__SOFT_RESET_UVD_MASK);
+	mdelay(5);
+
+	return uvd_v3_1_start(adev);
+}
+
+static int uvd_v3_1_set_interrupt_state(struct amdgpu_device *adev,
+					struct amdgpu_irq_src *source,
+					unsigned type,
+					enum amdgpu_interrupt_state state)
+{
+	return 0;
+}
+
+static int uvd_v3_1_process_interrupt(struct amdgpu_device *adev,
+				      struct amdgpu_irq_src *source,
+				      struct amdgpu_iv_entry *entry)
+{
+	DRM_DEBUG("IH: UVD TRAP\n");
+
+	amdgpu_fence_process(&adev->uvd.ring);
+	return 0;
+}
+
+static int uvd_v3_1_set_clockgating_state(void *handle,
+					  enum amd_clockgating_state state)
+{
+	/* Not implemented */
+	return 0;
+}
+
+static int uvd_v3_1_set_powergating_state(void *handle,
+					  enum amd_powergating_state state)
+{
+	/* This doesn't actually powergate the UVD block.
+	 * That's done in the dpm code via the SMC.  This
+	 * just re-inits the block as necessary.  The actual
+	 * gating still happens in the dpm code.  We should
+	 * revisit this when there is a cleaner line between
+	 * the smc and the hw blocks.
+	 */
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (state == AMD_PG_STATE_GATE) {
+		uvd_v3_1_stop(adev);
+		return 0;
+	} else {
+		return uvd_v3_1_start(adev);
+	}
+}
+
+static const struct amd_ip_funcs uvd_v3_1_ip_funcs = {
+	.name = "uvd_v3_1",
+	.early_init = uvd_v3_1_early_init,
+	.late_init = NULL,
+	.sw_init = uvd_v3_1_sw_init,
+	.sw_fini = uvd_v3_1_sw_fini,
+	.hw_init = uvd_v3_1_hw_init,
+	.hw_fini = uvd_v3_1_hw_fini,
+	.suspend = uvd_v3_1_suspend,
+	.resume = uvd_v3_1_resume,
+	.is_idle = uvd_v3_1_is_idle,
+	.wait_for_idle = uvd_v3_1_wait_for_idle,
+	.soft_reset = uvd_v3_1_soft_reset,
+	.set_clockgating_state = uvd_v3_1_set_clockgating_state,
+	.set_powergating_state = uvd_v3_1_set_powergating_state,
+};
+
+static const struct amdgpu_ring_funcs uvd_v3_1_ring_funcs = {
+	.type = AMDGPU_RING_TYPE_UVD,
+	.align_mask = 0xf,
+	.nop = PACKET0(mmUVD_NO_OP, 0),
+	.support_64bit_ptrs = false,
+	.get_rptr = uvd_v3_1_ring_get_rptr,
+	.get_wptr = uvd_v3_1_ring_get_wptr,
+	.set_wptr = uvd_v3_1_ring_set_wptr,
+	.parse_cs = amdgpu_uvd_ring_parse_cs,
+	.emit_frame_size =
+		2 + /* uvd_v3_1_ring_emit_hdp_flush */
+		2 + /* uvd_v3_1_ring_emit_hdp_invalidate */
+		14, /* uvd_v3_1_ring_emit_fence  x1 no user fence */
+	.emit_ib_size = 4, /* uvd_v3_1_ring_emit_ib */
+	.emit_ib = uvd_v3_1_ring_emit_ib,
+	.emit_fence = uvd_v3_1_ring_emit_fence,
+	.emit_hdp_flush = uvd_v3_1_ring_emit_hdp_flush,
+	.emit_hdp_invalidate = uvd_v3_1_ring_emit_hdp_invalidate,
+	.test_ring = uvd_v3_1_ring_test_ring,
+	.test_ib = amdgpu_uvd_ring_test_ib,
+	.insert_nop = amdgpu_ring_insert_nop,
+	.pad_ib = amdgpu_ring_generic_pad_ib,
+	.begin_use = amdgpu_uvd_ring_begin_use,
+	.end_use = amdgpu_uvd_ring_end_use,
+};
+
+static void uvd_v3_1_set_ring_funcs(struct amdgpu_device *adev)
+{
+	adev->uvd.ring.funcs = &uvd_v3_1_ring_funcs;
+}
+
+static const struct amdgpu_irq_src_funcs uvd_v3_1_irq_funcs = {
+	.set = uvd_v3_1_set_interrupt_state,
+	.process = uvd_v3_1_process_interrupt,
+};
+
+static void uvd_v3_1_set_irq_funcs(struct amdgpu_device *adev)
+{
+	adev->uvd.irq.num_types = 1;
+	adev->uvd.irq.funcs = &uvd_v3_1_irq_funcs;
+}
+
+const struct amdgpu_ip_block_version uvd_v3_1_ip_block =
+{
+	.type = AMD_IP_BLOCK_TYPE_UVD,
+	.major = 3,
+	.minor = 1,
+	.rev = 0,
+	.funcs = &uvd_v3_1_ip_funcs,
+};
diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v3_1.h b/drivers/gpu/drm/amd/amdgpu/uvd_v3_1.h
new file mode 100644
index 000000000000..cfde9a6e6ebd
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/uvd_v3_1.h
@@ -0,0 +1,29 @@
+/*
+ * Copyright 2014 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __UVD_V3_1_H__
+#define __UVD_V3_1_H__
+
+extern const struct amdgpu_ip_block_version uvd_v3_1_ip_block;
+
+#endif
diff --git a/drivers/gpu/drm/amd/include/asic_reg/uvd/uvd_3_1_d.h b/drivers/gpu/drm/amd/include/asic_reg/uvd/uvd_3_1_d.h
new file mode 100644
index 000000000000..17a56463317c
--- /dev/null
+++ b/drivers/gpu/drm/amd/include/asic_reg/uvd/uvd_3_1_d.h
@@ -0,0 +1,91 @@
+/*
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+ * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
+ * AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef UVD_3_1_D_H
+#define UVD_3_1_D_H
+
+#define ixUVD_CGC_CTRL2                           0x00C1
+#define ixUVD_CGC_MEM_CTRL                        0x00C0
+
+#define mmUVD_CGC_CTRL                            0x3D2C
+#define mmUVD_CGC_GATE                            0x3D2A
+
+#define mmUVD_CONTEXT_ID                          0x3DBD
+#define mmUVD_CTX_DATA                            0x3D29
+#define mmUVD_CTX_INDEX                           0x3D28
+
+#define mmUVD_GPCOM_VCPU_CMD                      0x3BC3
+#define mmUVD_GPCOM_VCPU_DATA0                    0x3BC4
+#define mmUVD_GPCOM_VCPU_DATA1                    0x3BC5
+
+#define mmUVD_RB_ARB_CTRL                         0x3D20
+
+#define mmUVD_LMI_ADDR_EXT                        0x3D65
+#define mmUVD_LMI_CTRL                            0x3D66
+#define mmUVD_LMI_CTRL2                           0x3D3D
+#define mmUVD_LMI_EXT40_ADDR                      0x3D26
+
+#define mmUVD_LMI_SWAP_CNTL                       0x3D6D
+#define mmUVD_MASTINT_EN                          0x3D40
+
+#define mmUVD_MPC_SET_ALU                         0x3D7E
+#define mmUVD_MPC_SET_MUX                         0x3D7D
+#define mmUVD_MPC_SET_MUXA0                       0x3D79
+#define mmUVD_MPC_SET_MUXA1                       0x3D7A
+#define mmUVD_MPC_SET_MUXB0                       0x3D7B
+#define mmUVD_MPC_SET_MUXB1                       0x3D7C
+#define mmUVD_MP_SWAP_CNTL                        0x3D6F
+#define mmUVD_NO_OP                               0x3BFF
+
+#define mmUVD_RBC_IB_BASE                         0x3DA1
+#define mmUVD_RBC_IB_SIZE                         0x3DA2
+
+#define mmUVD_RBC_RB_BASE                         0x3DA3
+#define mmUVD_RBC_RB_CNTL                         0x3DA9
+#define mmUVD_RBC_RB_RPTR                         0x3DA4
+
+#define mmUVD_RBC_RB_WPTR                         0x3DA5
+#define mmUVD_RBC_RB_WPTR_CNTL                    0x3DA6
+
+#define mmUVD_SEMA_CNTL                           0x3D00
+#define mmUVD_SEMA_SIGNAL_INCOMPLETE_TIMEOUT_CNTL 0x3DB3
+#define mmUVD_SEMA_TIMEOUT_STATUS                 0x3DB0
+#define mmUVD_SEMA_WAIT_FAULT_TIMEOUT_CNTL        0x3DB2
+#define mmUVD_SEMA_WAIT_INCOMPLETE_TIMEOUT_CNTL   0x3DB1
+
+#define mmUVD_SOFT_RESET                          0x3DA0
+#define mmUVD_STATUS                              0x3DAF
+
+#define mmUVD_UDEC_ADDR_CONFIG                    0x3BD3
+#define mmUVD_UDEC_DB_ADDR_CONFIG                 0x3BD4
+#define mmUVD_UDEC_DBW_ADDR_CONFIG                0x3BD5
+
+#define mmUVD_VCPU_CHIP_ID                        0x3D35
+#define mmUVD_VCPU_CACHE_OFFSET0                  0x3D36
+#define mmUVD_VCPU_CACHE_OFFSET1                  0x3D38
+#define mmUVD_VCPU_CACHE_OFFSET2                  0x3D3A
+#define mmUVD_VCPU_CACHE_SIZE0                    0x3D37
+#define mmUVD_VCPU_CACHE_SIZE1                    0x3D39
+#define mmUVD_VCPU_CACHE_SIZE2                    0x3D3B
+#define mmUVD_VCPU_CNTL                           0x3D98
+
+#endif
diff --git a/drivers/gpu/drm/amd/include/asic_reg/uvd/uvd_3_1_sh_mask.h b/drivers/gpu/drm/amd/include/asic_reg/uvd/uvd_3_1_sh_mask.h
new file mode 100644
index 000000000000..0558ea45e323
--- /dev/null
+++ b/drivers/gpu/drm/amd/include/asic_reg/uvd/uvd_3_1_sh_mask.h
@@ -0,0 +1,65 @@
+/*
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+ * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
+ * AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef UVD_3_1_SH_MASK_H
+#define UVD_3_1_SH_MASK_H
+
+#define UVD_CGC_CTRL2__DYN_OCLK_RAMP_EN_MASK 0x00000001L
+#define UVD_CGC_CTRL2__DYN_OCLK_RAMP_EN__SHIFT 0x00000000
+#define UVD_CGC_CTRL2__DYN_RCLK_RAMP_EN_MASK 0x00000002L
+#define UVD_CGC_CTRL2__DYN_RCLK_RAMP_EN__SHIFT 0x00000001
+#define UVD_CGC_CTRL2__GATER_DIV_ID_MASK 0x0000001cL
+#define UVD_CGC_CTRL2__GATER_DIV_ID__SHIFT 0x00000002
+#define UVD_CGC_CTRL__CLK_GATE_DLY_TIMER_MASK 0x0000003cL
+#define UVD_CGC_CTRL__CLK_GATE_DLY_TIMER__SHIFT 0x00000002
+#define UVD_CGC_CTRL__CLK_OFF_DELAY_MASK 0x000007c0L
+#define UVD_CGC_CTRL__CLK_OFF_DELAY__SHIFT 0x00000006
+#define UVD_CGC_CTRL__DYN_CLOCK_MODE_MASK 0x00000001L
+#define UVD_CGC_CTRL__DYN_CLOCK_MODE__SHIFT 0x00000000
+
+#define UVD_CGC_GATE__REGS_MASK 0x00000008L
+#define UVD_CGC_GATE__REGS__SHIFT 0x00000003
+
+#define UVD_SOFT_RESET__CSM_SOFT_RESET_MASK 0x00000020L
+#define UVD_SOFT_RESET__CSM_SOFT_RESET__SHIFT 0x00000005
+#define UVD_SOFT_RESET__CXW_SOFT_RESET_MASK 0x00000040L
+#define UVD_SOFT_RESET__CXW_SOFT_RESET__SHIFT 0x00000006
+
+#define UVD_SOFT_RESET__LBSI_SOFT_RESET_MASK 0x00000002L
+#define UVD_SOFT_RESET__LBSI_SOFT_RESET__SHIFT 0x00000001
+
+#define UVD_SOFT_RESET__LMI_SOFT_RESET_MASK 0x00000004L
+#define UVD_SOFT_RESET__LMI_SOFT_RESET__SHIFT 0x00000002
+#define UVD_SOFT_RESET__LMI_UMC_SOFT_RESET_MASK 0x00002000L
+#define UVD_SOFT_RESET__LMI_UMC_SOFT_RESET__SHIFT 0x0000000d
+
+#define UVD_SOFT_RESET__RBC_SOFT_RESET_MASK 0x00000001L
+#define UVD_SOFT_RESET__RBC_SOFT_RESET__SHIFT 0x00000000
+
+#define UVD_SOFT_RESET__TAP_SOFT_RESET_MASK 0x00000080L
+#define UVD_SOFT_RESET__TAP_SOFT_RESET__SHIFT 0x00000007
+
+#define UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK 0x00000008L
+#define UVD_SOFT_RESET__VCPU_SOFT_RESET__SHIFT 0x00000003
+
+
+#endif
-- 
2.13.5
