* [PATCH 01/11] drm/amdgpu:use MACRO like other places
From: Monk Liu @ 2017-02-08 9:26 UTC
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
Cc: Monk Liu

Change-Id: Ica8f86577a50d817119de4b4fb95068dc72652a9
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
index 6734e55..8f545992 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
@@ -4068,10 +4068,10 @@ static int gfx_v8_0_init_save_restore_list(struct amdgpu_device *adev)
 	data = mmRLC_SRM_INDEX_CNTL_DATA_0;
 	for (i = 0; i < sizeof(unique_indices) / sizeof(int); i++) {
 		if (unique_indices[i] != 0) {
-			amdgpu_mm_wreg(adev, temp + i,
-					unique_indices[i] & 0x3FFFF, false);
-			amdgpu_mm_wreg(adev, data + i,
-					unique_indices[i] >> 20, false);
+			WREG32(temp + i,
+			       unique_indices[i] & 0x3FFFF);
+			WREG32(data + i,
+			       unique_indices[i] >> 20);
 		}
 	}
 	kfree(register_list_format);
--
2.7.4

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
* [PATCH 02/11] drm/amdgpu:impl RREG32 no kiq version
From: Monk Liu @ 2017-02-08 9:26 UTC
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
Cc: Monk Liu

some registers are PF & VF copy, and we can safely use
mmio method to access them.

and sometime we are forbid to use kiq to access registers
for example in INTR context.

we need a MACRO that always disable KIQ for regs accessing

Change-Id: Ie6dc323dc86829a4a6ceb7073c269b106b534c4a
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h        | 21 ++++++++++++++-------
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 12 ++++++------
 2 files changed, 20 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 402a895..74bffca8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -1510,9 +1510,9 @@ void amdgpu_device_fini(struct amdgpu_device *adev);
 int amdgpu_gpu_wait_for_idle(struct amdgpu_device *adev);
 
 uint32_t amdgpu_mm_rreg(struct amdgpu_device *adev, uint32_t reg,
-			bool always_indirect);
+			uint32_t acc_flags);
 void amdgpu_mm_wreg(struct amdgpu_device *adev, uint32_t reg, uint32_t v,
-		    bool always_indirect);
+		    uint32_t acc_flags);
 u32 amdgpu_io_rreg(struct amdgpu_device *adev, u32 reg);
 void amdgpu_io_wreg(struct amdgpu_device *adev, u32 reg, u32 v);
 
@@ -1523,11 +1523,18 @@ bool amdgpu_device_has_dc_support(struct amdgpu_device *adev);
 /*
  * Registers read & write functions.
  */
-#define RREG32(reg) amdgpu_mm_rreg(adev, (reg), false)
-#define RREG32_IDX(reg) amdgpu_mm_rreg(adev, (reg), true)
-#define DREG32(reg) printk(KERN_INFO "REGISTER: " #reg " : 0x%08X\n", amdgpu_mm_rreg(adev, (reg), false))
-#define WREG32(reg, v) amdgpu_mm_wreg(adev, (reg), (v), false)
-#define WREG32_IDX(reg, v) amdgpu_mm_wreg(adev, (reg), (v), true)
+
+#define AMDGPU_REGS_IDX  (1<<0)
+#define AMDGPU_REGS_NO_KIQ (1<<1)
+
+#define RREG32_NO_KIQ(reg) amdgpu_mm_rreg(adev, (reg), AMDGPU_REGS_NO_KIQ)
+#define WREG32_NO_KIQ(reg, v) amdgpu_mm_wreg(adev, (reg), (v), AMDGPU_REGS_NO_KIQ)
+
+#define RREG32(reg) amdgpu_mm_rreg(adev, (reg), 0)
+#define RREG32_IDX(reg) amdgpu_mm_rreg(adev, (reg), AMDGPU_REGS_IDX)
+#define DREG32(reg) printk(KERN_INFO "REGISTER: " #reg " : 0x%08X\n", amdgpu_mm_rreg(adev, (reg), 0))
+#define WREG32(reg, v) amdgpu_mm_wreg(adev, (reg), (v), 0)
+#define WREG32_IDX(reg, v) amdgpu_mm_wreg(adev, (reg), (v), AMDGPU_REGS_IDX)
 #define REG_SET(FIELD, v) (((v) << FIELD##_SHIFT) & FIELD##_MASK)
 #define REG_GET(FIELD, v) (((v) << FIELD##_SHIFT) & FIELD##_MASK)
 #define RREG32_PCIE(reg) adev->pcie_rreg(adev, (reg))
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 3534089..5215fc5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -91,16 +91,16 @@ bool amdgpu_device_is_px(struct drm_device *dev)
  * MMIO register access helper functions.
  */
 uint32_t amdgpu_mm_rreg(struct amdgpu_device *adev, uint32_t reg,
-			bool always_indirect)
+			uint32_t acc_flags)
 {
 	uint32_t ret;
 
-	if (amdgpu_sriov_runtime(adev)) {
+	if (!(acc_flags & AMDGPU_REGS_NO_KIQ) && amdgpu_sriov_runtime(adev)) {
 		BUG_ON(in_interrupt());
 		return amdgpu_virt_kiq_rreg(adev, reg);
 	}
 
-	if ((reg * 4) < adev->rmmio_size && !always_indirect)
+	if ((reg * 4) < adev->rmmio_size && !(acc_flags & AMDGPU_REGS_IDX))
 		ret = readl(((void __iomem *)adev->rmmio) + (reg * 4));
 	else {
 		unsigned long flags;
@@ -115,16 +115,16 @@ uint32_t amdgpu_mm_rreg(struct amdgpu_device *adev, uint32_t reg,
 }
 
 void amdgpu_mm_wreg(struct amdgpu_device *adev, uint32_t reg, uint32_t v,
-		    bool always_indirect)
+		    uint32_t acc_flags)
 {
 	trace_amdgpu_mm_wreg(adev->pdev->device, reg, v);
 
-	if (amdgpu_sriov_runtime(adev)) {
+	if (!(acc_flags & AMDGPU_REGS_NO_KIQ) && amdgpu_sriov_runtime(adev)) {
 		BUG_ON(in_interrupt());
 		return amdgpu_virt_kiq_wreg(adev, reg, v);
 	}
 
-	if ((reg * 4) < adev->rmmio_size && !always_indirect)
+	if ((reg * 4) < adev->rmmio_size && !(acc_flags & AMDGPU_REGS_IDX))
 		writel(v, ((void __iomem *)adev->rmmio) + (reg * 4));
 	else {
 		unsigned long flags;
--
2.7.4
* RE: [PATCH 02/11] drm/amdgpu:impl RREG32 no kiq version
From: Deucher, Alexander @ 2017-02-08 16:37 UTC
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
Cc: Liu, Monk

> -----Original Message-----
> From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf
> Of Monk Liu
> Sent: Wednesday, February 08, 2017 4:27 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: Liu, Monk
> Subject: [PATCH 02/11] drm/amdgpu:impl RREG32 no kiq version
>
> some registers are PF & VF copy, and we can safely use
> mmio method to access them.
>
> and sometime we are forbid to use kiq to access registers
> for example in INTR context.
>
> we need a MACRO that always disable KIQ for regs accessing
>
> Change-Id: Ie6dc323dc86829a4a6ceb7073c269b106b534c4a
> Signed-off-by: Monk Liu <Monk.Liu@amd.com>

Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
* RE: [PATCH 02/11] drm/amdgpu:impl RREG32 no kiq version
From: Yu, Xiangliang @ 2017-02-09 1:49 UTC
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
Cc: Liu, Monk

Reviewed-by: Xiangliang Yu <Xiangliang.Yu@amd.com>

Thanks!
Xiangliang Yu

> -----Original Message-----
> From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf
> Of Monk Liu
> Sent: Wednesday, February 08, 2017 5:27 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Liu, Monk <Monk.Liu@amd.com>
> Subject: [PATCH 02/11] drm/amdgpu:impl RREG32 no kiq version
>
> some registers are PF & VF copy, and we can safely use mmio method to
> access them.
>
> and sometime we are forbid to use kiq to access registers for example in
> INTR context.
>
> we need a MACRO that always disable KIQ for regs accessing
>
> Change-Id: Ie6dc323dc86829a4a6ceb7073c269b106b534c4a
> Signed-off-by: Monk Liu <Monk.Liu@amd.com>
* [PATCH 03/11] drm/amdgpu:Refine handshake of mailbox
From: Monk Liu @ 2017-02-08 9:26 UTC
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
Cc: Ken Xue

From: Ken Xue <Ken.Xue@amd.com>

Change-Id: If3a7d05824847234759b86563e8052949e171972
Signed-off-by: Ken Xue <Ken.Xue@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
index d2622b6..b8edfe5 100644
--- a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
+++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
@@ -318,10 +318,25 @@ void xgpu_vi_init_golden_registers(struct amdgpu_device *adev)
 static void xgpu_vi_mailbox_send_ack(struct amdgpu_device *adev)
 {
 	u32 reg;
+	int timeout = VI_MAILBOX_TIMEDOUT;
+	u32 mask = REG_FIELD_MASK(MAILBOX_CONTROL, RCV_MSG_VALID);
 
 	reg = RREG32(mmMAILBOX_CONTROL);
 	reg = REG_SET_FIELD(reg, MAILBOX_CONTROL, RCV_MSG_ACK, 1);
 	WREG32(mmMAILBOX_CONTROL, reg);
+
+	/*Wait for RCV_MSG_VALID to be 0*/
+	reg = RREG32(mmMAILBOX_CONTROL);
+	while (reg & mask) {
+		if (timeout <= 0) {
+			pr_err("RCV_MSG_VALID is not cleared\n");
+			break;
+		}
+		mdelay(1);
+		timeout -=1;
+
+		reg = RREG32(mmMAILBOX_CONTROL);
+	}
 }
 
 static void xgpu_vi_mailbox_set_valid(struct amdgpu_device *adev, bool val)
@@ -351,6 +366,11 @@ static int xgpu_vi_mailbox_rcv_msg(struct amdgpu_device *adev,
 				   enum idh_event event)
 {
 	u32 reg;
+	u32 mask = REG_FIELD_MASK(MAILBOX_CONTROL, RCV_MSG_VALID);
+
+	reg = RREG32(mmMAILBOX_CONTROL);
+	if (!(reg & mask))
+		return -ENOENT;
 
 	reg = RREG32(mmMAILBOX_MSGBUF_RCV_DW0);
 	if (reg != event)
@@ -419,7 +439,9 @@ static int xgpu_vi_send_access_requests(struct amdgpu_device *adev,
 	xgpu_vi_mailbox_set_valid(adev, false);
 
 	/* start to check msg if request is idh_req_gpu_init_access */
-	if (request == IDH_REQ_GPU_INIT_ACCESS) {
+	if (request == IDH_REQ_GPU_INIT_ACCESS ||
+	    request == IDH_REQ_GPU_FINI_ACCESS ||
+	    request == IDH_REQ_GPU_RESET_ACCESS) {
 		r = xgpu_vi_poll_msg(adev, IDH_READY_TO_ACCESS_GPU);
 		if (r)
 			return r;
--
2.7.4
* RE: [PATCH 03/11] drm/amdgpu:Refine handshake of mailbox
From: Yu, Xiangliang @ 2017-02-09 1:49 UTC
To: Liu, Monk; amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
Cc: Xue, Ken

Reviewed-by: Xiangliang Yu <Xiangliang.Yu@amd.com>

Thanks!
Xiangliang Yu

> -----Original Message-----
> From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf
> Of Monk Liu
> Sent: Wednesday, February 08, 2017 5:27 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Xue, Ken <Ken.Xue@amd.com>
> Subject: [PATCH 03/11] drm/amdgpu:Refine handshake of mailbox
>
> From: Ken Xue <Ken.Xue@amd.com>
>
> Change-Id: If3a7d05824847234759b86563e8052949e171972
> Signed-off-by: Ken Xue <Ken.Xue@amd.com>
> Acked-by: Alex Deucher <alexander.deucher@amd.com>
* [PATCH 04/11] drm/amdgpu:no kiq for mailbox registers access
From: Monk Liu @ 2017-02-08 9:26 UTC
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
Cc: Monk Liu

Use no kiq version reg access due to:
1) better performance
2) INTR context consideration (some routine in mailbox is in
   INTR context e.g.xgpu_vi_mailbox_rcv_irq)

Change-Id: I383d7ce858a136d7b112180f86e3d632d37b4d1c
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
index b8edfe5..7c7420f 100644
--- a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
+++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
@@ -321,12 +321,12 @@ static void xgpu_vi_mailbox_send_ack(struct amdgpu_device *adev)
 	int timeout = VI_MAILBOX_TIMEDOUT;
 	u32 mask = REG_FIELD_MASK(MAILBOX_CONTROL, RCV_MSG_VALID);
 
-	reg = RREG32(mmMAILBOX_CONTROL);
+	reg = RREG32_NO_KIQ(mmMAILBOX_CONTROL);
 	reg = REG_SET_FIELD(reg, MAILBOX_CONTROL, RCV_MSG_ACK, 1);
-	WREG32(mmMAILBOX_CONTROL, reg);
+	WREG32_NO_KIQ(mmMAILBOX_CONTROL, reg);
 
 	/*Wait for RCV_MSG_VALID to be 0*/
-	reg = RREG32(mmMAILBOX_CONTROL);
+	reg = RREG32_NO_KIQ(mmMAILBOX_CONTROL);
 	while (reg & mask) {
 		if (timeout <= 0) {
 			pr_err("RCV_MSG_VALID is not cleared\n");
@@ -335,7 +335,7 @@ static void xgpu_vi_mailbox_send_ack(struct amdgpu_device *adev)
 		mdelay(1);
 		timeout -=1;
 
-		reg = RREG32(mmMAILBOX_CONTROL);
+		reg = RREG32_NO_KIQ(mmMAILBOX_CONTROL);
 	}
 }
 
@@ -343,10 +343,10 @@ static void xgpu_vi_mailbox_set_valid(struct amdgpu_device *adev, bool val)
 {
 	u32 reg;
 
-	reg = RREG32(mmMAILBOX_CONTROL);
+	reg = RREG32_NO_KIQ(mmMAILBOX_CONTROL);
 	reg = REG_SET_FIELD(reg, MAILBOX_CONTROL,
 			    TRN_MSG_VALID, val ? 1 : 0);
-	WREG32(mmMAILBOX_CONTROL, reg);
+	WREG32_NO_KIQ(mmMAILBOX_CONTROL, reg);
 }
 
 static void xgpu_vi_mailbox_trans_msg(struct amdgpu_device *adev,
@@ -354,10 +354,10 @@ static void xgpu_vi_mailbox_trans_msg(struct amdgpu_device *adev,
 {
 	u32 reg;
 
-	reg = RREG32(mmMAILBOX_MSGBUF_TRN_DW0);
+	reg = RREG32_NO_KIQ(mmMAILBOX_MSGBUF_TRN_DW0);
 	reg = REG_SET_FIELD(reg, MAILBOX_MSGBUF_TRN_DW0,
 			    MSGBUF_DATA, event);
-	WREG32(mmMAILBOX_MSGBUF_TRN_DW0, reg);
+	WREG32_NO_KIQ(mmMAILBOX_MSGBUF_TRN_DW0, reg);
 
 	xgpu_vi_mailbox_set_valid(adev, true);
 }
@@ -368,11 +368,11 @@ static int xgpu_vi_mailbox_rcv_msg(struct amdgpu_device *adev,
 	u32 reg;
 	u32 mask = REG_FIELD_MASK(MAILBOX_CONTROL, RCV_MSG_VALID);
 
-	reg = RREG32(mmMAILBOX_CONTROL);
+	reg = RREG32_NO_KIQ(mmMAILBOX_CONTROL);
 	if (!(reg & mask))
 		return -ENOENT;
 
-	reg = RREG32(mmMAILBOX_MSGBUF_RCV_DW0);
+	reg = RREG32_NO_KIQ(mmMAILBOX_MSGBUF_RCV_DW0);
 	if (reg != event)
 		return -ENOENT;
 
@@ -388,7 +388,7 @@ static int xgpu_vi_poll_ack(struct amdgpu_device *adev)
 	u32 mask = REG_FIELD_MASK(MAILBOX_CONTROL, TRN_MSG_ACK);
 	u32 reg;
 
-	reg = RREG32(mmMAILBOX_CONTROL);
+	reg = RREG32_NO_KIQ(mmMAILBOX_CONTROL);
 	while (!(reg & mask)) {
 		if (timeout <= 0) {
 			pr_err("Doesn't get ack from pf.\n");
@@ -398,7 +398,7 @@ static int xgpu_vi_poll_ack(struct amdgpu_device *adev)
 		msleep(1);
 		timeout -= 1;
 
-		reg = RREG32(mmMAILBOX_CONTROL);
+		reg = RREG32_NO_KIQ(mmMAILBOX_CONTROL);
 	}
 
 	return r;
@@ -490,11 +490,11 @@ static int xgpu_vi_set_mailbox_ack_irq(struct amdgpu_device *adev,
 				       unsigned type,
 				       enum amdgpu_interrupt_state state)
 {
-	u32 tmp = RREG32(mmMAILBOX_INT_CNTL);
+	u32 tmp = RREG32_NO_KIQ(mmMAILBOX_INT_CNTL);
 
 	tmp = REG_SET_FIELD(tmp, MAILBOX_INT_CNTL, ACK_INT_EN,
 			    (state == AMDGPU_IRQ_STATE_ENABLE) ? 1 : 0);
-	WREG32(mmMAILBOX_INT_CNTL, tmp);
+	WREG32_NO_KIQ(mmMAILBOX_INT_CNTL, tmp);
 
 	return 0;
 }
@@ -519,11 +519,11 @@ static int xgpu_vi_set_mailbox_rcv_irq(struct amdgpu_device *adev,
 				       unsigned type,
 				       enum amdgpu_interrupt_state state)
 {
-	u32 tmp = RREG32(mmMAILBOX_INT_CNTL);
+	u32 tmp = RREG32_NO_KIQ(mmMAILBOX_INT_CNTL);
 
 	tmp = REG_SET_FIELD(tmp, MAILBOX_INT_CNTL, VALID_INT_EN,
 			    (state == AMDGPU_IRQ_STATE_ENABLE) ? 1 : 0);
-	WREG32(mmMAILBOX_INT_CNTL, tmp);
+	WREG32_NO_KIQ(mmMAILBOX_INT_CNTL, tmp);
 
 	return 0;
 }
--
2.7.4
* RE: [PATCH 04/11] drm/amdgpu:no kiq for mailbox registers access [not found] ` <1486546019-31045-4-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> @ 2017-02-09 1:50 ` Yu, Xiangliang 0 siblings, 0 replies; 28+ messages in thread From: Yu, Xiangliang @ 2017-02-09 1:50 UTC (permalink / raw) To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Liu, Monk Reviewed-by: Xiangliang Yu <Xiangliang.Yu@amd.com> Thanks! Xiangliang Yu > -----Original Message----- > From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf > Of Monk Liu > Sent: Wednesday, February 08, 2017 5:27 PM > To: amd-gfx@lists.freedesktop.org > Cc: Liu, Monk <Monk.Liu@amd.com> > Subject: [PATCH 04/11] drm/amdgpu:no kiq for mailbox registers access > > Use no kiq version reg access due to: > 1) better performance > 2) INTR context consideration (some routine in mailbox is in > INTR context e.g.xgpu_vi_mailbox_rcv_irq) > > Change-Id: I383d7ce858a136d7b112180f86e3d632d37b4d1c > Signed-off-by: Monk Liu <Monk.Liu@amd.com> > Reviewed-by: Alex Deucher <alexander.deucher@amd.com> > --- > drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c | 32 ++++++++++++++++-------- > -------- > 1 file changed, 16 insertions(+), 16 deletions(-) > > diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c > b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c > index b8edfe5..7c7420f 100644 > --- a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c > +++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c > @@ -321,12 +321,12 @@ static void xgpu_vi_mailbox_send_ack(struct > amdgpu_device *adev) > int timeout = VI_MAILBOX_TIMEDOUT; > u32 mask = REG_FIELD_MASK(MAILBOX_CONTROL, > RCV_MSG_VALID); > > - reg = RREG32(mmMAILBOX_CONTROL); > + reg = RREG32_NO_KIQ(mmMAILBOX_CONTROL); > reg = REG_SET_FIELD(reg, MAILBOX_CONTROL, RCV_MSG_ACK, 1); > - WREG32(mmMAILBOX_CONTROL, reg); > + WREG32_NO_KIQ(mmMAILBOX_CONTROL, reg); > > /*Wait for RCV_MSG_VALID to be 0*/ > - reg = RREG32(mmMAILBOX_CONTROL); > + reg = RREG32_NO_KIQ(mmMAILBOX_CONTROL); > while (reg & mask) { > if (timeout <= 
0) { > pr_err("RCV_MSG_VALID is not cleared\n"); @@ - > 335,7 +335,7 @@ static void xgpu_vi_mailbox_send_ack(struct > amdgpu_device *adev) > mdelay(1); > timeout -=1; > > - reg = RREG32(mmMAILBOX_CONTROL); > + reg = RREG32_NO_KIQ(mmMAILBOX_CONTROL); > } > } > > @@ -343,10 +343,10 @@ static void xgpu_vi_mailbox_set_valid(struct > amdgpu_device *adev, bool val) { > u32 reg; > > - reg = RREG32(mmMAILBOX_CONTROL); > + reg = RREG32_NO_KIQ(mmMAILBOX_CONTROL); > reg = REG_SET_FIELD(reg, MAILBOX_CONTROL, > TRN_MSG_VALID, val ? 1 : 0); > - WREG32(mmMAILBOX_CONTROL, reg); > + WREG32_NO_KIQ(mmMAILBOX_CONTROL, reg); > } > > static void xgpu_vi_mailbox_trans_msg(struct amdgpu_device *adev, @@ - > 354,10 +354,10 @@ static void xgpu_vi_mailbox_trans_msg(struct > amdgpu_device *adev, { > u32 reg; > > - reg = RREG32(mmMAILBOX_MSGBUF_TRN_DW0); > + reg = RREG32_NO_KIQ(mmMAILBOX_MSGBUF_TRN_DW0); > reg = REG_SET_FIELD(reg, MAILBOX_MSGBUF_TRN_DW0, > MSGBUF_DATA, event); > - WREG32(mmMAILBOX_MSGBUF_TRN_DW0, reg); > + WREG32_NO_KIQ(mmMAILBOX_MSGBUF_TRN_DW0, reg); > > xgpu_vi_mailbox_set_valid(adev, true); } @@ -368,11 +368,11 @@ > static int xgpu_vi_mailbox_rcv_msg(struct amdgpu_device *adev, > u32 reg; > u32 mask = REG_FIELD_MASK(MAILBOX_CONTROL, > RCV_MSG_VALID); > > - reg = RREG32(mmMAILBOX_CONTROL); > + reg = RREG32_NO_KIQ(mmMAILBOX_CONTROL); > if (!(reg & mask)) > return -ENOENT; > > - reg = RREG32(mmMAILBOX_MSGBUF_RCV_DW0); > + reg = RREG32_NO_KIQ(mmMAILBOX_MSGBUF_RCV_DW0); > if (reg != event) > return -ENOENT; > > @@ -388,7 +388,7 @@ static int xgpu_vi_poll_ack(struct amdgpu_device > *adev) > u32 mask = REG_FIELD_MASK(MAILBOX_CONTROL, TRN_MSG_ACK); > u32 reg; > > - reg = RREG32(mmMAILBOX_CONTROL); > + reg = RREG32_NO_KIQ(mmMAILBOX_CONTROL); > while (!(reg & mask)) { > if (timeout <= 0) { > pr_err("Doesn't get ack from pf.\n"); @@ -398,7 > +398,7 @@ static int xgpu_vi_poll_ack(struct amdgpu_device *adev) > msleep(1); > timeout -= 1; > > - reg = RREG32(mmMAILBOX_CONTROL); > + reg = 
RREG32_NO_KIQ(mmMAILBOX_CONTROL); > } > > return r; > @@ -490,11 +490,11 @@ static int xgpu_vi_set_mailbox_ack_irq(struct > amdgpu_device *adev, > unsigned type, > enum amdgpu_interrupt_state state) { > - u32 tmp = RREG32(mmMAILBOX_INT_CNTL); > + u32 tmp = RREG32_NO_KIQ(mmMAILBOX_INT_CNTL); > > tmp = REG_SET_FIELD(tmp, MAILBOX_INT_CNTL, ACK_INT_EN, > (state == AMDGPU_IRQ_STATE_ENABLE) ? 1 : 0); > - WREG32(mmMAILBOX_INT_CNTL, tmp); > + WREG32_NO_KIQ(mmMAILBOX_INT_CNTL, tmp); > > return 0; > } > @@ -519,11 +519,11 @@ static int xgpu_vi_set_mailbox_rcv_irq(struct > amdgpu_device *adev, > unsigned type, > enum amdgpu_interrupt_state state) { > - u32 tmp = RREG32(mmMAILBOX_INT_CNTL); > + u32 tmp = RREG32_NO_KIQ(mmMAILBOX_INT_CNTL); > > tmp = REG_SET_FIELD(tmp, MAILBOX_INT_CNTL, VALID_INT_EN, > (state == AMDGPU_IRQ_STATE_ENABLE) ? 1 : 0); > - WREG32(mmMAILBOX_INT_CNTL, tmp); > + WREG32_NO_KIQ(mmMAILBOX_INT_CNTL, tmp); > > return 0; > } > -- > 2.7.4 > > _______________________________________________ > amd-gfx mailing list > amd-gfx@lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/amd-gfx _______________________________________________ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx ^ permalink raw reply [flat|nested] 28+ messages in thread
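The `REG_SET_FIELD`/`REG_FIELD_MASK` helpers used throughout this patch boil down to masked read-modify-write updates on a cached register value. A minimal userspace sketch of that pattern — the field layout and names here are illustrative stand-ins, not the real `MAILBOX_CONTROL` encoding:

```c
#include <stdint.h>

/* Illustrative field: bit 8 plays the role of RCV_MSG_ACK here; the
 * real MAILBOX_CONTROL layout comes from the hardware headers. */
#define RCV_MSG_ACK_SHIFT 8u
#define RCV_MSG_ACK_MASK  (0x1u << RCV_MSG_ACK_SHIFT)

/* Read-modify-write one field inside a register value: clear the
 * field's bits, then OR in the shifted new value. This is the shape
 * the driver's REG_SET_FIELD() macro expands to. */
static uint32_t reg_set_field(uint32_t reg, uint32_t mask,
			      uint32_t shift, uint32_t val)
{
	return (reg & ~mask) | ((val << shift) & mask);
}
```

The NO_KIQ accessors change only *how* the final value reaches the hardware (direct MMIO instead of a KIQ ring submission); the field arithmetic above is identical for both paths.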
* [PATCH 05/11] drm/amdgpu:use work instead of delay-work [not found] ` <1486546019-31045-1-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> ` (2 preceding siblings ...) 2017-02-08 9:26 ` [PATCH 04/11] drm/amdgpu:no kiq for mailbox registers access Monk Liu @ 2017-02-08 9:26 ` Monk Liu [not found] ` <1486546019-31045-5-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> 2017-02-08 9:26 ` [PATCH 06/11] drm/amdgpu:RUNTIME flag should clr later Monk Liu ` (7 subsequent siblings) 11 siblings, 1 reply; 28+ messages in thread From: Monk Liu @ 2017-02-08 9:26 UTC (permalink / raw) To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Monk Liu no need to use a delay work since we don't know how much time hypervisor takes on FLR, so just polling and waiting in a work. Change-Id: I41b6336baa00b1fd299311349402a17951b585a2 Signed-off-by: Monk Liu <Monk.Liu@amd.com> --- drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h | 2 +- drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c | 36 +++++++++++++++----------------- 2 files changed, 18 insertions(+), 20 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h index 4b05568..846f29c 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h @@ -50,7 +50,7 @@ struct amdgpu_virt { struct mutex lock_reset; struct amdgpu_irq_src ack_irq; struct amdgpu_irq_src rcv_irq; - struct delayed_work flr_work; + struct work_struct flr_work; const struct amdgpu_virt_ops *ops; }; diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c index 7c7420f..5f156d3 100644 --- a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c +++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c @@ -501,17 +501,19 @@ static int xgpu_vi_set_mailbox_ack_irq(struct amdgpu_device *adev, static void xgpu_vi_mailbox_flr_work(struct work_struct *work) { - struct amdgpu_virt *virt = container_of(work, - struct amdgpu_virt, flr_work.work); - struct amdgpu_device *adev = container_of(virt, - 
struct amdgpu_device, virt); - int r = 0; - - r = xgpu_vi_poll_msg(adev, IDH_FLR_NOTIFICATION_CMPL); - if (r) - DRM_ERROR("failed to get flr cmpl msg from hypervior.\n"); + struct amdgpu_virt *virt = container_of(work, struct amdgpu_virt, flr_work); + struct amdgpu_device *adev = container_of(virt, struct amdgpu_device, virt); + + /* wait until RCV_MSG become 3 */ + if (!xgpu_vi_poll_msg(adev, IDH_FLR_NOTIFICATION_CMPL)) + adev->virt.caps &= ~AMDGPU_SRIOV_CAPS_RUNTIME; + else { + pr_err("failed to recieve FLR_CMPL\n"); + return; + } - /* TODO: need to restore gfx states */ + /* Trigger recovery due to world switch failure */ + amdgpu_sriov_gpu_reset(adev, false); } static int xgpu_vi_set_mailbox_rcv_irq(struct amdgpu_device *adev, @@ -534,15 +536,12 @@ static int xgpu_vi_mailbox_rcv_irq(struct amdgpu_device *adev, { int r; - adev->virt.caps &= ~AMDGPU_SRIOV_CAPS_RUNTIME; + /* see what event we get */ r = xgpu_vi_mailbox_rcv_msg(adev, IDH_FLR_NOTIFICATION); - /* do nothing for other msg */ - if (r) - return 0; - /* TODO: need to save gfx states */ - schedule_delayed_work(&adev->virt.flr_work, - msecs_to_jiffies(VI_MAILBOX_RESET_TIME)); + /* only handle FLR_NOTIFY now */ + if (!r) + schedule_work(&adev->virt.flr_work); return 0; } @@ -595,14 +594,13 @@ int xgpu_vi_mailbox_get_irq(struct amdgpu_device *adev) return r; } - INIT_DELAYED_WORK(&adev->virt.flr_work, xgpu_vi_mailbox_flr_work); + INIT_WORK(&adev->virt.flr_work, xgpu_vi_mailbox_flr_work); return 0; } void xgpu_vi_mailbox_put_irq(struct amdgpu_device *adev) { - cancel_delayed_work_sync(&adev->virt.flr_work); amdgpu_irq_put(adev, &adev->virt.ack_irq, 0); amdgpu_irq_put(adev, &adev->virt.rcv_irq, 0); } -- 2.7.4 _______________________________________________ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx ^ permalink raw reply related [flat|nested] 28+ messages in thread
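The `flr_work` change above leans on `container_of()`: the workqueue hands the callback a pointer to the embedded `work_struct` (no longer wrapped in a `delayed_work`, hence `flr_work` instead of `flr_work.work`), and the handler recovers the enclosing `amdgpu_virt` — and from it the `amdgpu_device`. A userspace sketch of that pointer arithmetic, with illustrative stand-in structs:

```c
#include <stddef.h>

/* Same idea as the kernel's container_of() (minus its type checking):
 * subtract the member's offset to get back to the enclosing object. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Stand-ins for work_struct / amdgpu_virt; fields are illustrative. */
struct fake_work { int pending; };

struct fake_virt {
	int caps;
	struct fake_work flr_work;  /* plain work now, no .work member */
};

/* What xgpu_vi_mailbox_flr_work() does on entry: map the callback's
 * work pointer back to the owning virt structure. */
static struct fake_virt *virt_from_work(struct fake_work *work)
{
	return container_of(work, struct fake_virt, flr_work);
}
```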
* RE: [PATCH 05/11] drm/amdgpu:use work instead of delay-work [not found] ` <1486546019-31045-5-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> @ 2017-02-08 16:38 ` Deucher, Alexander 2017-02-09 1:46 ` Yu, Xiangliang 1 sibling, 0 replies; 28+ messages in thread From: Deucher, Alexander @ 2017-02-08 16:38 UTC (permalink / raw) To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Liu, Monk > -----Original Message----- > From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf > Of Monk Liu > Sent: Wednesday, February 08, 2017 4:27 AM > To: amd-gfx@lists.freedesktop.org > Cc: Liu, Monk > Subject: [PATCH 05/11] drm/amdgpu:use work instead of delay-work > > no need to use a delay work since we don't know how > much time hypervisor takes on FLR, so just polling > and waiting in a work. > > Change-Id: I41b6336baa00b1fd299311349402a17951b585a2 > Signed-off-by: Monk Liu <Monk.Liu@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> > --- > drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h | 2 +- > drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c | 36 +++++++++++++++-------- > --------- > 2 files changed, 18 insertions(+), 20 deletions(-) > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h > b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h > index 4b05568..846f29c 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h > @@ -50,7 +50,7 @@ struct amdgpu_virt { > struct mutex lock_reset; > struct amdgpu_irq_src ack_irq; > struct amdgpu_irq_src rcv_irq; > - struct delayed_work flr_work; > + struct work_struct flr_work; > const struct amdgpu_virt_ops *ops; > }; > > diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c > b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c > index 7c7420f..5f156d3 100644 > --- a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c > +++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c > @@ -501,17 +501,19 @@ static int xgpu_vi_set_mailbox_ack_irq(struct > amdgpu_device *adev, > > static void xgpu_vi_mailbox_flr_work(struct 
work_struct *work) > { > - struct amdgpu_virt *virt = container_of(work, > - struct amdgpu_virt, flr_work.work); > - struct amdgpu_device *adev = container_of(virt, > - struct amdgpu_device, virt); > - int r = 0; > - > - r = xgpu_vi_poll_msg(adev, IDH_FLR_NOTIFICATION_CMPL); > - if (r) > - DRM_ERROR("failed to get flr cmpl msg from hypervior.\n"); > + struct amdgpu_virt *virt = container_of(work, struct amdgpu_virt, > flr_work); > + struct amdgpu_device *adev = container_of(virt, struct > amdgpu_device, virt); > + > + /* wait until RCV_MSG become 3 */ > + if (!xgpu_vi_poll_msg(adev, IDH_FLR_NOTIFICATION_CMPL)) > + adev->virt.caps &= ~AMDGPU_SRIOV_CAPS_RUNTIME; > + else { > + pr_err("failed to recieve FLR_CMPL\n"); > + return; > + } > > - /* TODO: need to restore gfx states */ > + /* Trigger recovery due to world switch failure */ > + amdgpu_sriov_gpu_reset(adev, false); > } > > static int xgpu_vi_set_mailbox_rcv_irq(struct amdgpu_device *adev, > @@ -534,15 +536,12 @@ static int xgpu_vi_mailbox_rcv_irq(struct > amdgpu_device *adev, > { > int r; > > - adev->virt.caps &= ~AMDGPU_SRIOV_CAPS_RUNTIME; > + /* see what event we get */ > r = xgpu_vi_mailbox_rcv_msg(adev, IDH_FLR_NOTIFICATION); > - /* do nothing for other msg */ > - if (r) > - return 0; > > - /* TODO: need to save gfx states */ > - schedule_delayed_work(&adev->virt.flr_work, > - msecs_to_jiffies(VI_MAILBOX_RESET_TIME)); > + /* only handle FLR_NOTIFY now */ > + if (!r) > + schedule_work(&adev->virt.flr_work); > > return 0; > } > @@ -595,14 +594,13 @@ int xgpu_vi_mailbox_get_irq(struct > amdgpu_device *adev) > return r; > } > > - INIT_DELAYED_WORK(&adev->virt.flr_work, > xgpu_vi_mailbox_flr_work); > + INIT_WORK(&adev->virt.flr_work, xgpu_vi_mailbox_flr_work); > > return 0; > } > > void xgpu_vi_mailbox_put_irq(struct amdgpu_device *adev) > { > - cancel_delayed_work_sync(&adev->virt.flr_work); > amdgpu_irq_put(adev, &adev->virt.ack_irq, 0); > amdgpu_irq_put(adev, &adev->virt.rcv_irq, 0); > } > -- > 2.7.4 > > 
* RE: [PATCH 05/11] drm/amdgpu:use work instead of delay-work [not found] ` <1486546019-31045-5-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> 2017-02-08 16:38 ` Deucher, Alexander @ 2017-02-09 1:46 ` Yu, Xiangliang 1 sibling, 0 replies; 28+ messages in thread From: Yu, Xiangliang @ 2017-02-09 1:46 UTC (permalink / raw) To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Liu, Monk Reviewed-by: Xiangliang Yu <Xiangliang.Yu> > -----Original Message----- > From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf > Of Monk Liu > Sent: Wednesday, February 08, 2017 5:27 PM > To: amd-gfx@lists.freedesktop.org > Cc: Liu, Monk <Monk.Liu@amd.com> > Subject: [PATCH 05/11] drm/amdgpu:use work instead of delay-work > > no need to use a delay work since we don't know how much time hypervisor > takes on FLR, so just polling and waiting in a work. > > Change-Id: I41b6336baa00b1fd299311349402a17951b585a2 > Signed-off-by: Monk Liu <Monk.Liu@amd.com> > --- > drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h | 2 +- > drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c | 36 +++++++++++++++-------- > --------- > 2 files changed, 18 insertions(+), 20 deletions(-) > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h > b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h > index 4b05568..846f29c 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h > @@ -50,7 +50,7 @@ struct amdgpu_virt { > struct mutex lock_reset; > struct amdgpu_irq_src ack_irq; > struct amdgpu_irq_src rcv_irq; > - struct delayed_work flr_work; > + struct work_struct flr_work; > const struct amdgpu_virt_ops *ops; > }; > > diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c > b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c > index 7c7420f..5f156d3 100644 > --- a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c > +++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c > @@ -501,17 +501,19 @@ static int xgpu_vi_set_mailbox_ack_irq(struct > amdgpu_device *adev, > > static void xgpu_vi_mailbox_flr_work(struct 
work_struct *work) { > - struct amdgpu_virt *virt = container_of(work, > - struct amdgpu_virt, flr_work.work); > - struct amdgpu_device *adev = container_of(virt, > - struct amdgpu_device, virt); > - int r = 0; > - > - r = xgpu_vi_poll_msg(adev, IDH_FLR_NOTIFICATION_CMPL); > - if (r) > - DRM_ERROR("failed to get flr cmpl msg from hypervior.\n"); > + struct amdgpu_virt *virt = container_of(work, struct amdgpu_virt, > flr_work); > + struct amdgpu_device *adev = container_of(virt, struct > amdgpu_device, > +virt); > + > + /* wait until RCV_MSG become 3 */ > + if (!xgpu_vi_poll_msg(adev, IDH_FLR_NOTIFICATION_CMPL)) > + adev->virt.caps &= ~AMDGPU_SRIOV_CAPS_RUNTIME; > + else { > + pr_err("failed to recieve FLR_CMPL\n"); > + return; > + } > > - /* TODO: need to restore gfx states */ > + /* Trigger recovery due to world switch failure */ > + amdgpu_sriov_gpu_reset(adev, false); > } > > static int xgpu_vi_set_mailbox_rcv_irq(struct amdgpu_device *adev, @@ - > 534,15 +536,12 @@ static int xgpu_vi_mailbox_rcv_irq(struct amdgpu_device > *adev, { > int r; > > - adev->virt.caps &= ~AMDGPU_SRIOV_CAPS_RUNTIME; > + /* see what event we get */ > r = xgpu_vi_mailbox_rcv_msg(adev, IDH_FLR_NOTIFICATION); > - /* do nothing for other msg */ > - if (r) > - return 0; > > - /* TODO: need to save gfx states */ > - schedule_delayed_work(&adev->virt.flr_work, > - msecs_to_jiffies(VI_MAILBOX_RESET_TIME)); > + /* only handle FLR_NOTIFY now */ > + if (!r) > + schedule_work(&adev->virt.flr_work); > > return 0; > } > @@ -595,14 +594,13 @@ int xgpu_vi_mailbox_get_irq(struct > amdgpu_device *adev) > return r; > } > > - INIT_DELAYED_WORK(&adev->virt.flr_work, > xgpu_vi_mailbox_flr_work); > + INIT_WORK(&adev->virt.flr_work, xgpu_vi_mailbox_flr_work); > > return 0; > } > > void xgpu_vi_mailbox_put_irq(struct amdgpu_device *adev) { > - cancel_delayed_work_sync(&adev->virt.flr_work); > amdgpu_irq_put(adev, &adev->virt.ack_irq, 0); > amdgpu_irq_put(adev, &adev->virt.rcv_irq, 0); } > -- > 2.7.4 > > 
* [PATCH 06/11] drm/amdgpu:RUNTIME flag should clr later [not found] ` <1486546019-31045-1-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> ` (3 preceding siblings ...) 2017-02-08 9:26 ` [PATCH 05/11] drm/amdgpu:use work instead of delay-work Monk Liu @ 2017-02-08 9:26 ` Monk Liu [not found] ` <1486546019-31045-6-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> 2017-02-08 9:26 ` [PATCH 07/11] drm/amdgpu:new field in_resete introduced for gfx Monk Liu ` (6 subsequent siblings) 11 siblings, 1 reply; 28+ messages in thread From: Monk Liu @ 2017-02-08 9:26 UTC (permalink / raw) To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Monk Liu this flag will get cleared by request gpu access Change-Id: Ie484bb0141420055370e019dcd8c110fb34f8a1b Signed-off-by: Monk Liu <Monk.Liu@amd.com> --- drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c index 5f156d3..98cbcd9 100644 --- a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c +++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c @@ -505,9 +505,7 @@ static void xgpu_vi_mailbox_flr_work(struct work_struct *work) struct amdgpu_device *adev = container_of(virt, struct amdgpu_device, virt); /* wait until RCV_MSG become 3 */ - if (!xgpu_vi_poll_msg(adev, IDH_FLR_NOTIFICATION_CMPL)) - adev->virt.caps &= ~AMDGPU_SRIOV_CAPS_RUNTIME; - else { + if (xgpu_vi_poll_msg(adev, IDH_FLR_NOTIFICATION_CMPL)) { pr_err("failed to recieve FLR_CMPL\n"); return; } -- 2.7.4 _______________________________________________ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx ^ permalink raw reply related [flat|nested] 28+ messages in thread
* RE: [PATCH 06/11] drm/amdgpu:RUNTIME flag should clr later [not found] ` <1486546019-31045-6-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> @ 2017-02-08 16:39 ` Deucher, Alexander 2017-02-09 1:47 ` Yu, Xiangliang 1 sibling, 0 replies; 28+ messages in thread From: Deucher, Alexander @ 2017-02-08 16:39 UTC (permalink / raw) To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Liu, Monk > -----Original Message----- > From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf > Of Monk Liu > Sent: Wednesday, February 08, 2017 4:27 AM > To: amd-gfx@lists.freedesktop.org > Cc: Liu, Monk > Subject: [PATCH 06/11] drm/amdgpu:RUNTIME flag should clr later > > this flag will get cleared by request gpu access > > Change-Id: Ie484bb0141420055370e019dcd8c110fb34f8a1b > Signed-off-by: Monk Liu <Monk.Liu@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> > --- > drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c | 4 +--- > 1 file changed, 1 insertion(+), 3 deletions(-) > > diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c > b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c > index 5f156d3..98cbcd9 100644 > --- a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c > +++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c > @@ -505,9 +505,7 @@ static void xgpu_vi_mailbox_flr_work(struct > work_struct *work) > struct amdgpu_device *adev = container_of(virt, struct > amdgpu_device, virt); > > /* wait until RCV_MSG become 3 */ > - if (!xgpu_vi_poll_msg(adev, IDH_FLR_NOTIFICATION_CMPL)) > - adev->virt.caps &= ~AMDGPU_SRIOV_CAPS_RUNTIME; > - else { > + if (xgpu_vi_poll_msg(adev, IDH_FLR_NOTIFICATION_CMPL)) { > pr_err("failed to recieve FLR_CMPL\n"); > return; > } > -- > 2.7.4 > > _______________________________________________ > amd-gfx mailing list > amd-gfx@lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/amd-gfx _______________________________________________ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx ^ permalink raw 
* RE: [PATCH 06/11] drm/amdgpu:RUNTIME flag should clr later [not found] ` <1486546019-31045-6-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> 2017-02-08 16:39 ` Deucher, Alexander @ 2017-02-09 1:47 ` Yu, Xiangliang 1 sibling, 0 replies; 28+ messages in thread From: Yu, Xiangliang @ 2017-02-09 1:47 UTC (permalink / raw) To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Liu, Monk Reviewed-by: Xiangliang.Yu <Xiangliang.Yu> > -----Original Message----- > From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf > Of Monk Liu > Sent: Wednesday, February 08, 2017 5:27 PM > To: amd-gfx@lists.freedesktop.org > Cc: Liu, Monk <Monk.Liu@amd.com> > Subject: [PATCH 06/11] drm/amdgpu:RUNTIME flag should clr later > > this flag will get cleared by request gpu access > > Change-Id: Ie484bb0141420055370e019dcd8c110fb34f8a1b > Signed-off-by: Monk Liu <Monk.Liu@amd.com> > --- > drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c | 4 +--- > 1 file changed, 1 insertion(+), 3 deletions(-) > > diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c > b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c > index 5f156d3..98cbcd9 100644 > --- a/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c > +++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c > @@ -505,9 +505,7 @@ static void xgpu_vi_mailbox_flr_work(struct > work_struct *work) > struct amdgpu_device *adev = container_of(virt, struct > amdgpu_device, virt); > > /* wait until RCV_MSG become 3 */ > - if (!xgpu_vi_poll_msg(adev, IDH_FLR_NOTIFICATION_CMPL)) > - adev->virt.caps &= ~AMDGPU_SRIOV_CAPS_RUNTIME; > - else { > + if (xgpu_vi_poll_msg(adev, IDH_FLR_NOTIFICATION_CMPL)) { > pr_err("failed to recieve FLR_CMPL\n"); > return; > } > -- > 2.7.4 > > _______________________________________________ > amd-gfx mailing list > amd-gfx@lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/amd-gfx _______________________________________________ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx ^ permalink 
* [PATCH 07/11] drm/amdgpu:new field in_resete introduced for gfx [not found] ` <1486546019-31045-1-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> ` (4 preceding siblings ...) 2017-02-08 9:26 ` [PATCH 06/11] drm/amdgpu:RUNTIME flag should clr later Monk Liu @ 2017-02-08 9:26 ` Monk Liu [not found] ` <1486546019-31045-7-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> 2017-02-08 9:26 ` [PATCH 08/11] drm/amdgpu:alloc mqd backup Monk Liu ` (5 subsequent siblings) 11 siblings, 1 reply; 28+ messages in thread From: Monk Liu @ 2017-02-08 9:26 UTC (permalink / raw) To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Monk Liu use it to seperate driver load and gpu reset/resume because gfx IP need different approach for different hw_init trigger source Change-Id: I991e0da52ccd197716d279bf9014de46d39acfea Signed-off-by: Monk Liu <Monk.Liu@amd.com> --- drivers/gpu/drm/amd/amdgpu/amdgpu.h | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h index 74bffca8..acd9970 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h @@ -901,6 +901,7 @@ struct amdgpu_gfx { /* reset mask */ uint32_t grbm_soft_reset; uint32_t srbm_soft_reset; + bool in_reset; }; int amdgpu_ib_get(struct amdgpu_device *adev, struct amdgpu_vm *vm, -- 2.7.4 _______________________________________________ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx ^ permalink raw reply related [flat|nested] 28+ messages in thread
* RE: [PATCH 07/11] drm/amdgpu:new field in_resete introduced for gfx [not found] ` <1486546019-31045-7-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> @ 2017-02-08 16:42 ` Deucher, Alexander 0 siblings, 0 replies; 28+ messages in thread From: Deucher, Alexander @ 2017-02-08 16:42 UTC (permalink / raw) To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Liu, Monk > -----Original Message----- > From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf > Of Monk Liu > Sent: Wednesday, February 08, 2017 4:27 AM > To: amd-gfx@lists.freedesktop.org > Cc: Liu, Monk > Subject: [PATCH 07/11] drm/amdgpu:new field in_resete introduced for gfx > > use it to seperate driver load and gpu reset/resume > because gfx IP need different approach for different > hw_init trigger source > > Change-Id: I991e0da52ccd197716d279bf9014de46d39acfea > Signed-off-by: Monk Liu <Monk.Liu@amd.com> Typo in the patch title (resete), with that fixed: Reviewed-by: Alex Deucher <alexander.deucher@amd.com> > --- > drivers/gpu/drm/amd/amdgpu/amdgpu.h | 1 + > 1 file changed, 1 insertion(+) > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h > b/drivers/gpu/drm/amd/amdgpu/amdgpu.h > index 74bffca8..acd9970 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h > @@ -901,6 +901,7 @@ struct amdgpu_gfx { > /* reset mask */ > uint32_t grbm_soft_reset; > uint32_t srbm_soft_reset; > + bool in_reset; > }; > > int amdgpu_ib_get(struct amdgpu_device *adev, struct amdgpu_vm *vm, > -- > 2.7.4 > > _______________________________________________ > amd-gfx mailing list > amd-gfx@lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/amd-gfx _______________________________________________ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx ^ permalink raw reply [flat|nested] 28+ messages in thread
* [PATCH 08/11] drm/amdgpu:alloc mqd backup [not found] ` <1486546019-31045-1-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> ` (5 preceding siblings ...) 2017-02-08 9:26 ` [PATCH 07/11] drm/amdgpu:new field in_resete introduced for gfx Monk Liu @ 2017-02-08 9:26 ` Monk Liu [not found] ` <1486546019-31045-8-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> 2017-02-08 9:26 ` [PATCH 09/11] drm/amdgpu:imple ring clear Monk Liu ` (4 subsequent siblings) 11 siblings, 1 reply; 28+ messages in thread From: Monk Liu @ 2017-02-08 9:26 UTC (permalink / raw) To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Monk Liu this is required for restoring the mqds after GPU reset. Change-Id: I84f821faa657a5d942c33d30f206eb66b579c2f8 Signed-off-by: Monk Liu <Monk.Liu@amd.com> --- drivers/gpu/drm/amd/amdgpu/amdgpu.h | 1 + drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 10 ++++++++++ 2 files changed, 11 insertions(+) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h index acd9970..73086d0 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h @@ -781,6 +781,7 @@ struct amdgpu_mec { u32 num_pipe; u32 num_mec; u32 num_queue; + struct vi_mqd *mqd_backup[AMDGPU_MAX_COMPUTE_RINGS + 1]; }; struct amdgpu_kiq { diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c index 8f545992..b0612d1 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c @@ -7309,6 +7309,11 @@ static int gfx_v8_0_compute_mqd_soft_init(struct amdgpu_device *adev) dev_warn(adev->dev, "failed to create ring mqd ob (%d)", r); return r; } + + /* prepare MQD backup */ + adev->gfx.mec.mqd_backup[AMDGPU_MAX_COMPUTE_RINGS] = kmalloc(sizeof(struct vi_mqd), GFP_KERNEL); + if (!adev->gfx.mec.mqd_backup[AMDGPU_MAX_COMPUTE_RINGS]) + dev_warn(adev->dev, "no memory to create MQD backup for ring %s\n", ring->name); } /* create MQD for each KCQ */ @@ -7323,6 +7328,11 @@ static int 
gfx_v8_0_compute_mqd_soft_init(struct amdgpu_device *adev) dev_warn(adev->dev, "failed to create ring mqd ob (%d)", r); return r; } + + /* prepare MQD backup */ + adev->gfx.mec.mqd_backup[i] = kmalloc(sizeof(struct vi_mqd), GFP_KERNEL); + if (!adev->gfx.mec.mqd_backup[i]) + dev_warn(adev->dev, "no memory to create MQD backup for ring %s\n", ring->name); } } -- 2.7.4 _______________________________________________ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx ^ permalink raw reply related [flat|nested] 28+ messages in thread
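The backup buffers allocated in patch 08 exist so each queue's MQD (memory queue descriptor) can be snapshotted at init and written back after a GPU reset. A hedged userspace sketch of that snapshot/restore idea — `struct fake_mqd` and both helpers are illustrative, not driver API (the real `struct vi_mqd` is a large hardware-defined descriptor, and the restore step lands in later patches):

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-in for struct vi_mqd. */
struct fake_mqd { unsigned int regs[16]; };

/* Snapshot a live MQD, mirroring the kmalloc-a-backup step in
 * gfx_v8_0_compute_mqd_soft_init(); returns NULL on allocation
 * failure, which the driver only warns about rather than failing. */
static struct fake_mqd *mqd_backup_create(const struct fake_mqd *live)
{
	struct fake_mqd *backup = malloc(sizeof(*backup));

	if (backup)
		memcpy(backup, live, sizeof(*backup));
	return backup;
}

/* After a reset has clobbered the live copy, restore it wholesale
 * from the backup. */
static void mqd_restore(struct fake_mqd *live, const struct fake_mqd *backup)
{
	memcpy(live, backup, sizeof(*live));
}
```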
* RE: [PATCH 08/11] drm/amdgpu:alloc mqd backup [not found] ` <1486546019-31045-8-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> @ 2017-02-08 16:42 ` Deucher, Alexander 2017-02-09 1:51 ` Yu, Xiangliang 1 sibling, 0 replies; 28+ messages in thread From: Deucher, Alexander @ 2017-02-08 16:42 UTC (permalink / raw) To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Liu, Monk > -----Original Message----- > From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf > Of Monk Liu > Sent: Wednesday, February 08, 2017 4:27 AM > To: amd-gfx@lists.freedesktop.org > Cc: Liu, Monk > Subject: [PATCH 08/11] drm/amdgpu:alloc mqd backup > > this is required for restoring the mqds after GPU reset. > > Change-Id: I84f821faa657a5d942c33d30f206eb66b579c2f8 > Signed-off-by: Monk Liu <Monk.Liu@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> > --- > drivers/gpu/drm/amd/amdgpu/amdgpu.h | 1 + > drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 10 ++++++++++ > 2 files changed, 11 insertions(+) > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h > b/drivers/gpu/drm/amd/amdgpu/amdgpu.h > index acd9970..73086d0 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h > @@ -781,6 +781,7 @@ struct amdgpu_mec { > u32 num_pipe; > u32 num_mec; > u32 num_queue; > + struct vi_mqd *mqd_backup[AMDGPU_MAX_COMPUTE_RINGS + > 1]; > }; > > struct amdgpu_kiq { > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > index 8f545992..b0612d1 100644 > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > @@ -7309,6 +7309,11 @@ static int gfx_v8_0_compute_mqd_soft_init(struct > amdgpu_device *adev) > dev_warn(adev->dev, "failed to create ring mqd ob > (%d)", r); > return r; > } > + > + /* prepare MQD backup */ > + adev- > >gfx.mec.mqd_backup[AMDGPU_MAX_COMPUTE_RINGS] = > kmalloc(sizeof(struct vi_mqd), GFP_KERNEL); > + if (!adev- > 
>gfx.mec.mqd_backup[AMDGPU_MAX_COMPUTE_RINGS]) > + dev_warn(adev->dev, "no memory to create > MQD backup for ring %s\n", ring->name); > } > > /* create MQD for each KCQ */ > @@ -7323,6 +7328,11 @@ static int gfx_v8_0_compute_mqd_soft_init(struct > amdgpu_device *adev) > dev_warn(adev->dev, "failed to create ring > mqd ob (%d)", r); > return r; > } > + > + /* prepare MQD backup */ > + adev->gfx.mec.mqd_backup[i] = > kmalloc(sizeof(struct vi_mqd), GFP_KERNEL); > + if (!adev->gfx.mec.mqd_backup[i]) > + dev_warn(adev->dev, "no memory to create > MQD backup for ring %s\n", ring->name); > } > } > > -- > 2.7.4 > > _______________________________________________ > amd-gfx mailing list > amd-gfx@lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/amd-gfx _______________________________________________ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx ^ permalink raw reply [flat|nested] 28+ messages in thread
* RE: [PATCH 08/11] drm/amdgpu:alloc mqd backup [not found] ` <1486546019-31045-8-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> 2017-02-08 16:42 ` Deucher, Alexander @ 2017-02-09 1:51 ` Yu, Xiangliang 1 sibling, 0 replies; 28+ messages in thread From: Yu, Xiangliang @ 2017-02-09 1:51 UTC (permalink / raw) To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Liu, Monk Reviewed-by: Xiangliang Yu <Xiangliang.Yu@amd.com> Thanks! Xiangliang Yu > -----Original Message----- > From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf > Of Monk Liu > Sent: Wednesday, February 08, 2017 5:27 PM > To: amd-gfx@lists.freedesktop.org > Cc: Liu, Monk <Monk.Liu@amd.com> > Subject: [PATCH 08/11] drm/amdgpu:alloc mqd backup > > this is required for restoring the mqds after GPU reset. > > Change-Id: I84f821faa657a5d942c33d30f206eb66b579c2f8 > Signed-off-by: Monk Liu <Monk.Liu@amd.com> > --- > drivers/gpu/drm/amd/amdgpu/amdgpu.h | 1 + > drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 10 ++++++++++ > 2 files changed, 11 insertions(+) > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h > b/drivers/gpu/drm/amd/amdgpu/amdgpu.h > index acd9970..73086d0 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h > @@ -781,6 +781,7 @@ struct amdgpu_mec { > u32 num_pipe; > u32 num_mec; > u32 num_queue; > + struct vi_mqd *mqd_backup[AMDGPU_MAX_COMPUTE_RINGS + > 1]; > }; > > struct amdgpu_kiq { > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > index 8f545992..b0612d1 100644 > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > @@ -7309,6 +7309,11 @@ static int gfx_v8_0_compute_mqd_soft_init(struct > amdgpu_device *adev) > dev_warn(adev->dev, "failed to create ring mqd ob > (%d)", r); > return r; > } > + > + /* prepare MQD backup */ > + adev- > >gfx.mec.mqd_backup[AMDGPU_MAX_COMPUTE_RINGS] = > kmalloc(sizeof(struct vi_mqd), GFP_KERNEL); > + if (!adev- > 
>gfx.mec.mqd_backup[AMDGPU_MAX_COMPUTE_RINGS]) > + dev_warn(adev->dev, "no memory to create > MQD backup for ring %s\n", ring->name); > } > > /* create MQD for each KCQ */ > @@ -7323,6 +7328,11 @@ static int gfx_v8_0_compute_mqd_soft_init(struct > amdgpu_device *adev) > dev_warn(adev->dev, "failed to create ring > mqd ob (%d)", r); > return r; > } > + > + /* prepare MQD backup */ > + adev->gfx.mec.mqd_backup[i] = > kmalloc(sizeof(struct vi_mqd), GFP_KERNEL); > + if (!adev->gfx.mec.mqd_backup[i]) > + dev_warn(adev->dev, "no memory to create > MQD backup for ring %s\n", ring->name); > } > } > > -- > 2.7.4 > > _______________________________________________ > amd-gfx mailing list > amd-gfx@lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/amd-gfx _______________________________________________ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx ^ permalink raw reply [flat|nested] 28+ messages in thread
* [PATCH 09/11] drm/amdgpu:imple ring clear [not found] ` <1486546019-31045-1-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> ` (6 preceding siblings ...) 2017-02-08 9:26 ` [PATCH 08/11] drm/amdgpu:alloc mqd backup Monk Liu @ 2017-02-08 9:26 ` Monk Liu [not found] ` <1486546019-31045-9-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> 2017-02-08 9:26 ` [PATCH 10/11] drm/amdgpu:use clear_ring to clr RB Monk Liu ` (3 subsequent siblings) 11 siblings, 1 reply; 28+ messages in thread From: Monk Liu @ 2017-02-08 9:26 UTC (permalink / raw) To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Monk Liu we can use it to clear the ring buffer instead of filling it with 0, which is not correct for the engine Change-Id: I89dcd7b6c4de558f9b2860209a2739c7d4af262d Signed-off-by: Monk Liu <Monk.Liu@amd.com> --- drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h index 0e57b04..3fd4ce8 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h @@ -186,5 +186,12 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring, unsigned ring_size, struct amdgpu_irq_src *irq_src, unsigned irq_type); void amdgpu_ring_fini(struct amdgpu_ring *ring); +static inline void amdgpu_ring_clear_ring(struct amdgpu_ring *ring) +{ + int i = 0; + while (i <= ring->ptr_mask) + ring->ring[i++] = ring->funcs->nop; + +} #endif -- 2.7.4 _______________________________________________ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx ^ permalink raw reply related [flat|nested] 28+ messages in thread
[parent not found: <1486546019-31045-9-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org>]
* RE: [PATCH 09/11] drm/amdgpu:imple ring clear [not found] ` <1486546019-31045-9-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> @ 2017-02-08 16:40 ` Deucher, Alexander 0 siblings, 0 replies; 28+ messages in thread From: Deucher, Alexander @ 2017-02-08 16:40 UTC (permalink / raw) To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Liu, Monk > -----Original Message----- > From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf > Of Monk Liu > Sent: Wednesday, February 08, 2017 4:27 AM > To: amd-gfx@lists.freedesktop.org > Cc: Liu, Monk > Subject: [PATCH 09/11] drm/amdgpu:imple ring clear > > we can use it clear ring buffer instead of fullfill > 0, which is not correct for engine > > Change-Id: I89dcd7b6c4de558f9b2860209a2739c7d4af262d > Signed-off-by: Monk Liu <Monk.Liu@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> > --- > drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 7 +++++++ > 1 file changed, 7 insertions(+) > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h > b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h > index 0e57b04..3fd4ce8 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h > @@ -186,5 +186,12 @@ int amdgpu_ring_init(struct amdgpu_device *adev, > struct amdgpu_ring *ring, > unsigned ring_size, struct amdgpu_irq_src *irq_src, > unsigned irq_type); > void amdgpu_ring_fini(struct amdgpu_ring *ring); > +static inline void amdgpu_ring_clear_ring(struct amdgpu_ring *ring) > +{ > + int i = 0; > + while (i <= ring->ptr_mask) > + ring->ring[i++] = ring->funcs->nop; > + > +} > > #endif > -- > 2.7.4 > > _______________________________________________ > amd-gfx mailing list > amd-gfx@lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/amd-gfx _______________________________________________ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx ^ permalink raw reply [flat|nested] 28+ messages in thread
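The helper reviewed above fills every dword of the ring with the engine-specific NOP packet, where a plain `memset(..., 0, ...)` would leave words the command processor cannot parse as packets. A minimal standalone sketch of the same loop — `ring_sketch` is a hypothetical stripped-down stand-in for `struct amdgpu_ring`, and the NOP value used below is made up for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stripped-down ring; the real struct amdgpu_ring is far
 * larger and keeps the NOP packet behind ring->funcs->nop. */
struct ring_sketch {
	uint32_t ring[8];
	uint32_t ptr_mask; /* entries - 1; ring size is a power of two */
	uint32_t nop;      /* engine-specific NOP packet */
};

/* Mirror of the inline helper: walk the whole ring (indices 0 through
 * ptr_mask inclusive) and write the NOP packet into each slot. */
static void ring_clear(struct ring_sketch *r)
{
	uint32_t i;

	for (i = 0; i <= r->ptr_mask; i++)
		r->ring[i] = r->nop;
}
```

Since `ptr_mask` is `entries - 1`, the `<=` bound covers the full buffer; an off-by-one here would leave the last slot as garbage.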
* [PATCH 10/11] drm/amdgpu:use clear_ring to clr RB [not found] ` <1486546019-31045-1-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> ` (7 preceding siblings ...) 2017-02-08 9:26 ` [PATCH 09/11] drm/amdgpu:imple ring clear Monk Liu @ 2017-02-08 9:26 ` Monk Liu [not found] ` <1486546019-31045-10-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> 2017-02-08 9:26 ` [PATCH 11/11] drm/amdgpu:fix kiq_resume routine (V2) Monk Liu ` (2 subsequent siblings) 11 siblings, 1 reply; 28+ messages in thread From: Monk Liu @ 2017-02-08 9:26 UTC (permalink / raw) To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Monk Liu In the resume routine, we need to clear the RB prior to the engine's ring test, otherwise some engine hangs recur during GPU reset. Change-Id: Ie28f5aa677074f922e4a1a2eeeb7fe06461d9bdb Signed-off-by: Monk Liu <Monk.Liu@amd.com> --- drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 2 +- drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 1 + drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c | 1 + 3 files changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c index 7bacf3c..37d8422 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c @@ -230,7 +230,7 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring, dev_err(adev->dev, "(%d) ring create failed\n", r); return r; } - memset((void *)ring->ring, 0, ring->ring_size); + amdgpu_ring_clear_ring(ring); } ring->ptr_mask = (ring->ring_size / 4) - 1; ring->max_dw = max_dw; diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c index b0612d1..6584173 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c @@ -4509,6 +4509,7 @@ static int gfx_v8_0_cp_gfx_resume(struct amdgpu_device *adev) } /* start the ring */ + amdgpu_ring_clear_ring(ring); gfx_v8_0_cp_gfx_start(adev); ring->ready = true; r = amdgpu_ring_test_ring(ring); diff
--git a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c index 9394ca6..d5206f5 100644 --- a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c @@ -615,6 +615,7 @@ static int sdma_v3_0_gfx_resume(struct amdgpu_device *adev) for (i = 0; i < adev->sdma.num_instances; i++) { ring = &adev->sdma.instance[i].ring; + amdgpu_ring_clear_ring(ring); wb_offset = (ring->rptr_offs * 4); mutex_lock(&adev->srbm_mutex); -- 2.7.4 _______________________________________________ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx ^ permalink raw reply related [flat|nested] 28+ messages in thread
[parent not found: <1486546019-31045-10-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org>]
* RE: [PATCH 10/11] drm/amdgpu:use clear_ring to clr RB [not found] ` <1486546019-31045-10-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> @ 2017-02-08 16:41 ` Deucher, Alexander 0 siblings, 0 replies; 28+ messages in thread From: Deucher, Alexander @ 2017-02-08 16:41 UTC (permalink / raw) To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Liu, Monk > -----Original Message----- > From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf > Of Monk Liu > Sent: Wednesday, February 08, 2017 4:27 AM > To: amd-gfx@lists.freedesktop.org > Cc: Liu, Monk > Subject: [PATCH 10/11] drm/amdgpu:use clear_ring to clr RB > > In resume routine, we need clr RB prior to the > ring test of engine, otherwise some engine hang > duplicated during GPU reset. > > Change-Id: Ie28f5aa677074f922e4a1a2eeeb7fe06461d9bdb > Signed-off-by: Monk Liu <Monk.Liu@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> > --- > drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 2 +- > drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 1 + > drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c | 1 + > 3 files changed, 3 insertions(+), 1 deletion(-) > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c > b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c > index 7bacf3c..37d8422 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c > @@ -230,7 +230,7 @@ int amdgpu_ring_init(struct amdgpu_device *adev, > struct amdgpu_ring *ring, > dev_err(adev->dev, "(%d) ring create failed\n", r); > return r; > } > - memset((void *)ring->ring, 0, ring->ring_size); > + amdgpu_ring_clear_ring(ring); > } > ring->ptr_mask = (ring->ring_size / 4) - 1; > ring->max_dw = max_dw; > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > index b0612d1..6584173 100644 > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > @@ -4509,6 +4509,7 @@ static int gfx_v8_0_cp_gfx_resume(struct > amdgpu_device *adev) > } > > /* 
start the ring */ > + amdgpu_ring_clear_ring(ring); > gfx_v8_0_cp_gfx_start(adev); > ring->ready = true; > r = amdgpu_ring_test_ring(ring); > diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c > b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c > index 9394ca6..d5206f5 100644 > --- a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c > +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c > @@ -615,6 +615,7 @@ static int sdma_v3_0_gfx_resume(struct > amdgpu_device *adev) > > for (i = 0; i < adev->sdma.num_instances; i++) { > ring = &adev->sdma.instance[i].ring; > + amdgpu_ring_clear_ring(ring); > wb_offset = (ring->rptr_offs * 4); > > mutex_lock(&adev->srbm_mutex); > -- > 2.7.4 > > _______________________________________________ > amd-gfx mailing list > amd-gfx@lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/amd-gfx _______________________________________________ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx ^ permalink raw reply [flat|nested] 28+ messages in thread
* [PATCH 11/11] drm/amdgpu:fix kiq_resume routine (V2) [not found] ` <1486546019-31045-1-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> ` (8 preceding siblings ...) 2017-02-08 9:26 ` [PATCH 10/11] drm/amdgpu:use clear_ring to clr RB Monk Liu @ 2017-02-08 9:26 ` Monk Liu [not found] ` <1486546019-31045-11-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> 2017-02-08 9:34 ` [PATCH 01/11] drm/amdgpu:use MACRO like other places Michel Dänzer 2017-02-08 16:35 ` Deucher, Alexander 11 siblings, 1 reply; 28+ messages in thread From: Monk Liu @ 2017-02-08 9:26 UTC (permalink / raw) To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Monk Liu v2: use in_reset to fix the compute ring test failure issue which occurred after FLR/gpu_reset. We need to back up a clean copy of the MQD, which was created at driver load stage, and use it in the resume stage; otherwise KCQ and KIQ may all fail in the ring/ib test. Change-Id: I41be940454a6638e9a8a05f096601eaa1fbebaab Signed-off-by: Monk Liu <Monk.Liu@amd.com> --- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 2 ++ drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 44 ++++++++++++++++++++++-------- 2 files changed, 35 insertions(+), 11 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c index 5215fc5..afcae15 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c @@ -2410,6 +2410,7 @@ int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, bool voluntary) mutex_lock(&adev->virt.lock_reset); atomic_inc(&adev->gpu_reset_counter); + adev->gfx.in_reset = true; /* block TTM */ resched = ttm_bo_lock_delayed_workqueue(&adev->mman.bdev); @@ -2494,6 +2495,7 @@ int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, bool voluntary) dev_info(adev->dev, "GPU reset failed\n"); } + adev->gfx.in_reset = false; mutex_unlock(&adev->virt.lock_reset); return r; } diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c index
6584173..1822420 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c @@ -4877,24 +4877,46 @@ static int gfx_v8_0_kiq_init_queue(struct amdgpu_ring *ring, struct amdgpu_kiq *kiq = &adev->gfx.kiq; uint64_t eop_gpu_addr; bool is_kiq = (ring->funcs->type == AMDGPU_RING_TYPE_KIQ); + int mqd_idx = AMDGPU_MAX_COMPUTE_RINGS; if (is_kiq) { eop_gpu_addr = kiq->eop_gpu_addr; gfx_v8_0_kiq_setting(&kiq->ring); - } else + } else { eop_gpu_addr = adev->gfx.mec.hpd_eop_gpu_addr + ring->queue * MEC_HPD_SIZE; + mqd_idx = ring - &adev->gfx.compute_ring[0]; + } - mutex_lock(&adev->srbm_mutex); - vi_srbm_select(adev, ring->me, ring->pipe, ring->queue, 0); + if (!adev->gfx.in_reset) { + memset((void *)mqd, 0, sizeof(*mqd)); + mutex_lock(&adev->srbm_mutex); + vi_srbm_select(adev, ring->me, ring->pipe, ring->queue, 0); + gfx_v8_0_mqd_init(adev, mqd, mqd_gpu_addr, eop_gpu_addr, ring); + if (is_kiq) + gfx_v8_0_kiq_init_register(adev, mqd, ring); + vi_srbm_select(adev, 0, 0, 0, 0); + mutex_unlock(&adev->srbm_mutex); - gfx_v8_0_mqd_init(adev, mqd, mqd_gpu_addr, eop_gpu_addr, ring); + if (adev->gfx.mec.mqd_backup[mqd_idx]) + memcpy(adev->gfx.mec.mqd_backup[mqd_idx], mqd, sizeof(*mqd)); + } else { /* for GPU_RESET case */ + /* reset MQD to a clean status */ + if (adev->gfx.mec.mqd_backup[mqd_idx]) + memcpy(mqd, adev->gfx.mec.mqd_backup[mqd_idx], sizeof(*mqd)); - if (is_kiq) - gfx_v8_0_kiq_init_register(adev, mqd, ring); - - vi_srbm_select(adev, 0, 0, 0, 0); - mutex_unlock(&adev->srbm_mutex); + /* reset ring buffer */ + ring->wptr = 0; + amdgpu_ring_clear_ring(ring); + + if (is_kiq) { + mutex_lock(&adev->srbm_mutex); + vi_srbm_select(adev, ring->me, ring->pipe, ring->queue, 0); + gfx_v8_0_kiq_init_register(adev, mqd, ring); + vi_srbm_select(adev, 0, 0, 0, 0); + mutex_unlock(&adev->srbm_mutex); + } + } if (is_kiq) gfx_v8_0_kiq_enable(ring); @@ -4913,9 +4935,9 @@ static int gfx_v8_0_kiq_resume(struct amdgpu_device *adev) ring = &adev->gfx.kiq.ring; if 
(!amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr)) { - memset((void *)ring->mqd_ptr, 0, sizeof(struct vi_mqd)); r = gfx_v8_0_kiq_init_queue(ring, ring->mqd_ptr, ring->mqd_gpu_addr); amdgpu_bo_kunmap(ring->mqd_obj); + ring->mqd_ptr = NULL; if (r) return r; } else { @@ -4925,9 +4947,9 @@ static int gfx_v8_0_kiq_resume(struct amdgpu_device *adev) for (i = 0; i < adev->gfx.num_compute_rings; i++) { ring = &adev->gfx.compute_ring[i]; if (!amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr)) { - memset((void *)ring->mqd_ptr, 0, sizeof(struct vi_mqd)); r = gfx_v8_0_kiq_init_queue(ring, ring->mqd_ptr, ring->mqd_gpu_addr); amdgpu_bo_kunmap(ring->mqd_obj); + ring->mqd_ptr = NULL; if (r) return r; } else { -- 2.7.4 _______________________________________________ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx ^ permalink raw reply related [flat|nested] 28+ messages in thread
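The first-init versus GPU-reset split that this patch introduces in gfx_v8_0_kiq_init_queue() reduces to: on first init, build the MQD from scratch and snapshot it; on reset, restore the snapshot and rewind the ring instead of reprogramming everything. A sketch of that control flow — `mqd_sketch`, `init_mqd` and `init_queue` are hypothetical simplified stand-ins invented here (e.g. `init_mqd` stands in for gfx_v8_0_mqd_init(), and the register values are made up):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical stand-in for struct vi_mqd. */
struct mqd_sketch {
	unsigned int state[16];
};

/* Stand-in for gfx_v8_0_mqd_init(): program the MQD from scratch. */
static void init_mqd(struct mqd_sketch *mqd)
{
	memset(mqd, 0, sizeof(*mqd));
	mqd->state[0] = 0xcafe; /* pretend-programmed register image */
}

static void init_queue(struct mqd_sketch *mqd, struct mqd_sketch *backup,
		       unsigned int *ring_wptr, bool in_reset)
{
	if (!in_reset) {
		init_mqd(mqd);
		*backup = *mqd;   /* snapshot the clean MQD for later */
	} else {
		*mqd = *backup;   /* restore the clean snapshot ... */
		*ring_wptr = 0;   /* ... and rewind the ring buffer */
	}
}
```

The point of the design is that after an FLR the MQD in memory is dirty (wptr/rptr advanced, dynamic state changed), so re-running the queue against it fails the ring/IB test; restoring the load-time snapshot puts the queue back in a known-good state.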
[parent not found: <1486546019-31045-11-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org>]
* RE: [PATCH 11/11] drm/amdgpu:fix kiq_resume routine (V2) [not found] ` <1486546019-31045-11-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> @ 2017-02-08 16:43 ` Deucher, Alexander 2017-02-09 1:51 ` Yu, Xiangliang 1 sibling, 0 replies; 28+ messages in thread From: Deucher, Alexander @ 2017-02-08 16:43 UTC (permalink / raw) To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Liu, Monk > -----Original Message----- > From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf > Of Monk Liu > Sent: Wednesday, February 08, 2017 4:27 AM > To: amd-gfx@lists.freedesktop.org > Cc: Liu, Monk > Subject: [PATCH 11/11] drm/amdgpu:fix kiq_resume routine (V2) > > v2: > use in_rest to fix compute ring test failure issue > which occured after FLR/gpu_reset. > > we need backup a clean status of MQD which was created in drv load > stage, and use it in resume stage, otherwise KCQ and KIQ all may > faild in ring/ib test. > > Change-Id: I41be940454a6638e9a8a05f096601eaa1fbebaab > Signed-off-by: Monk Liu <Monk.Liu@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> > --- > drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 2 ++ > drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 44 > ++++++++++++++++++++++-------- > 2 files changed, 35 insertions(+), 11 deletions(-) > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c > b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c > index 5215fc5..afcae15 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c > @@ -2410,6 +2410,7 @@ int amdgpu_sriov_gpu_reset(struct > amdgpu_device *adev, bool voluntary) > > mutex_lock(&adev->virt.lock_reset); > atomic_inc(&adev->gpu_reset_counter); > + adev->gfx.in_reset = true; > > /* block TTM */ > resched = ttm_bo_lock_delayed_workqueue(&adev->mman.bdev); > @@ -2494,6 +2495,7 @@ int amdgpu_sriov_gpu_reset(struct > amdgpu_device *adev, bool voluntary) > dev_info(adev->dev, "GPU reset failed\n"); > } > > + adev->gfx.in_reset = false; > 
mutex_unlock(&adev->virt.lock_reset); > return r; > } > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > index 6584173..1822420 100644 > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > @@ -4877,24 +4877,46 @@ static int gfx_v8_0_kiq_init_queue(struct > amdgpu_ring *ring, > struct amdgpu_kiq *kiq = &adev->gfx.kiq; > uint64_t eop_gpu_addr; > bool is_kiq = (ring->funcs->type == AMDGPU_RING_TYPE_KIQ); > + int mqd_idx = AMDGPU_MAX_COMPUTE_RINGS; > > if (is_kiq) { > eop_gpu_addr = kiq->eop_gpu_addr; > gfx_v8_0_kiq_setting(&kiq->ring); > - } else > + } else { > eop_gpu_addr = adev->gfx.mec.hpd_eop_gpu_addr + > ring->queue * MEC_HPD_SIZE; > + mqd_idx = ring - &adev->gfx.compute_ring[0]; > + } > > - mutex_lock(&adev->srbm_mutex); > - vi_srbm_select(adev, ring->me, ring->pipe, ring->queue, 0); > + if (!adev->gfx.in_reset) { > + memset((void *)mqd, 0, sizeof(*mqd)); > + mutex_lock(&adev->srbm_mutex); > + vi_srbm_select(adev, ring->me, ring->pipe, ring->queue, 0); > + gfx_v8_0_mqd_init(adev, mqd, mqd_gpu_addr, > eop_gpu_addr, ring); > + if (is_kiq) > + gfx_v8_0_kiq_init_register(adev, mqd, ring); > + vi_srbm_select(adev, 0, 0, 0, 0); > + mutex_unlock(&adev->srbm_mutex); > > - gfx_v8_0_mqd_init(adev, mqd, mqd_gpu_addr, eop_gpu_addr, > ring); > + if (adev->gfx.mec.mqd_backup[mqd_idx]) > + memcpy(adev->gfx.mec.mqd_backup[mqd_idx], > mqd, sizeof(*mqd)); > + } else { /* for GPU_RESET case */ > + /* reset MQD to a clean status */ > + if (adev->gfx.mec.mqd_backup[mqd_idx]) > + memcpy(mqd, adev- > >gfx.mec.mqd_backup[mqd_idx], sizeof(*mqd)); > > - if (is_kiq) > - gfx_v8_0_kiq_init_register(adev, mqd, ring); > - > - vi_srbm_select(adev, 0, 0, 0, 0); > - mutex_unlock(&adev->srbm_mutex); > + /* reset ring buffer */ > + ring->wptr = 0; > + amdgpu_ring_clear_ring(ring); > + > + if (is_kiq) { > + mutex_lock(&adev->srbm_mutex); > + vi_srbm_select(adev, ring->me, ring->pipe, ring->queue, > 0); > + 
gfx_v8_0_kiq_init_register(adev, mqd, ring); > + vi_srbm_select(adev, 0, 0, 0, 0); > + mutex_unlock(&adev->srbm_mutex); > + } > + } > > if (is_kiq) > gfx_v8_0_kiq_enable(ring); > @@ -4913,9 +4935,9 @@ static int gfx_v8_0_kiq_resume(struct > amdgpu_device *adev) > > ring = &adev->gfx.kiq.ring; > if (!amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr)) { > - memset((void *)ring->mqd_ptr, 0, sizeof(struct vi_mqd)); > r = gfx_v8_0_kiq_init_queue(ring, ring->mqd_ptr, ring- > >mqd_gpu_addr); > amdgpu_bo_kunmap(ring->mqd_obj); > + ring->mqd_ptr = NULL; > if (r) > return r; > } else { > @@ -4925,9 +4947,9 @@ static int gfx_v8_0_kiq_resume(struct > amdgpu_device *adev) > for (i = 0; i < adev->gfx.num_compute_rings; i++) { > ring = &adev->gfx.compute_ring[i]; > if (!amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring- > >mqd_ptr)) { > - memset((void *)ring->mqd_ptr, 0, sizeof(struct > vi_mqd)); > r = gfx_v8_0_kiq_init_queue(ring, ring->mqd_ptr, > ring->mqd_gpu_addr); > amdgpu_bo_kunmap(ring->mqd_obj); > + ring->mqd_ptr = NULL; > if (r) > return r; > } else { > -- > 2.7.4 > > _______________________________________________ > amd-gfx mailing list > amd-gfx@lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/amd-gfx _______________________________________________ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx ^ permalink raw reply [flat|nested] 28+ messages in thread
* RE: [PATCH 11/11] drm/amdgpu:fix kiq_resume routine (V2) [not found] ` <1486546019-31045-11-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> 2017-02-08 16:43 ` Deucher, Alexander @ 2017-02-09 1:51 ` Yu, Xiangliang 1 sibling, 0 replies; 28+ messages in thread From: Yu, Xiangliang @ 2017-02-09 1:51 UTC (permalink / raw) To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Liu, Monk Reviewed-by: Xiangliang Yu <Xiangliang.Yu@amd.com> Thanks! Xiangliang Yu > -----Original Message----- > From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf > Of Monk Liu > Sent: Wednesday, February 08, 2017 5:27 PM > To: amd-gfx@lists.freedesktop.org > Cc: Liu, Monk <Monk.Liu@amd.com> > Subject: [PATCH 11/11] drm/amdgpu:fix kiq_resume routine (V2) > > v2: > use in_rest to fix compute ring test failure issue which occured after > FLR/gpu_reset. > > we need backup a clean status of MQD which was created in drv load stage, > and use it in resume stage, otherwise KCQ and KIQ all may faild in ring/ib test. 
> > Change-Id: I41be940454a6638e9a8a05f096601eaa1fbebaab > Signed-off-by: Monk Liu <Monk.Liu@amd.com> > --- > drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 2 ++ > drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 44 > ++++++++++++++++++++++-------- > 2 files changed, 35 insertions(+), 11 deletions(-) > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c > b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c > index 5215fc5..afcae15 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c > @@ -2410,6 +2410,7 @@ int amdgpu_sriov_gpu_reset(struct > amdgpu_device *adev, bool voluntary) > > mutex_lock(&adev->virt.lock_reset); > atomic_inc(&adev->gpu_reset_counter); > + adev->gfx.in_reset = true; > > /* block TTM */ > resched = ttm_bo_lock_delayed_workqueue(&adev->mman.bdev); > @@ -2494,6 +2495,7 @@ int amdgpu_sriov_gpu_reset(struct > amdgpu_device *adev, bool voluntary) > dev_info(adev->dev, "GPU reset failed\n"); > } > > + adev->gfx.in_reset = false; > mutex_unlock(&adev->virt.lock_reset); > return r; > } > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > index 6584173..1822420 100644 > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > @@ -4877,24 +4877,46 @@ static int gfx_v8_0_kiq_init_queue(struct > amdgpu_ring *ring, > struct amdgpu_kiq *kiq = &adev->gfx.kiq; > uint64_t eop_gpu_addr; > bool is_kiq = (ring->funcs->type == AMDGPU_RING_TYPE_KIQ); > + int mqd_idx = AMDGPU_MAX_COMPUTE_RINGS; > > if (is_kiq) { > eop_gpu_addr = kiq->eop_gpu_addr; > gfx_v8_0_kiq_setting(&kiq->ring); > - } else > + } else { > eop_gpu_addr = adev->gfx.mec.hpd_eop_gpu_addr + > ring->queue * MEC_HPD_SIZE; > + mqd_idx = ring - &adev->gfx.compute_ring[0]; > + } > > - mutex_lock(&adev->srbm_mutex); > - vi_srbm_select(adev, ring->me, ring->pipe, ring->queue, 0); > + if (!adev->gfx.in_reset) { > + memset((void *)mqd, 0, sizeof(*mqd)); > + mutex_lock(&adev->srbm_mutex); > + 
vi_srbm_select(adev, ring->me, ring->pipe, ring->queue, 0); > + gfx_v8_0_mqd_init(adev, mqd, mqd_gpu_addr, > eop_gpu_addr, ring); > + if (is_kiq) > + gfx_v8_0_kiq_init_register(adev, mqd, ring); > + vi_srbm_select(adev, 0, 0, 0, 0); > + mutex_unlock(&adev->srbm_mutex); > > - gfx_v8_0_mqd_init(adev, mqd, mqd_gpu_addr, eop_gpu_addr, > ring); > + if (adev->gfx.mec.mqd_backup[mqd_idx]) > + memcpy(adev->gfx.mec.mqd_backup[mqd_idx], > mqd, sizeof(*mqd)); > + } else { /* for GPU_RESET case */ > + /* reset MQD to a clean status */ > + if (adev->gfx.mec.mqd_backup[mqd_idx]) > + memcpy(mqd, adev- > >gfx.mec.mqd_backup[mqd_idx], sizeof(*mqd)); > > - if (is_kiq) > - gfx_v8_0_kiq_init_register(adev, mqd, ring); > - > - vi_srbm_select(adev, 0, 0, 0, 0); > - mutex_unlock(&adev->srbm_mutex); > + /* reset ring buffer */ > + ring->wptr = 0; > + amdgpu_ring_clear_ring(ring); > + > + if (is_kiq) { > + mutex_lock(&adev->srbm_mutex); > + vi_srbm_select(adev, ring->me, ring->pipe, ring->queue, > 0); > + gfx_v8_0_kiq_init_register(adev, mqd, ring); > + vi_srbm_select(adev, 0, 0, 0, 0); > + mutex_unlock(&adev->srbm_mutex); > + } > + } > > if (is_kiq) > gfx_v8_0_kiq_enable(ring); > @@ -4913,9 +4935,9 @@ static int gfx_v8_0_kiq_resume(struct > amdgpu_device *adev) > > ring = &adev->gfx.kiq.ring; > if (!amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr)) { > - memset((void *)ring->mqd_ptr, 0, sizeof(struct vi_mqd)); > r = gfx_v8_0_kiq_init_queue(ring, ring->mqd_ptr, ring- > >mqd_gpu_addr); > amdgpu_bo_kunmap(ring->mqd_obj); > + ring->mqd_ptr = NULL; > if (r) > return r; > } else { > @@ -4925,9 +4947,9 @@ static int gfx_v8_0_kiq_resume(struct > amdgpu_device *adev) > for (i = 0; i < adev->gfx.num_compute_rings; i++) { > ring = &adev->gfx.compute_ring[i]; > if (!amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring- > >mqd_ptr)) { > - memset((void *)ring->mqd_ptr, 0, sizeof(struct > vi_mqd)); > r = gfx_v8_0_kiq_init_queue(ring, ring->mqd_ptr, > ring->mqd_gpu_addr); > amdgpu_bo_kunmap(ring->mqd_obj); 
> + ring->mqd_ptr = NULL; > if (r) > return r; > } else { > -- > 2.7.4 > > _______________________________________________ > amd-gfx mailing list > amd-gfx@lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/amd-gfx _______________________________________________ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH 01/11] drm/amdgpu:use MACRO like other places [not found] ` <1486546019-31045-1-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> ` (9 preceding siblings ...) 2017-02-08 9:26 ` [PATCH 11/11] drm/amdgpu:fix kiq_resume routine (V2) Monk Liu @ 2017-02-08 9:34 ` Michel Dänzer 2017-02-08 16:35 ` Deucher, Alexander 11 siblings, 0 replies; 28+ messages in thread From: Michel Dänzer @ 2017-02-08 9:34 UTC (permalink / raw) To: Monk Liu; +Cc: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW On 08/02/17 06:26 PM, Monk Liu wrote: > > + WREG32(temp + i, > + unique_indices[i] & 0x3FFFF); > + WREG32(data + i, > + unique_indices[i] >> 20); Use a single line in both cases (or fix the indentation of the second lines to align with the opening parens). -- Earthling Michel Dänzer | http://www.amd.com Libre software enthusiast | Mesa and X developer _______________________________________________ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx ^ permalink raw reply [flat|nested] 28+ messages in thread
* RE: [PATCH 01/11] drm/amdgpu:use MACRO like other places [not found] ` <1486546019-31045-1-git-send-email-Monk.Liu-5C7GfCeVMHo@public.gmane.org> ` (10 preceding siblings ...) 2017-02-08 9:34 ` [PATCH 01/11] drm/amdgpu:use MACRO like other places Michel Dänzer @ 2017-02-08 16:35 ` Deucher, Alexander 11 siblings, 0 replies; 28+ messages in thread From: Deucher, Alexander @ 2017-02-08 16:35 UTC (permalink / raw) To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Liu, Monk > -----Original Message----- > From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf > Of Monk Liu > Sent: Wednesday, February 08, 2017 4:27 AM > To: amd-gfx@lists.freedesktop.org > Cc: Liu, Monk > Subject: [PATCH 01/11] drm/amdgpu:use MACRO like other places > > Change-Id: Ica8f86577a50d817119de4b4fb95068dc72652a9 > Signed-off-by: Monk Liu <Monk.Liu@amd.com> With Michel's comments addressed, the patch is: Reviewed-by: Alex Deucher <alexander.deucher@amd.com> > --- > drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 8 ++++---- > 1 file changed, 4 insertions(+), 4 deletions(-) > > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > index 6734e55..8f545992 100644 > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > @@ -4068,10 +4068,10 @@ static int gfx_v8_0_init_save_restore_list(struct > amdgpu_device *adev) > data = mmRLC_SRM_INDEX_CNTL_DATA_0; > for (i = 0; i < sizeof(unique_indices) / sizeof(int); i++) { > if (unique_indices[i] != 0) { > - amdgpu_mm_wreg(adev, temp + i, > - unique_indices[i] & 0x3FFFF, false); > - amdgpu_mm_wreg(adev, data + i, > - unique_indices[i] >> 20, false); > + WREG32(temp + i, > + unique_indices[i] & 0x3FFFF); > + WREG32(data + i, > + unique_indices[i] >> 20); > } > } > kfree(register_list_format); > -- > 2.7.4 > > _______________________________________________ > amd-gfx mailing list > amd-gfx@lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/amd-gfx 
_______________________________________________ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx ^ permalink raw reply [flat|nested] 28+ messages in thread