* [PATCH v2] drm/amd/amdgpu: Recovery vcn instance iterate.
@ 2021-07-13 10:30 Peng Ju Zhou
2021-07-14 14:49 ` Alex Deucher
0 siblings, 1 reply; 6+ messages in thread
From: Peng Ju Zhou @ 2021-07-13 10:30 UTC (permalink / raw)
To: amd-gfx
The previous logic recorded the number of valid VCN instances for use
under SRIOV, which was hard to get right because VCN access is based on
the index of the VCN instance.
Check whether a VCN instance is enabled before doing instance init.
Signed-off-by: Peng Ju Zhou <PengJu.Zhou@amd.com>
---
drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c | 33 ++++++++++++++++-----------
1 file changed, 20 insertions(+), 13 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
index c3580de3ea9c..d11fea2c9d90 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
@@ -88,9 +88,7 @@ static int vcn_v3_0_early_init(void *handle)
int i;
if (amdgpu_sriov_vf(adev)) {
- for (i = 0; i < VCN_INSTANCES_SIENNA_CICHLID; i++)
- if (amdgpu_vcn_is_disabled_vcn(adev, VCN_DECODE_RING, i))
- adev->vcn.num_vcn_inst++;
+ adev->vcn.num_vcn_inst = VCN_INSTANCES_SIENNA_CICHLID;
adev->vcn.harvest_config = 0;
adev->vcn.num_enc_rings = 1;
@@ -151,8 +149,7 @@ static int vcn_v3_0_sw_init(void *handle)
adev->firmware.fw_size +=
ALIGN(le32_to_cpu(hdr->ucode_size_bytes), PAGE_SIZE);
- if ((adev->vcn.num_vcn_inst == VCN_INSTANCES_SIENNA_CICHLID) ||
- (amdgpu_sriov_vf(adev) && adev->asic_type == CHIP_SIENNA_CICHLID)) {
+ if (adev->vcn.num_vcn_inst == VCN_INSTANCES_SIENNA_CICHLID) {
adev->firmware.ucode[AMDGPU_UCODE_ID_VCN1].ucode_id = AMDGPU_UCODE_ID_VCN1;
adev->firmware.ucode[AMDGPU_UCODE_ID_VCN1].fw = adev->vcn.fw;
adev->firmware.fw_size +=
@@ -322,18 +319,28 @@ static int vcn_v3_0_hw_init(void *handle)
continue;
ring = &adev->vcn.inst[i].ring_dec;
- ring->wptr = 0;
- ring->wptr_old = 0;
- vcn_v3_0_dec_ring_set_wptr(ring);
- ring->sched.ready = true;
-
- for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
- ring = &adev->vcn.inst[i].ring_enc[j];
+ if (amdgpu_vcn_is_disabled_vcn(adev, VCN_DECODE_RING, i)) {
+ ring->sched.ready = false;
+ dev_info(adev->dev, "ring %s is disabled by hypervisor\n", ring->name);
+ } else {
ring->wptr = 0;
ring->wptr_old = 0;
- vcn_v3_0_enc_ring_set_wptr(ring);
+ vcn_v3_0_dec_ring_set_wptr(ring);
ring->sched.ready = true;
}
+
+ for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
+ ring = &adev->vcn.inst[i].ring_enc[j];
+ if (amdgpu_vcn_is_disabled_vcn(adev, VCN_ENCODE_RING, i)) {
+ ring->sched.ready = false;
+ dev_info(adev->dev, "ring %s is disabled by hypervisor\n", ring->name);
+ } else {
+ ring->wptr = 0;
+ ring->wptr_old = 0;
+ vcn_v3_0_enc_ring_set_wptr(ring);
+ ring->sched.ready = true;
+ }
+ }
}
} else {
for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
--
2.17.1
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
^ permalink raw reply related [flat|nested] 6+ messages in thread
* Re: [PATCH v2] drm/amd/amdgpu: Recovery vcn instance iterate.
2021-07-13 10:30 [PATCH v2] drm/amd/amdgpu: Recovery vcn instance iterate Peng Ju Zhou
@ 2021-07-14 14:49 ` Alex Deucher
2021-07-14 23:54 ` Liu, Monk
0 siblings, 1 reply; 6+ messages in thread
From: Alex Deucher @ 2021-07-14 14:49 UTC (permalink / raw)
To: Peng Ju Zhou; +Cc: amd-gfx list
On Tue, Jul 13, 2021 at 6:31 AM Peng Ju Zhou <PengJu.Zhou@amd.com> wrote:
>
> The previous logic recorded the number of valid VCN instances for use
> under SRIOV, which was hard to get right because VCN access is based on
> the index of the VCN instance.
>
> Check whether a VCN instance is enabled before doing instance init.
>
> Signed-off-by: Peng Ju Zhou <PengJu.Zhou@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
^ permalink raw reply [flat|nested] 6+ messages in thread
* RE: [PATCH v2] drm/amd/amdgpu: Recovery vcn instance iterate.
2021-07-14 14:49 ` Alex Deucher
@ 2021-07-14 23:54 ` Liu, Monk
2021-07-16 2:14 ` Zhou, Peng Ju
0 siblings, 1 reply; 6+ messages in thread
From: Liu, Monk @ 2021-07-14 23:54 UTC (permalink / raw)
To: Alex Deucher, Zhou, Peng Ju, Liu, Leo; +Cc: amd-gfx list
[AMD Official Use Only]
Reviewed-by: Monk Liu <monk.liu@amd.com>
You might need @Liu, Leo's review as well
Thanks
------------------------------------------
Monk Liu | Cloud-GPU Core team
------------------------------------------
^ permalink raw reply [flat|nested] 6+ messages in thread
* RE: [PATCH v2] drm/amd/amdgpu: Recovery vcn instance iterate.
2021-07-14 23:54 ` Liu, Monk
@ 2021-07-16 2:14 ` Zhou, Peng Ju
2021-07-19 3:35 ` Zhou, Peng Ju
2021-07-20 14:50 ` Leo Liu
0 siblings, 2 replies; 6+ messages in thread
From: Zhou, Peng Ju @ 2021-07-16 2:14 UTC (permalink / raw)
To: Liu, Monk, Alex Deucher, Liu, Leo; +Cc: amd-gfx list
[AMD Official Use Only]
Hi @Liu, Leo
Can you help to review this patch?
Monk and Alex have reviewed it.
----------------------------------------------------------------------
BW
Pengju Zhou
^ permalink raw reply [flat|nested] 6+ messages in thread
* RE: [PATCH v2] drm/amd/amdgpu: Recovery vcn instance iterate.
2021-07-16 2:14 ` Zhou, Peng Ju
@ 2021-07-19 3:35 ` Zhou, Peng Ju
2021-07-20 14:50 ` Leo Liu
1 sibling, 0 replies; 6+ messages in thread
From: Zhou, Peng Ju @ 2021-07-19 3:35 UTC (permalink / raw)
To: Zhou, Peng Ju, Liu, Monk, Alex Deucher, Liu, Leo; +Cc: Wang, Yin, amd-gfx list
[AMD Official Use Only]
Hi Leo
Can you help to review this patch?
----------------------------------------------------------------------
BW
Pengju Zhou
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH v2] drm/amd/amdgpu: Recovery vcn instance iterate.
2021-07-16 2:14 ` Zhou, Peng Ju
2021-07-19 3:35 ` Zhou, Peng Ju
@ 2021-07-20 14:50 ` Leo Liu
1 sibling, 0 replies; 6+ messages in thread
From: Leo Liu @ 2021-07-20 14:50 UTC (permalink / raw)
To: Zhou, Peng Ju, Liu, Monk, Alex Deucher; +Cc: amd-gfx list
It looks good to me for the non-SRIOV part.
Regards,
Leo
^ permalink raw reply [flat|nested] 6+ messages in thread
end of thread, other threads:[~2021-07-20 14:50 UTC | newest]
Thread overview: 6+ messages
-- links below jump to the message on this page --
2021-07-13 10:30 [PATCH v2] drm/amd/amdgpu: Recovery vcn instance iterate Peng Ju Zhou
2021-07-14 14:49 ` Alex Deucher
2021-07-14 23:54 ` Liu, Monk
2021-07-16 2:14 ` Zhou, Peng Ju
2021-07-19 3:35 ` Zhou, Peng Ju
2021-07-20 14:50 ` Leo Liu