* [PATCH 1/5] drm/sched:add new priority level
@ 2021-08-24 5:55 Satyajit Sahu
2021-08-24 5:55 ` [PATCH 2/5] drm/amdgpu: map user set priority to drm sched priority Satyajit Sahu
` (4 more replies)
0 siblings, 5 replies; 17+ messages in thread
From: Satyajit Sahu @ 2021-08-24 5:55 UTC (permalink / raw)
To: amd-gfx
Cc: leo.liu, Alexander.Deucher, christian.koenig, shashank.sharma,
nirmoy.das, Satyajit Sahu
Add a new priority level, DRM_SCHED_PRIORITY_VERY_HIGH.
Signed-off-by: Satyajit Sahu <satyajit.sahu@amd.com>
---
include/drm/gpu_scheduler.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index d18af49fd009..d0e5e234da5f 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -40,6 +40,7 @@ enum drm_sched_priority {
DRM_SCHED_PRIORITY_MIN,
DRM_SCHED_PRIORITY_NORMAL,
DRM_SCHED_PRIORITY_HIGH,
+ DRM_SCHED_PRIORITY_VERY_HIGH,
DRM_SCHED_PRIORITY_KERNEL,
DRM_SCHED_PRIORITY_COUNT,
--
2.25.1
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH 2/5] drm/amdgpu: map user set priority to drm sched priority
2021-08-24 5:55 [PATCH 1/5] drm/sched:add new priority level Satyajit Sahu
@ 2021-08-24 5:55 ` Satyajit Sahu
2021-08-24 5:55 ` [PATCH 3/5] drm/amdgpu/vce:set vce ring priority level Satyajit Sahu
` (3 subsequent siblings)
4 siblings, 0 replies; 17+ messages in thread
From: Satyajit Sahu @ 2021-08-24 5:55 UTC (permalink / raw)
To: amd-gfx
Cc: leo.liu, Alexander.Deucher, christian.koenig, shashank.sharma,
nirmoy.das, Satyajit Sahu
Map the UMD-set priority level properly to a drm sched priority.
Signed-off-by: Satyajit Sahu <satyajit.sahu@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
index b7d861ed5284..6ae0b723ffd7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
@@ -37,7 +37,7 @@ int amdgpu_to_sched_priority(int amdgpu_priority,
{
switch (amdgpu_priority) {
case AMDGPU_CTX_PRIORITY_VERY_HIGH:
- *prio = DRM_SCHED_PRIORITY_HIGH;
+ *prio = DRM_SCHED_PRIORITY_VERY_HIGH;
break;
case AMDGPU_CTX_PRIORITY_HIGH:
*prio = DRM_SCHED_PRIORITY_HIGH;
--
2.25.1
* [PATCH 3/5] drm/amdgpu/vce:set vce ring priority level
2021-08-24 5:55 [PATCH 1/5] drm/sched:add new priority level Satyajit Sahu
2021-08-24 5:55 ` [PATCH 2/5] drm/amdgpu: map user set priority to drm sched priority Satyajit Sahu
@ 2021-08-24 5:55 ` Satyajit Sahu
2021-08-24 6:09 ` Christian König
2021-08-24 5:55 ` [PATCH 4/5] drm/amdgpu/vcn:set vcn encode " Satyajit Sahu
` (2 subsequent siblings)
4 siblings, 1 reply; 17+ messages in thread
From: Satyajit Sahu @ 2021-08-24 5:55 UTC (permalink / raw)
To: amd-gfx
Cc: leo.liu, Alexander.Deucher, christian.koenig, shashank.sharma,
nirmoy.das, Satyajit Sahu
There are multiple rings available in VCE. Map each ring
to a different priority.
Signed-off-by: Satyajit Sahu <satyajit.sahu@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c | 14 ++++++++++++++
drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h | 15 +++++++++++++++
2 files changed, 29 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
index 1ae7f824adc7..379f27f14b39 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
@@ -1168,3 +1168,17 @@ int amdgpu_vce_ring_test_ib(struct amdgpu_ring *ring, long timeout)
amdgpu_bo_free_kernel(&bo, NULL, NULL);
return r;
}
+
+enum vce_enc_ring_priority get_vce_ring_prio(int index)
+{
+ switch (index) {
+ case AMDGPU_VCE_GENERAL_PURPOSE:
+ return AMDGPU_VCE_ENC_PRIO_NORMAL;
+ case AMDGPU_VCE_LOW_LATENCY:
+ return AMDGPU_VCE_ENC_PRIO_HIGH;
+ case AMDGPU_VCE_REALTIME:
+ return AMDGPU_VCE_ENC_PRIO_VERY_HIGH;
+ default:
+ return AMDGPU_VCE_ENC_PRIO_NORMAL;
+ }
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
index d6d83a3ec803..3cb34ee3e4b6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
@@ -32,6 +32,19 @@
#define AMDGPU_VCE_FW_53_45 ((53 << 24) | (45 << 16))
+enum vce_enc_ring_priority {
+ AMDGPU_VCE_ENC_PRIO_NORMAL = 1,
+ AMDGPU_VCE_ENC_PRIO_HIGH,
+ AMDGPU_VCE_ENC_PRIO_VERY_HIGH,
+ AMDGPU_VCE_ENC_PRIO_MAX
+};
+
+enum vce_enc_ring_type {
+ AMDGPU_VCE_GENERAL_PURPOSE,
+ AMDGPU_VCE_LOW_LATENCY,
+ AMDGPU_VCE_REALTIME
+};
+
struct amdgpu_vce {
struct amdgpu_bo *vcpu_bo;
uint64_t gpu_addr;
@@ -72,4 +85,6 @@ void amdgpu_vce_ring_end_use(struct amdgpu_ring *ring);
unsigned amdgpu_vce_ring_get_emit_ib_size(struct amdgpu_ring *ring);
unsigned amdgpu_vce_ring_get_dma_frame_size(struct amdgpu_ring *ring);
+enum vce_enc_ring_priority get_vce_ring_prio(int index);
+
#endif
--
2.25.1
* [PATCH 4/5] drm/amdgpu/vcn:set vcn encode ring priority level
2021-08-24 5:55 [PATCH 1/5] drm/sched:add new priority level Satyajit Sahu
2021-08-24 5:55 ` [PATCH 2/5] drm/amdgpu: map user set priority to drm sched priority Satyajit Sahu
2021-08-24 5:55 ` [PATCH 3/5] drm/amdgpu/vce:set vce ring priority level Satyajit Sahu
@ 2021-08-24 5:55 ` Satyajit Sahu
2021-08-24 5:55 ` [PATCH 5/5] drm/amdgpu/vcn/vce:schedule encode job based on priority Satyajit Sahu
2021-08-24 6:10 ` [PATCH 1/5] drm/sched:add new priority level Christian König
4 siblings, 0 replies; 17+ messages in thread
From: Satyajit Sahu @ 2021-08-24 5:55 UTC (permalink / raw)
To: amd-gfx
Cc: leo.liu, Alexander.Deucher, christian.koenig, shashank.sharma,
nirmoy.das, Satyajit Sahu
There are multiple rings available in VCN encode. Map each ring
to a different priority.
Signed-off-by: Satyajit Sahu <satyajit.sahu@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 14 ++++++++++++++
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h | 14 ++++++++++++++
2 files changed, 28 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
index 6780df0fb265..14075f2c33f0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
@@ -951,3 +951,17 @@ int amdgpu_vcn_enc_ring_test_ib(struct amdgpu_ring *ring, long timeout)
return r;
}
+
+enum vcn_enc_ring_priority get_vcn_enc_ring_prio(int index)
+{
+ switch (index) {
+ case AMDGPU_VCN_ENC_GENERAL_PURPOSE:
+ return AMDGPU_VCN_ENC_PRIO_NORMAL;
+ case AMDGPU_VCN_ENC_LOW_LATENCY:
+ return AMDGPU_VCN_ENC_PRIO_HIGH;
+ case AMDGPU_VCN_ENC_REALTIME:
+ return AMDGPU_VCN_ENC_PRIO_VERY_HIGH;
+ default:
+ return AMDGPU_VCN_ENC_PRIO_NORMAL;
+ }
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
index d74c62b49795..14dcf23fcf0a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
@@ -290,6 +290,19 @@ enum vcn_ring_type {
VCN_UNIFIED_RING,
};
+enum vcn_enc_ring_priority {
+ AMDGPU_VCN_ENC_PRIO_NORMAL = 1,
+ AMDGPU_VCN_ENC_PRIO_HIGH,
+ AMDGPU_VCN_ENC_PRIO_VERY_HIGH,
+ AMDGPU_VCN_ENC_PRIO_MAX
+};
+
+enum vcn_enc_ring_type {
+ AMDGPU_VCN_ENC_GENERAL_PURPOSE,
+ AMDGPU_VCN_ENC_LOW_LATENCY,
+ AMDGPU_VCN_ENC_REALTIME
+};
+
int amdgpu_vcn_sw_init(struct amdgpu_device *adev);
int amdgpu_vcn_sw_fini(struct amdgpu_device *adev);
int amdgpu_vcn_suspend(struct amdgpu_device *adev);
@@ -307,5 +320,6 @@ int amdgpu_vcn_dec_sw_ring_test_ib(struct amdgpu_ring *ring, long timeout);
int amdgpu_vcn_enc_ring_test_ring(struct amdgpu_ring *ring);
int amdgpu_vcn_enc_ring_test_ib(struct amdgpu_ring *ring, long timeout);
+enum vcn_enc_ring_priority get_vcn_enc_ring_prio(int index);
#endif
--
2.25.1
* [PATCH 5/5] drm/amdgpu/vcn/vce:schedule encode job based on priority
2021-08-24 5:55 [PATCH 1/5] drm/sched:add new priority level Satyajit Sahu
` (2 preceding siblings ...)
2021-08-24 5:55 ` [PATCH 4/5] drm/amdgpu/vcn:set vcn encode " Satyajit Sahu
@ 2021-08-24 5:55 ` Satyajit Sahu
2021-08-24 6:14 ` Christian König
2021-08-24 6:10 ` [PATCH 1/5] drm/sched:add new priority level Christian König
4 siblings, 1 reply; 17+ messages in thread
From: Satyajit Sahu @ 2021-08-24 5:55 UTC (permalink / raw)
To: amd-gfx
Cc: leo.liu, Alexander.Deucher, christian.koenig, shashank.sharma,
nirmoy.das, Satyajit Sahu
Schedule encode jobs on the VCE/VCN encode
rings based on the priority set by the UMD.
Signed-off-by: Satyajit Sahu <satyajit.sahu@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 40 +++++++++++++++++++++++--
drivers/gpu/drm/amd/amdgpu/vce_v2_0.c | 4 ++-
drivers/gpu/drm/amd/amdgpu/vce_v3_0.c | 4 ++-
drivers/gpu/drm/amd/amdgpu/vce_v4_0.c | 4 ++-
drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c | 4 ++-
drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c | 4 ++-
drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c | 4 ++-
drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c | 5 ++--
8 files changed, 58 insertions(+), 11 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index e7a010b7ca1f..adc11bb81787 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -73,15 +73,49 @@ static enum gfx_pipe_priority amdgpu_ctx_sched_prio_to_compute_prio(enum drm_sch
}
}
+static enum vcn_enc_ring_priority amdgpu_ctx_sched_prio_to_vcn_prio(enum drm_sched_priority prio)
+{
+ switch (prio) {
+ case DRM_SCHED_PRIORITY_HIGH:
+ return AMDGPU_VCN_ENC_PRIO_HIGH;
+ case DRM_SCHED_PRIORITY_VERY_HIGH:
+ return AMDGPU_VCN_ENC_PRIO_VERY_HIGH;
+ default:
+ return AMDGPU_VCN_ENC_PRIO_NORMAL;
+ }
+}
+
+static enum vce_enc_ring_priority amdgpu_ctx_sched_prio_to_vce_prio(enum drm_sched_priority prio)
+{
+ switch (prio) {
+ case DRM_SCHED_PRIORITY_HIGH:
+ return AMDGPU_VCE_ENC_PRIO_HIGH;
+ case DRM_SCHED_PRIORITY_VERY_HIGH:
+ return AMDGPU_VCE_ENC_PRIO_VERY_HIGH;
+ default:
+ return AMDGPU_VCE_ENC_PRIO_NORMAL;
+ }
+}
+
static unsigned int amdgpu_ctx_prio_sched_to_hw(struct amdgpu_device *adev,
enum drm_sched_priority prio,
u32 hw_ip)
{
unsigned int hw_prio;
- hw_prio = (hw_ip == AMDGPU_HW_IP_COMPUTE) ?
- amdgpu_ctx_sched_prio_to_compute_prio(prio) :
- AMDGPU_RING_PRIO_DEFAULT;
+ switch (hw_ip) {
+ case AMDGPU_HW_IP_COMPUTE:
+ hw_prio = amdgpu_ctx_sched_prio_to_compute_prio(prio);
+ break;
+ case AMDGPU_HW_IP_VCN_ENC:
+ hw_prio = amdgpu_ctx_sched_prio_to_vcn_prio(prio);
+ break;
+ case AMDGPU_HW_IP_VCE:
+ hw_prio = amdgpu_ctx_sched_prio_to_vce_prio(prio);
+ break;
+ default:
+ hw_prio = AMDGPU_RING_PRIO_DEFAULT;
+ }
hw_ip = array_index_nospec(hw_ip, AMDGPU_HW_IP_NUM);
if (adev->gpu_sched[hw_ip][hw_prio].num_scheds == 0)
hw_prio = AMDGPU_RING_PRIO_DEFAULT;
diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v2_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v2_0.c
index c7d28c169be5..2b6b7f1a77b9 100644
--- a/drivers/gpu/drm/amd/amdgpu/vce_v2_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vce_v2_0.c
@@ -431,10 +431,12 @@ static int vce_v2_0_sw_init(void *handle)
return r;
for (i = 0; i < adev->vce.num_rings; i++) {
+ unsigned int hw_prio = get_vce_ring_prio(i);
+
ring = &adev->vce.ring[i];
sprintf(ring->name, "vce%d", i);
r = amdgpu_ring_init(adev, ring, 512, &adev->vce.irq, 0,
- AMDGPU_RING_PRIO_DEFAULT, NULL);
+ hw_prio, NULL);
if (r)
return r;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
index 3b82fb289ef6..5ce182a837f3 100644
--- a/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
@@ -440,10 +440,12 @@ static int vce_v3_0_sw_init(void *handle)
return r;
for (i = 0; i < adev->vce.num_rings; i++) {
+ unsigned int hw_prio = get_vce_ring_prio(i);
+
ring = &adev->vce.ring[i];
sprintf(ring->name, "vce%d", i);
r = amdgpu_ring_init(adev, ring, 512, &adev->vce.irq, 0,
- AMDGPU_RING_PRIO_DEFAULT, NULL);
+ hw_prio, NULL);
if (r)
return r;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
index 90910d19db12..c085defaabfe 100644
--- a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
@@ -463,6 +463,8 @@ static int vce_v4_0_sw_init(void *handle)
}
for (i = 0; i < adev->vce.num_rings; i++) {
+ unsigned int hw_prio = get_vce_ring_prio(i);
+
ring = &adev->vce.ring[i];
sprintf(ring->name, "vce%d", i);
if (amdgpu_sriov_vf(adev)) {
@@ -478,7 +480,7 @@ static int vce_v4_0_sw_init(void *handle)
ring->doorbell_index = adev->doorbell_index.uvd_vce.vce_ring2_3 * 2 + 1;
}
r = amdgpu_ring_init(adev, ring, 512, &adev->vce.irq, 0,
- AMDGPU_RING_PRIO_DEFAULT, NULL);
+ hw_prio, NULL);
if (r)
return r;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
index 284bb42d6c86..a41b2c40487e 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
@@ -145,10 +145,12 @@ static int vcn_v1_0_sw_init(void *handle)
SOC15_REG_OFFSET(UVD, 0, mmUVD_NO_OP);
for (i = 0; i < adev->vcn.num_enc_rings; ++i) {
+ unsigned int hw_prio = get_vcn_enc_ring_prio(i);
+
ring = &adev->vcn.inst->ring_enc[i];
sprintf(ring->name, "vcn_enc%d", i);
r = amdgpu_ring_init(adev, ring, 512, &adev->vcn.inst->irq, 0,
- AMDGPU_RING_PRIO_DEFAULT, NULL);
+ hw_prio, NULL);
if (r)
return r;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
index 8af567c546db..9729a383786b 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
@@ -159,6 +159,8 @@ static int vcn_v2_0_sw_init(void *handle)
adev->vcn.inst->external.nop = SOC15_REG_OFFSET(UVD, 0, mmUVD_NO_OP);
for (i = 0; i < adev->vcn.num_enc_rings; ++i) {
+ unsigned int hw_prio = get_vcn_enc_ring_prio(i);
+
ring = &adev->vcn.inst->ring_enc[i];
ring->use_doorbell = true;
if (!amdgpu_sriov_vf(adev))
@@ -167,7 +169,7 @@ static int vcn_v2_0_sw_init(void *handle)
ring->doorbell_index = (adev->doorbell_index.vcn.vcn_ring0_1 << 1) + 1 + i;
sprintf(ring->name, "vcn_enc%d", i);
r = amdgpu_ring_init(adev, ring, 512, &adev->vcn.inst->irq, 0,
- AMDGPU_RING_PRIO_DEFAULT, NULL);
+ hw_prio, NULL);
if (r)
return r;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
index 888b17d84691..9eca70d3ff30 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
@@ -194,6 +194,8 @@ static int vcn_v2_5_sw_init(void *handle)
return r;
for (i = 0; i < adev->vcn.num_enc_rings; ++i) {
+ unsigned int hw_prio = get_vcn_enc_ring_prio(i);
+
ring = &adev->vcn.inst[j].ring_enc[i];
ring->use_doorbell = true;
@@ -203,7 +205,7 @@ static int vcn_v2_5_sw_init(void *handle)
sprintf(ring->name, "vcn_enc_%d.%d", j, i);
r = amdgpu_ring_init(adev, ring, 512,
&adev->vcn.inst[j].irq, 0,
- AMDGPU_RING_PRIO_DEFAULT, NULL);
+ hw_prio, NULL);
if (r)
return r;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
index 47d4f04cbd69..926c9f4bfc21 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
@@ -227,6 +227,8 @@ static int vcn_v3_0_sw_init(void *handle)
return r;
for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
+ unsigned int hw_prio = get_vcn_enc_ring_prio(j);
+
/* VCN ENC TRAP */
r = amdgpu_irq_add_id(adev, amdgpu_ih_clientid_vcns[i],
j + VCN_2_0__SRCID__UVD_ENC_GENERAL_PURPOSE, &adev->vcn.inst[i].irq);
@@ -242,8 +244,7 @@ static int vcn_v3_0_sw_init(void *handle)
}
sprintf(ring->name, "vcn_enc_%d.%d", i, j);
r = amdgpu_ring_init(adev, ring, 512, &adev->vcn.inst[i].irq, 0,
- AMDGPU_RING_PRIO_DEFAULT,
- &adev->vcn.inst[i].sched_score);
+ hw_prio, &adev->vcn.inst[i].sched_score);
if (r)
return r;
}
--
2.25.1
* Re: [PATCH 3/5] drm/amdgpu/vce:set vce ring priority level
2021-08-24 5:55 ` [PATCH 3/5] drm/amdgpu/vce:set vce ring priority level Satyajit Sahu
@ 2021-08-24 6:09 ` Christian König
0 siblings, 0 replies; 17+ messages in thread
From: Christian König @ 2021-08-24 6:09 UTC (permalink / raw)
To: Satyajit Sahu, amd-gfx
Cc: leo.liu, Alexander.Deucher, shashank.sharma, nirmoy.das
Am 24.08.21 um 07:55 schrieb Satyajit Sahu:
> There are multiple rings available in VCE. Map each ring
> to different priority.
>
> Signed-off-by: Satyajit Sahu <satyajit.sahu@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c | 14 ++++++++++++++
> drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h | 15 +++++++++++++++
> 2 files changed, 29 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
> index 1ae7f824adc7..379f27f14b39 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
> @@ -1168,3 +1168,17 @@ int amdgpu_vce_ring_test_ib(struct amdgpu_ring *ring, long timeout)
> amdgpu_bo_free_kernel(&bo, NULL, NULL);
> return r;
> }
> +
> +enum vce_enc_ring_priority get_vce_ring_prio(int index)
Please name the function amdgpu_vce_...
Same for VCN as well.
Regards,
Christian.
> +{
> + switch(index) {
> + case AMDGPU_VCE_GENERAL_PURPOSE:
> + return AMDGPU_VCE_ENC_PRIO_NORMAL;
> + case AMDGPU_VCE_LOW_LATENCY:
> + return AMDGPU_VCE_ENC_PRIO_HIGH;
> + case AMDGPU_VCE_REALTIME:
> + return AMDGPU_VCE_ENC_PRIO_VERY_HIGH;
> + default:
> + return AMDGPU_VCE_ENC_PRIO_NORMAL;
> + }
> +}
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
> index d6d83a3ec803..3cb34ee3e4b6 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
> @@ -32,6 +32,19 @@
>
> #define AMDGPU_VCE_FW_53_45 ((53 << 24) | (45 << 16))
>
> +enum vce_enc_ring_priority {
> + AMDGPU_VCE_ENC_PRIO_NORMAL = 1,
> + AMDGPU_VCE_ENC_PRIO_HIGH,
> + AMDGPU_VCE_ENC_PRIO_VERY_HIGH,
> + AMDGPU_VCE_ENC_PRIO_MAX
> +};
> +
> +enum vce_enc_ring_type {
> + AMDGPU_VCE_GENERAL_PURPOSE,
> + AMDGPU_VCE_LOW_LATENCY,
> + AMDGPU_VCE_REALTIME
> +};
> +
> struct amdgpu_vce {
> struct amdgpu_bo *vcpu_bo;
> uint64_t gpu_addr;
> @@ -72,4 +85,6 @@ void amdgpu_vce_ring_end_use(struct amdgpu_ring *ring);
> unsigned amdgpu_vce_ring_get_emit_ib_size(struct amdgpu_ring *ring);
> unsigned amdgpu_vce_ring_get_dma_frame_size(struct amdgpu_ring *ring);
>
> +enum vce_enc_ring_priority get_vce_ring_prio(int index);
> +
> #endif
* Re: [PATCH 1/5] drm/sched:add new priority level
2021-08-24 5:55 [PATCH 1/5] drm/sched:add new priority level Satyajit Sahu
` (3 preceding siblings ...)
2021-08-24 5:55 ` [PATCH 5/5] drm/amdgpu/vcn/vce:schedule encode job based on priority Satyajit Sahu
@ 2021-08-24 6:10 ` Christian König
2021-08-24 8:32 ` Sharma, Shashank
2021-08-24 11:57 ` Das, Nirmoy
4 siblings, 2 replies; 17+ messages in thread
From: Christian König @ 2021-08-24 6:10 UTC (permalink / raw)
To: Satyajit Sahu, amd-gfx
Cc: leo.liu, Alexander.Deucher, shashank.sharma, nirmoy.das
I haven't followed the previous discussion, but it looks like this
change is based on a misunderstanding.
Those here are the software priorities used in the scheduler, but what
you are working on are the hardware priorities.
Those are two completely different things which we shouldn't mix up.
Regards,
Christian.
Am 24.08.21 um 07:55 schrieb Satyajit Sahu:
> Adding a new priority level DRM_SCHED_PRIORITY_VERY_HIGH
>
> Signed-off-by: Satyajit Sahu <satyajit.sahu@amd.com>
> ---
> include/drm/gpu_scheduler.h | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index d18af49fd009..d0e5e234da5f 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -40,6 +40,7 @@ enum drm_sched_priority {
> DRM_SCHED_PRIORITY_MIN,
> DRM_SCHED_PRIORITY_NORMAL,
> DRM_SCHED_PRIORITY_HIGH,
> + DRM_SCHED_PRIORITY_VERY_HIGH,
> DRM_SCHED_PRIORITY_KERNEL,
>
> DRM_SCHED_PRIORITY_COUNT,
* Re: [PATCH 5/5] drm/amdgpu/vcn/vce:schedule encode job based on priority
2021-08-24 5:55 ` [PATCH 5/5] drm/amdgpu/vcn/vce:schedule encode job based on priority Satyajit Sahu
@ 2021-08-24 6:14 ` Christian König
0 siblings, 0 replies; 17+ messages in thread
From: Christian König @ 2021-08-24 6:14 UTC (permalink / raw)
To: Satyajit Sahu, amd-gfx
Cc: leo.liu, Alexander.Deucher, shashank.sharma, nirmoy.das
Am 24.08.21 um 07:55 schrieb Satyajit Sahu:
> Schedule the encode job properly in the VCE/VCN encode
> rings based on the priority set by UMD.
>
> Signed-off-by: Satyajit Sahu <satyajit.sahu@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 40 +++++++++++++++++++++++--
> drivers/gpu/drm/amd/amdgpu/vce_v2_0.c | 4 ++-
> drivers/gpu/drm/amd/amdgpu/vce_v3_0.c | 4 ++-
> drivers/gpu/drm/amd/amdgpu/vce_v4_0.c | 4 ++-
> drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c | 4 ++-
> drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c | 4 ++-
> drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c | 4 ++-
> drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c | 5 ++--
> 8 files changed, 58 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> index e7a010b7ca1f..adc11bb81787 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> @@ -73,15 +73,49 @@ static enum gfx_pipe_priority amdgpu_ctx_sched_prio_to_compute_prio(enum drm_sch
> }
> }
>
> +static enum gfx_pipe_priority amdgpu_ctx_sched_prio_to_vcn_prio(enum drm_sched_priority prio)
> +{
> + switch (prio) {
> + case DRM_SCHED_PRIORITY_HIGH:
> + return AMDGPU_VCN_ENC_PRIO_HIGH;
> + case DRM_SCHED_PRIORITY_VERY_HIGH:
> + return AMDGPU_VCN_ENC_PRIO_VERY_HIGH;
> + default:
> + return AMDGPU_VCN_ENC_PRIO_NORMAL;
> + }
> +}
> +
> +static enum gfx_pipe_priority amdgpu_ctx_sched_prio_to_vce_prio(enum drm_sched_priority prio)
> +{
> + switch (prio) {
> + case DRM_SCHED_PRIORITY_HIGH:
> + return AMDGPU_VCE_ENC_PRIO_HIGH;
> + case DRM_SCHED_PRIORITY_VERY_HIGH:
> + return AMDGPU_VCE_ENC_PRIO_VERY_HIGH;
> + default:
> + return AMDGPU_VCE_ENC_PRIO_NORMAL;
> + }
> +}
> +
> static unsigned int amdgpu_ctx_prio_sched_to_hw(struct amdgpu_device *adev,
> enum drm_sched_priority prio,
> u32 hw_ip)
> {
> unsigned int hw_prio;
>
> - hw_prio = (hw_ip == AMDGPU_HW_IP_COMPUTE) ?
> - amdgpu_ctx_sched_prio_to_compute_prio(prio) :
> - AMDGPU_RING_PRIO_DEFAULT;
> + switch(hw_ip) {
> + case AMDGPU_HW_IP_COMPUTE:
> + hw_prio = amdgpu_ctx_sched_prio_to_compute_prio(prio);
> + break;
> + case AMDGPU_HW_IP_VCN_ENC:
> + hw_prio = amdgpu_ctx_sched_prio_to_vcn_prio(prio);
> + break;
> + case AMDGPU_HW_IP_VCE:
> + hw_prio = amdgpu_ctx_sched_prio_to_vce_prio(prio);
> + break;
> + default:
> + hw_prio = AMDGPU_RING_PRIO_DEFAULT;
> + }
Ok now I see where your confusion is coming from. Those functions here
should not work with the scheduler priority to begin with.
> hw_ip = array_index_nospec(hw_ip, AMDGPU_HW_IP_NUM);
> if (adev->gpu_sched[hw_ip][hw_prio].num_scheds == 0)
> hw_prio = AMDGPU_RING_PRIO_DEFAULT;
The rest of the changes starting from here should not be part of this
patch, but rather of the respective VCN/VCE changes.
Regards,
Christian.
> diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v2_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v2_0.c
> index c7d28c169be5..2b6b7f1a77b9 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vce_v2_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vce_v2_0.c
> @@ -431,10 +431,12 @@ static int vce_v2_0_sw_init(void *handle)
> return r;
>
> for (i = 0; i < adev->vce.num_rings; i++) {
> + unsigned int hw_prio = get_vce_ring_prio(i);
> +
> ring = &adev->vce.ring[i];
> sprintf(ring->name, "vce%d", i);
> r = amdgpu_ring_init(adev, ring, 512, &adev->vce.irq, 0,
> - AMDGPU_RING_PRIO_DEFAULT, NULL);
> + hw_prio, NULL);
> if (r)
> return r;
> }
> diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
> index 3b82fb289ef6..5ce182a837f3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
> @@ -440,10 +440,12 @@ static int vce_v3_0_sw_init(void *handle)
> return r;
>
> for (i = 0; i < adev->vce.num_rings; i++) {
> + unsigned int hw_prio = get_vce_ring_prio(i);
> +
> ring = &adev->vce.ring[i];
> sprintf(ring->name, "vce%d", i);
> r = amdgpu_ring_init(adev, ring, 512, &adev->vce.irq, 0,
> - AMDGPU_RING_PRIO_DEFAULT, NULL);
> + hw_prio, NULL);
> if (r)
> return r;
> }
> diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
> index 90910d19db12..c085defaabfe 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
> @@ -463,6 +463,8 @@ static int vce_v4_0_sw_init(void *handle)
> }
>
> for (i = 0; i < adev->vce.num_rings; i++) {
> + unsigned int hw_prio = get_vce_ring_prio(i);
> +
> ring = &adev->vce.ring[i];
> sprintf(ring->name, "vce%d", i);
> if (amdgpu_sriov_vf(adev)) {
> @@ -478,7 +480,7 @@ static int vce_v4_0_sw_init(void *handle)
> ring->doorbell_index = adev->doorbell_index.uvd_vce.vce_ring2_3 * 2 + 1;
> }
> r = amdgpu_ring_init(adev, ring, 512, &adev->vce.irq, 0,
> - AMDGPU_RING_PRIO_DEFAULT, NULL);
> + hw_prio, NULL);
> if (r)
> return r;
> }
> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
> index 284bb42d6c86..a41b2c40487e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
> @@ -145,10 +145,12 @@ static int vcn_v1_0_sw_init(void *handle)
> SOC15_REG_OFFSET(UVD, 0, mmUVD_NO_OP);
>
> for (i = 0; i < adev->vcn.num_enc_rings; ++i) {
> + unsigned int hw_prio = get_vcn_enc_ring_prio(i);
> +
> ring = &adev->vcn.inst->ring_enc[i];
> sprintf(ring->name, "vcn_enc%d", i);
> r = amdgpu_ring_init(adev, ring, 512, &adev->vcn.inst->irq, 0,
> - AMDGPU_RING_PRIO_DEFAULT, NULL);
> + hw_prio, NULL);
> if (r)
> return r;
> }
> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
> index 8af567c546db..9729a383786b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
> @@ -159,6 +159,8 @@ static int vcn_v2_0_sw_init(void *handle)
> adev->vcn.inst->external.nop = SOC15_REG_OFFSET(UVD, 0, mmUVD_NO_OP);
>
> for (i = 0; i < adev->vcn.num_enc_rings; ++i) {
> + unsigned int hw_prio = get_vcn_enc_ring_prio(i);
> +
> ring = &adev->vcn.inst->ring_enc[i];
> ring->use_doorbell = true;
> if (!amdgpu_sriov_vf(adev))
> @@ -167,7 +169,7 @@ static int vcn_v2_0_sw_init(void *handle)
> ring->doorbell_index = (adev->doorbell_index.vcn.vcn_ring0_1 << 1) + 1 + i;
> sprintf(ring->name, "vcn_enc%d", i);
> r = amdgpu_ring_init(adev, ring, 512, &adev->vcn.inst->irq, 0,
> - AMDGPU_RING_PRIO_DEFAULT, NULL);
> + hw_prio, NULL);
> if (r)
> return r;
> }
> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
> index 888b17d84691..9eca70d3ff30 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
> @@ -194,6 +194,8 @@ static int vcn_v2_5_sw_init(void *handle)
> return r;
>
> for (i = 0; i < adev->vcn.num_enc_rings; ++i) {
> + unsigned int hw_prio = get_vcn_enc_ring_prio(i);
> +
> ring = &adev->vcn.inst[j].ring_enc[i];
> ring->use_doorbell = true;
>
> @@ -203,7 +205,7 @@ static int vcn_v2_5_sw_init(void *handle)
> sprintf(ring->name, "vcn_enc_%d.%d", j, i);
> r = amdgpu_ring_init(adev, ring, 512,
> &adev->vcn.inst[j].irq, 0,
> - AMDGPU_RING_PRIO_DEFAULT, NULL);
> + hw_prio, NULL);
> if (r)
> return r;
> }
> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
> index 47d4f04cbd69..926c9f4bfc21 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
> @@ -227,6 +227,8 @@ static int vcn_v3_0_sw_init(void *handle)
> return r;
>
> for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
> + unsigned int hw_prio = get_vcn_enc_ring_prio(j);
> +
> /* VCN ENC TRAP */
> r = amdgpu_irq_add_id(adev, amdgpu_ih_clientid_vcns[i],
> j + VCN_2_0__SRCID__UVD_ENC_GENERAL_PURPOSE, &adev->vcn.inst[i].irq);
> @@ -242,8 +244,7 @@ static int vcn_v3_0_sw_init(void *handle)
> }
> sprintf(ring->name, "vcn_enc_%d.%d", i, j);
> r = amdgpu_ring_init(adev, ring, 512, &adev->vcn.inst[i].irq, 0,
> - AMDGPU_RING_PRIO_DEFAULT,
> - &adev->vcn.inst[i].sched_score);
> + hw_prio, &adev->vcn.inst[i].sched_score);
> if (r)
> return r;
> }
* Re: [PATCH 1/5] drm/sched:add new priority level
2021-08-24 6:10 ` [PATCH 1/5] drm/sched:add new priority level Christian König
@ 2021-08-24 8:32 ` Sharma, Shashank
2021-08-24 8:55 ` Christian König
2021-08-24 11:57 ` Das, Nirmoy
1 sibling, 1 reply; 17+ messages in thread
From: Sharma, Shashank @ 2021-08-24 8:32 UTC (permalink / raw)
To: Christian König, Satyajit Sahu, amd-gfx
Cc: leo.liu, Alexander.Deucher, nirmoy.das
Hi Christian,
I am a bit curious here.
I thought it would be a good idea to add a new SW priority level, so
that any other driver can also utilize this SW infrastructure.
So it could be like, if you have a HW which matches with SW priority
levels, directly map your HW queue to the SW priority level, like:
DRM_SCHED_PRIORITY_VERY_HIGH: mapped to a queue in HW reserved for real
time or very high priority tasks, which can't be missed
DRM_SCHED_PRIORITY_HIGH: mapped to a queue of high priority tasks, for
better experience, like encode/decode operations.
DRM_SCHED_PRIORITY_NORMAL: default, mapped to a queue of tasks without a
priority context specified
DRM_SCHED_PRIORITY_MIN: queue for specifically mentioned low priority tasks
Depending on the HW we are running on, we can map these SW queues to
corresponding HW queues, isn't it?
Regards
Shashank
On 8/24/2021 11:40 AM, Christian König wrote:
> I haven't followed the previous discussion, but that looks like this
> change is based on a misunderstanding.
>
> Those here are the software priorities used in the scheduler, but what
> you are working on are the hardware priorities.
>
> That are two completely different things which we shouldn't mix up.
>
> Regards,
> Christian.
>
> Am 24.08.21 um 07:55 schrieb Satyajit Sahu:
>> Adding a new priority level DRM_SCHED_PRIORITY_VERY_HIGH
>>
>> Signed-off-by: Satyajit Sahu <satyajit.sahu@amd.com>
>> ---
>> include/drm/gpu_scheduler.h | 1 +
>> 1 file changed, 1 insertion(+)
>>
>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>> index d18af49fd009..d0e5e234da5f 100644
>> --- a/include/drm/gpu_scheduler.h
>> +++ b/include/drm/gpu_scheduler.h
>> @@ -40,6 +40,7 @@ enum drm_sched_priority {
>> DRM_SCHED_PRIORITY_MIN,
>> DRM_SCHED_PRIORITY_NORMAL,
>> DRM_SCHED_PRIORITY_HIGH,
>> + DRM_SCHED_PRIORITY_VERY_HIGH,
>> DRM_SCHED_PRIORITY_KERNEL,
>> DRM_SCHED_PRIORITY_COUNT,
>
* Re: [PATCH 1/5] drm/sched:add new priority level
2021-08-24 8:32 ` Sharma, Shashank
@ 2021-08-24 8:55 ` Christian König
2021-08-24 9:45 ` Sharma, Shashank
0 siblings, 1 reply; 17+ messages in thread
From: Christian König @ 2021-08-24 8:55 UTC (permalink / raw)
To: Sharma, Shashank, Satyajit Sahu, amd-gfx
Cc: leo.liu, Alexander.Deucher, nirmoy.das
Nope, those are two completely different things.
The DRM_SCHED_PRIORITY_* levels expose functionality of the software
scheduler, e.g. we try to serve kernel queues first and if those are
empty we use high priority, and so on.
But that functionality is completely independent from the hardware
priority handling. In other words, you can have different hardware
queues with priorities as well, each of them served by a software
scheduler.
Imagine the following setup: two hardware queues, one normal and one
low latency. Each hardware queue is then fed by a software scheduler
with the priorities low, normal, high and kernel.
This configuration then gives you 8 different priorities to use.
Regards,
Christian.
Am 24.08.21 um 10:32 schrieb Sharma, Shashank:
> Hi Christian,
> I am a bit curious here.
>
> I thought it would be a good idea to add a new SW priority level, so
> that any other driver can also utilize this SW infrastructure.
>
> So it could be like: if you have HW which matches the SW priority
> levels, directly map your HW queue to the SW priority level, like:
>
> DRM_SCHED_PRIORITY_VERY_HIGH: mapped to a queue in HW reserved for
> real time or very high priority tasks, which can't be missed
>
> DRM_SCHED_PRIORITY_HIGH: mapped to a queue of high priority tasks,
> for better experience, like encode/decode operations.
>
> DRM_SCHED_PRIORITY_NORMAL: default, mapped to a queue of tasks without
> a priority context specified
>
> DRM_SCHED_PRIORITY_MIN: queue for specifically mentioned low priority
> tasks
>
> Depending on the HW we are running on, we can map these SW queues to
> the corresponding HW queues, can't we?
>
> Regards
> Shashank
>
> On 8/24/2021 11:40 AM, Christian König wrote:
>> I haven't followed the previous discussion, but that looks like this
>> change is based on a misunderstanding.
>>
>> Those are the software priorities used in the scheduler, but
>> what you are working on are the hardware priorities.
>>
>> Those are two completely different things which we shouldn't mix up.
>>
>> Regards,
>> Christian.
>>
>> On 24.08.21 at 07:55, Satyajit Sahu wrote:
>>> Adding a new priority level DRM_SCHED_PRIORITY_VERY_HIGH
>>>
>>> Signed-off-by: Satyajit Sahu <satyajit.sahu@amd.com>
>>> ---
>>> include/drm/gpu_scheduler.h | 1 +
>>> 1 file changed, 1 insertion(+)
>>>
>>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>>> index d18af49fd009..d0e5e234da5f 100644
>>> --- a/include/drm/gpu_scheduler.h
>>> +++ b/include/drm/gpu_scheduler.h
>>> @@ -40,6 +40,7 @@ enum drm_sched_priority {
>>> DRM_SCHED_PRIORITY_MIN,
>>> DRM_SCHED_PRIORITY_NORMAL,
>>> DRM_SCHED_PRIORITY_HIGH,
>>> + DRM_SCHED_PRIORITY_VERY_HIGH,
>>> DRM_SCHED_PRIORITY_KERNEL,
>>> DRM_SCHED_PRIORITY_COUNT,
>>
* Re: [PATCH 1/5] drm/sched:add new priority level
2021-08-24 8:55 ` Christian König
@ 2021-08-24 9:45 ` Sharma, Shashank
2021-08-24 11:56 ` Christian König
0 siblings, 1 reply; 17+ messages in thread
From: Sharma, Shashank @ 2021-08-24 9:45 UTC (permalink / raw)
To: Christian König, Satyajit Sahu, amd-gfx
Cc: leo.liu, Alexander.Deucher, nirmoy.das
On 8/24/2021 2:25 PM, Christian König wrote:
> Nope, those are two completely different things.
>
> The DRM_SCHED_PRIORITY_* levels expose functionality of the software
> scheduler. E.g. we try to serve kernel queues first, and if those are
> empty we use high priority, etc.
>
> But that functionality is completely independent of the hardware
> priority handling. In other words, you can have different hardware queues
> with priorities as well, and each of them is served by a software scheduler.
>
> For example, imagine the following setup: two hardware queues, one
> normal and one low latency. Each hardware queue is then fed by a
> software scheduler with the priorities low, normal, high, kernel.
>
> This configuration then gives you 8 different priorities to use.
>
Thanks for the details. I was under quite a different impression; this
explanation helps.
I guess this also means that the HW queues are completely left to be
managed by the core driver (like amdgpu or i915, etc.) as of now, and the
DRM framework only provides SW schedulers?
Does this suggest scope for a common framework or abstraction layer for
HW queues in DRM? Most architectures/HW will at least have a
NORMAL and a higher-priority work queue, and their drivers might be
handling them in very similar ways.
- Shashank
> Regards,
> Christian.
>
> On 24.08.21 at 10:32, Sharma, Shashank wrote:
>> Hi Christian,
>> I am a bit curious here.
>>
>> I thought it would be a good idea to add a new SW priority level, so
>> that any other driver can also utilize this SW infrastructure.
>>
>> So it could be like: if you have HW which matches the SW priority
>> levels, directly map your HW queue to the SW priority level, like:
>>
>> DRM_SCHED_PRIORITY_VERY_HIGH: mapped to a queue in HW reserved for
>> real time or very high priority tasks, which can't be missed
>>
>> DRM_SCHED_PRIORITY_HIGH: mapped to a queue of high priority tasks,
>> for better experience, like encode/decode operations.
>>
>> DRM_SCHED_PRIORITY_NORMAL: default, mapped to a queue of tasks without
>> a priority context specified
>>
>> DRM_SCHED_PRIORITY_MIN: queue for specifically mentioned low priority
>> tasks
>>
>> Depending on the HW we are running on, we can map these SW queues to
>> the corresponding HW queues, can't we?
>>
>> Regards
>> Shashank
>>
>> On 8/24/2021 11:40 AM, Christian König wrote:
>>> I haven't followed the previous discussion, but that looks like this
>>> change is based on a misunderstanding.
>>>
>>> Those are the software priorities used in the scheduler, but
>>> what you are working on are the hardware priorities.
>>>
>>> Those are two completely different things which we shouldn't mix up.
>>>
>>> Regards,
>>> Christian.
>>>
>>> On 24.08.21 at 07:55, Satyajit Sahu wrote:
>>>> Adding a new priority level DRM_SCHED_PRIORITY_VERY_HIGH
>>>>
>>>> Signed-off-by: Satyajit Sahu <satyajit.sahu@amd.com>
>>>> ---
>>>> include/drm/gpu_scheduler.h | 1 +
>>>> 1 file changed, 1 insertion(+)
>>>>
>>>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>>>> index d18af49fd009..d0e5e234da5f 100644
>>>> --- a/include/drm/gpu_scheduler.h
>>>> +++ b/include/drm/gpu_scheduler.h
>>>> @@ -40,6 +40,7 @@ enum drm_sched_priority {
>>>> DRM_SCHED_PRIORITY_MIN,
>>>> DRM_SCHED_PRIORITY_NORMAL,
>>>> DRM_SCHED_PRIORITY_HIGH,
>>>> + DRM_SCHED_PRIORITY_VERY_HIGH,
>>>> DRM_SCHED_PRIORITY_KERNEL,
>>>> DRM_SCHED_PRIORITY_COUNT,
>>>
>
* Re: [PATCH 1/5] drm/sched:add new priority level
2021-08-24 9:45 ` Sharma, Shashank
@ 2021-08-24 11:56 ` Christian König
0 siblings, 0 replies; 17+ messages in thread
From: Christian König @ 2021-08-24 11:56 UTC (permalink / raw)
To: Sharma, Shashank, Satyajit Sahu, amd-gfx
Cc: leo.liu, Alexander.Deucher, nirmoy.das
On 24.08.21 at 11:45, Sharma, Shashank wrote:
> On 8/24/2021 2:25 PM, Christian König wrote:
>> Nope, those are two completely different things.
>>
>> The DRM_SCHED_PRIORITY_* levels expose functionality of the software
>> scheduler. E.g. we try to serve kernel queues first, and if those are
>> empty we use high priority, etc.
>>
>> But that functionality is completely independent of the hardware
>> priority handling. In other words, you can have different hardware
>> queues with priorities as well, and each of them is served by a software
>> scheduler.
>>
>> For example, imagine the following setup: two hardware queues, one
>> normal and one low latency. Each hardware queue is then fed by a
>> software scheduler with the priorities low, normal, high, kernel.
>>
>> This configuration then gives you 8 different priorities to use.
>>
>
> Thanks for the details. I was under quite a different impression; this
> explanation helps.
The problem is that we used the SW scheduler enum for init_priority
and override_priority, which is most likely a bad idea.
> I guess this also means that the HW queues are completely left to be
> managed by the core driver (like amdgpu or i915, etc.) as of now, and
> the DRM framework only provides SW schedulers?
Yes, exactly.
> Does this suggest scope for a common framework or abstraction layer
> for HW queues in DRM? Most architectures/HW will at least have
> a NORMAL and a higher-priority work queue, and their drivers might be
> handling them in very similar ways.
I don't think so. IIRC we even have generalized ring buffer functions in
the Linux kernel which barely anybody uses, because nearly every hw ring
buffer is different in one way or another.
Christian.
>
> - Shashank
>
>> Regards,
>> Christian.
>>
>> On 24.08.21 at 10:32, Sharma, Shashank wrote:
>>> Hi Christian,
>>> I am a bit curious here.
>>>
>>> I thought it would be a good idea to add a new SW priority level, so
>>> that any other driver can also utilize this SW infrastructure.
>>>
>>> So it could be like: if you have HW which matches the SW priority
>>> levels, directly map your HW queue to the SW priority level, like:
>>>
>>> DRM_SCHED_PRIORITY_VERY_HIGH: mapped to a queue in HW reserved for
>>> real time or very high priority tasks, which can't be missed
>>>
>>> DRM_SCHED_PRIORITY_HIGH: mapped to a queue of high priority tasks,
>>> for better experience, like encode/decode operations.
>>>
>>> DRM_SCHED_PRIORITY_NORMAL: default, mapped to a queue of tasks
>>> without a priority context specified
>>>
>>> DRM_SCHED_PRIORITY_MIN: queue for specifically mentioned low
>>> priority tasks
>>>
>>> Depending on the HW we are running on, we can map these SW queues to
>>> the corresponding HW queues, can't we?
>>>
>>> Regards
>>> Shashank
>>>
>>> On 8/24/2021 11:40 AM, Christian König wrote:
>>>> I haven't followed the previous discussion, but that looks like
>>>> this change is based on a misunderstanding.
>>>>
>>>> Those are the software priorities used in the scheduler, but
>>>> what you are working on are the hardware priorities.
>>>>
>>>> Those are two completely different things which we shouldn't mix up.
>>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>> On 24.08.21 at 07:55, Satyajit Sahu wrote:
>>>>> Adding a new priority level DRM_SCHED_PRIORITY_VERY_HIGH
>>>>>
>>>>> Signed-off-by: Satyajit Sahu <satyajit.sahu@amd.com>
>>>>> ---
>>>>> include/drm/gpu_scheduler.h | 1 +
>>>>> 1 file changed, 1 insertion(+)
>>>>>
>>>>> diff --git a/include/drm/gpu_scheduler.h
>>>>> b/include/drm/gpu_scheduler.h
>>>>> index d18af49fd009..d0e5e234da5f 100644
>>>>> --- a/include/drm/gpu_scheduler.h
>>>>> +++ b/include/drm/gpu_scheduler.h
>>>>> @@ -40,6 +40,7 @@ enum drm_sched_priority {
>>>>> DRM_SCHED_PRIORITY_MIN,
>>>>> DRM_SCHED_PRIORITY_NORMAL,
>>>>> DRM_SCHED_PRIORITY_HIGH,
>>>>> + DRM_SCHED_PRIORITY_VERY_HIGH,
>>>>> DRM_SCHED_PRIORITY_KERNEL,
>>>>> DRM_SCHED_PRIORITY_COUNT,
>>>>
>>
* Re: [PATCH 1/5] drm/sched:add new priority level
2021-08-24 6:10 ` [PATCH 1/5] drm/sched:add new priority level Christian König
2021-08-24 8:32 ` Sharma, Shashank
@ 2021-08-24 11:57 ` Das, Nirmoy
2021-08-24 12:07 ` Christian König
1 sibling, 1 reply; 17+ messages in thread
From: Das, Nirmoy @ 2021-08-24 11:57 UTC (permalink / raw)
To: Christian König, Satyajit Sahu, amd-gfx
Cc: leo.liu, Alexander.Deucher, shashank.sharma
Hi Christian,
On 8/24/2021 8:10 AM, Christian König wrote:
> I haven't followed the previous discussion, but that looks like this
> change is based on a misunderstanding.
In the previous discussion I sort of suggested having a new DRM priority,
as I didn't see any other way to map the priority provided by userspace
to this new 3rd hw priority.
Do you think we should use other information from userspace, like the
queue id, to determine the hardware priority?
Regards,
Nirmoy
>
> Those are the software priorities used in the scheduler, but what
> you are working on are the hardware priorities.
>
> Those are two completely different things which we shouldn't mix up.
>
> Regards,
> Christian.
>
> On 24.08.21 at 07:55, Satyajit Sahu wrote:
>> Adding a new priority level DRM_SCHED_PRIORITY_VERY_HIGH
>>
>> Signed-off-by: Satyajit Sahu <satyajit.sahu@amd.com>
>> ---
>> include/drm/gpu_scheduler.h | 1 +
>> 1 file changed, 1 insertion(+)
>>
>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>> index d18af49fd009..d0e5e234da5f 100644
>> --- a/include/drm/gpu_scheduler.h
>> +++ b/include/drm/gpu_scheduler.h
>> @@ -40,6 +40,7 @@ enum drm_sched_priority {
>> DRM_SCHED_PRIORITY_MIN,
>> DRM_SCHED_PRIORITY_NORMAL,
>> DRM_SCHED_PRIORITY_HIGH,
>> + DRM_SCHED_PRIORITY_VERY_HIGH,
>> DRM_SCHED_PRIORITY_KERNEL,
>> DRM_SCHED_PRIORITY_COUNT,
>
* Re: [PATCH 1/5] drm/sched:add new priority level
2021-08-24 11:57 ` Das, Nirmoy
@ 2021-08-24 12:07 ` Christian König
2021-08-24 12:39 ` Das, Nirmoy
0 siblings, 1 reply; 17+ messages in thread
From: Christian König @ 2021-08-24 12:07 UTC (permalink / raw)
To: Das, Nirmoy, Satyajit Sahu, amd-gfx
Cc: leo.liu, Alexander.Deucher, shashank.sharma
On 24.08.21 at 13:57, Das, Nirmoy wrote:
> Hi Christian,
>
> On 8/24/2021 8:10 AM, Christian König wrote:
>> I haven't followed the previous discussion, but that looks like this
>> change is based on a misunderstanding.
>
>
> In the previous discussion I sort of suggested having a new DRM priority,
> as I didn't see any other way to map the priority provided by userspace
> to this new 3rd hw priority.
>
> Do you think we should use other information from userspace, like the
> queue id, to determine the hardware priority?
If I'm not completely mistaken, we have dropped the concept of exposing
multiple queues/instances entirely.
What we should probably do is to use the (cleaned up) UAPI enum for
init_priority and override_priority instead of the drm scheduler enums.
Regards,
Christian.
>
>
> Regards,
>
> Nirmoy
>
>>
>> Those are the software priorities used in the scheduler, but
>> what you are working on are the hardware priorities.
>>
>> Those are two completely different things which we shouldn't mix up.
>>
>> Regards,
>> Christian.
>>
>> On 24.08.21 at 07:55, Satyajit Sahu wrote:
>>> Adding a new priority level DRM_SCHED_PRIORITY_VERY_HIGH
>>>
>>> Signed-off-by: Satyajit Sahu <satyajit.sahu@amd.com>
>>> ---
>>> include/drm/gpu_scheduler.h | 1 +
>>> 1 file changed, 1 insertion(+)
>>>
>>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>>> index d18af49fd009..d0e5e234da5f 100644
>>> --- a/include/drm/gpu_scheduler.h
>>> +++ b/include/drm/gpu_scheduler.h
>>> @@ -40,6 +40,7 @@ enum drm_sched_priority {
>>> DRM_SCHED_PRIORITY_MIN,
>>> DRM_SCHED_PRIORITY_NORMAL,
>>> DRM_SCHED_PRIORITY_HIGH,
>>> + DRM_SCHED_PRIORITY_VERY_HIGH,
>>> DRM_SCHED_PRIORITY_KERNEL,
>>> DRM_SCHED_PRIORITY_COUNT,
>>
* Re: [PATCH 1/5] drm/sched:add new priority level
2021-08-24 12:07 ` Christian König
@ 2021-08-24 12:39 ` Das, Nirmoy
2021-08-24 13:18 ` Christian König
0 siblings, 1 reply; 17+ messages in thread
From: Das, Nirmoy @ 2021-08-24 12:39 UTC (permalink / raw)
To: Christian König, Satyajit Sahu, amd-gfx
Cc: leo.liu, Alexander.Deucher, shashank.sharma
On 8/24/2021 2:07 PM, Christian König wrote:
> On 24.08.21 at 13:57, Das, Nirmoy wrote:
>> Hi Christian,
>>
>> On 8/24/2021 8:10 AM, Christian König wrote:
>>> I haven't followed the previous discussion, but that looks like this
>>> change is based on a misunderstanding.
>>
>>
>> In the previous discussion I sort of suggested having a new DRM
>> priority, as I didn't see any other way to map the priority provided by
>> userspace to this new 3rd hw priority.
>>
>> Do you think we should use other information from userspace, like
>> the queue id, to determine the hardware priority?
>
> If I'm not completely mistaken, we have dropped the concept of exposing
> multiple queues/instances entirely.
Yes, that is my understanding too.
>
> What we should probably do is to use the (cleaned up) UAPI enum for
> init_priority and override_priority instead of the drm scheduler enums.
I went through the drm code; now I see what you mean. So what we are now
doing is mapping AMDGPU_CTX_PRIORITY_* to DRM_SCHED_PRIORITY_* and
then to the hw priority, which is not nice.
We should rather map AMDGPU_CTX_PRIORITY_* to hw priority directly.
Regards,
Nirmoy
>
> Regards,
> Christian.
>
>>
>>
>> Regards,
>>
>> Nirmoy
>>
>>>
>>> Those are the software priorities used in the scheduler, but
>>> what you are working on are the hardware priorities.
>>>
>>> Those are two completely different things which we shouldn't mix up.
>>>
>>> Regards,
>>> Christian.
>>>
>>> On 24.08.21 at 07:55, Satyajit Sahu wrote:
>>>> Adding a new priority level DRM_SCHED_PRIORITY_VERY_HIGH
>>>>
>>>> Signed-off-by: Satyajit Sahu <satyajit.sahu@amd.com>
>>>> ---
>>>> include/drm/gpu_scheduler.h | 1 +
>>>> 1 file changed, 1 insertion(+)
>>>>
>>>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>>>> index d18af49fd009..d0e5e234da5f 100644
>>>> --- a/include/drm/gpu_scheduler.h
>>>> +++ b/include/drm/gpu_scheduler.h
>>>> @@ -40,6 +40,7 @@ enum drm_sched_priority {
>>>> DRM_SCHED_PRIORITY_MIN,
>>>> DRM_SCHED_PRIORITY_NORMAL,
>>>> DRM_SCHED_PRIORITY_HIGH,
>>>> + DRM_SCHED_PRIORITY_VERY_HIGH,
>>>> DRM_SCHED_PRIORITY_KERNEL,
>>>> DRM_SCHED_PRIORITY_COUNT,
>>>
>
* Re: [PATCH 1/5] drm/sched:add new priority level
2021-08-24 12:39 ` Das, Nirmoy
@ 2021-08-24 13:18 ` Christian König
2021-08-24 13:23 ` Das, Nirmoy
0 siblings, 1 reply; 17+ messages in thread
From: Christian König @ 2021-08-24 13:18 UTC (permalink / raw)
To: Das, Nirmoy, Christian König, Satyajit Sahu, amd-gfx
Cc: leo.liu, Alexander.Deucher, shashank.sharma
On 24.08.21 at 14:39, Das, Nirmoy wrote:
>
> On 8/24/2021 2:07 PM, Christian König wrote:
>> On 24.08.21 at 13:57, Das, Nirmoy wrote:
>>> Hi Christian,
>>>
>>> On 8/24/2021 8:10 AM, Christian König wrote:
>>>> I haven't followed the previous discussion, but that looks like
>>>> this change is based on a misunderstanding.
>>>
>>>
>>> In the previous discussion I sort of suggested having a new DRM
>>> priority, as I didn't see any other way to map the priority provided
>>> by userspace to this new 3rd hw priority.
>>>
>>> Do you think we should use other information from userspace, like
>>> the queue id, to determine the hardware priority?
>>
>> If I'm not completely mistaken, we have dropped the concept of
>> exposing multiple queues/instances entirely.
>
>
> Yes, that is my understanding too.
>
>>
>> What we should probably do is to use the (cleaned up) UAPI enum for
>> init_priority and override_priority instead of the drm scheduler enums.
>
>
> I went through the drm code; now I see what you mean. So what we are
> now doing is mapping AMDGPU_CTX_PRIORITY_* to
> DRM_SCHED_PRIORITY_* and then to the hw priority, which is not nice.
>
> We should rather map AMDGPU_CTX_PRIORITY_* to hw priority directly.
Exactly that's my idea, yes.
If you want, feel free to put this on your TODO list for a potential cleanup.
Christian.
>
>
> Regards,
>
> Nirmoy
>
>
>>
>> Regards,
>> Christian.
>>
>>>
>>>
>>> Regards,
>>>
>>> Nirmoy
>>>
>>>>
>>>> Those are the software priorities used in the scheduler, but
>>>> what you are working on are the hardware priorities.
>>>>
>>>> Those are two completely different things which we shouldn't mix up.
>>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>> On 24.08.21 at 07:55, Satyajit Sahu wrote:
>>>>> Adding a new priority level DRM_SCHED_PRIORITY_VERY_HIGH
>>>>>
>>>>> Signed-off-by: Satyajit Sahu <satyajit.sahu@amd.com>
>>>>> ---
>>>>> include/drm/gpu_scheduler.h | 1 +
>>>>> 1 file changed, 1 insertion(+)
>>>>>
>>>>> diff --git a/include/drm/gpu_scheduler.h
>>>>> b/include/drm/gpu_scheduler.h
>>>>> index d18af49fd009..d0e5e234da5f 100644
>>>>> --- a/include/drm/gpu_scheduler.h
>>>>> +++ b/include/drm/gpu_scheduler.h
>>>>> @@ -40,6 +40,7 @@ enum drm_sched_priority {
>>>>> DRM_SCHED_PRIORITY_MIN,
>>>>> DRM_SCHED_PRIORITY_NORMAL,
>>>>> DRM_SCHED_PRIORITY_HIGH,
>>>>> + DRM_SCHED_PRIORITY_VERY_HIGH,
>>>>> DRM_SCHED_PRIORITY_KERNEL,
>>>>> DRM_SCHED_PRIORITY_COUNT,
>>>>
>>
* Re: [PATCH 1/5] drm/sched:add new priority level
2021-08-24 13:18 ` Christian König
@ 2021-08-24 13:23 ` Das, Nirmoy
0 siblings, 0 replies; 17+ messages in thread
From: Das, Nirmoy @ 2021-08-24 13:23 UTC (permalink / raw)
To: Christian König, Christian König, Satyajit Sahu, amd-gfx
Cc: leo.liu, Alexander.Deucher, shashank.sharma
On 8/24/2021 3:18 PM, Christian König wrote:
> On 24.08.21 at 14:39, Das, Nirmoy wrote:
>>
>> On 8/24/2021 2:07 PM, Christian König wrote:
>>> On 24.08.21 at 13:57, Das, Nirmoy wrote:
>>>> Hi Christian,
>>>>
>>>> On 8/24/2021 8:10 AM, Christian König wrote:
>>>>> I haven't followed the previous discussion, but that looks like
>>>>> this change is based on a misunderstanding.
>>>>
>>>>
>>>> In the previous discussion I sort of suggested having a new DRM
>>>> priority, as I didn't see any other way to map the priority provided
>>>> by userspace to this new 3rd hw priority.
>>>>
>>>> Do you think we should use other information from userspace, like
>>>> the queue id, to determine the hardware priority?
>>>
>>> If I'm not completely mistaken, we have dropped the concept of
>>> exposing multiple queues/instances entirely.
>>
>>
>> Yes, that is my understanding too.
>>
>>>
>>> What we should probably do is to use the (cleaned up) UAPI enum for
>>> init_priority and override_priority instead of the drm scheduler enums.
>>
>>
>> I went through the drm code; now I see what you mean. So what we are
>> now doing is mapping AMDGPU_CTX_PRIORITY_* to
>> DRM_SCHED_PRIORITY_* and then to the hw priority, which is not nice.
>>
>> We should rather map AMDGPU_CTX_PRIORITY_* to hw priority directly.
>
> Exactly that's my idea, yes.
>
> If you want, feel free to put this on your TODO list for a potential cleanup.
Yes, sure. I will try to have a patch ready ASAP so that Satyajit can work
on top of that.
Regards,
Nirmoy
>
> Christian.
>
>>
>>
>> Regards,
>>
>> Nirmoy
>>
>>
>>>
>>> Regards,
>>> Christian.
>>>
>>>>
>>>>
>>>> Regards,
>>>>
>>>> Nirmoy
>>>>
>>>>>
>>>>> Those here are the software priorities used in the scheduler, but
>>>>> what you are working on are the hardware priorities.
>>>>>
>>>>> That are two completely different things which we shouldn't mix up.
>>>>>
>>>>> Regards,
>>>>> Christian.
>>>>>
>>>>> On 24.08.21 at 07:55, Satyajit Sahu wrote:
>>>>>> Adding a new priority level DRM_SCHED_PRIORITY_VERY_HIGH
>>>>>>
>>>>>> Signed-off-by: Satyajit Sahu <satyajit.sahu@amd.com>
>>>>>> ---
>>>>>> include/drm/gpu_scheduler.h | 1 +
>>>>>> 1 file changed, 1 insertion(+)
>>>>>>
>>>>>> diff --git a/include/drm/gpu_scheduler.h
>>>>>> b/include/drm/gpu_scheduler.h
>>>>>> index d18af49fd009..d0e5e234da5f 100644
>>>>>> --- a/include/drm/gpu_scheduler.h
>>>>>> +++ b/include/drm/gpu_scheduler.h
>>>>>> @@ -40,6 +40,7 @@ enum drm_sched_priority {
>>>>>> DRM_SCHED_PRIORITY_MIN,
>>>>>> DRM_SCHED_PRIORITY_NORMAL,
>>>>>> DRM_SCHED_PRIORITY_HIGH,
>>>>>> + DRM_SCHED_PRIORITY_VERY_HIGH,
>>>>>> DRM_SCHED_PRIORITY_KERNEL,
>>>>>> DRM_SCHED_PRIORITY_COUNT,
>>>>>
>>>
>
end of thread, other threads:[~2021-08-24 13:24 UTC | newest]
Thread overview: 17+ messages
2021-08-24 5:55 [PATCH 1/5] drm/sched:add new priority level Satyajit Sahu
2021-08-24 5:55 ` [PATCH 2/5] drm/amdgpu: map user set priority to drm sched priority Satyajit Sahu
2021-08-24 5:55 ` [PATCH 3/5] drm/amdgpu/vce:set vce ring priority level Satyajit Sahu
2021-08-24 6:09 ` Christian König
2021-08-24 5:55 ` [PATCH 4/5] drm/amdgpu/vcn:set vcn encode " Satyajit Sahu
2021-08-24 5:55 ` [PATCH 5/5] drm/amdgpu/vcn/vce:schedule encode job based on priorrity Satyajit Sahu
2021-08-24 6:14 ` Christian König
2021-08-24 6:10 ` [PATCH 1/5] drm/sched:add new priority level Christian König
2021-08-24 8:32 ` Sharma, Shashank
2021-08-24 8:55 ` Christian König
2021-08-24 9:45 ` Sharma, Shashank
2021-08-24 11:56 ` Christian König
2021-08-24 11:57 ` Das, Nirmoy
2021-08-24 12:07 ` Christian König
2021-08-24 12:39 ` Das, Nirmoy
2021-08-24 13:18 ` Christian König
2021-08-24 13:23 ` Das, Nirmoy