From: "Huang, Ray" <Ray.Huang@amd.com>
To: "Tuikov, Luben" <Luben.Tuikov@amd.com>
Cc: "Deucher, Alexander" <Alexander.Deucher@amd.com>,
	"Pelloux-prayer, Pierre-eric" <Pierre-eric.Pelloux-prayer@amd.com>,
	"Koenig, Christian" <Christian.Koenig@amd.com>,
	"amd-gfx@lists.freedesktop.org" <amd-gfx@lists.freedesktop.org>
Subject: Re: [PATCH] drm/amdgpu: GFX9, GFX10: GRBM requires 1-cycle delay
Date: Fri, 25 Oct 2019 09:26:20 +0000
Message-ID: <MN2PR12MB33095371C6336C43E4F88C43EC650@MN2PR12MB3309.namprd12.prod.outlook.com>

On Thu, Oct 24, 2019 at 09:16:55PM +0000, Tuikov, Luben wrote:
> The GRBM interface is now capable of bursting 1-cycle ops per register:
> a WRITE followed by another WRITE, or a WRITE followed by a READ--much
> faster than the previous multi-cycle per completed-transaction
> interface. This causes a problem whereby status registers requiring a
> read/write by hardware have a 1-cycle delay, due to the register
> update having to go through the GRBM interface.
>
> This patch adds this delay.
>
> A one-cycle read op is added after updating the invalidate request and
> before reading the invalidate-ACK status.
>
> See also commit
> 534991731cb5fa94b5519957646cf849ca10d17d.
>
> Signed-off-by: Luben Tuikov <luben.tuikov@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 4 ++--
>  drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c  | 4 ++--
>  drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c | 9 +++++++++
>  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c  | 8 ++++++++
>  drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c | 2 +-
>  5 files changed, 22 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> index ac43b1af69e3..0042868dbd53 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> @@ -5129,7 +5129,7 @@ static const struct amdgpu_ring_funcs gfx_v10_0_ring_funcs_gfx = {
> 		5 + /* COND_EXEC */
> 		7 + /* PIPELINE_SYNC */
> 		SOC15_FLUSH_GPU_TLB_NUM_WREG * 5 +
> -		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 7 +
> +		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 7 * 2 +
> 		2 + /* VM_FLUSH */
> 		8 + /* FENCE for VM_FLUSH */
> 		20 + /* GDS switch */
> @@ -5182,7 +5182,7 @@ static const struct amdgpu_ring_funcs gfx_v10_0_ring_funcs_compute = {
> 		5 + /* hdp invalidate */
> 		7 + /* gfx_v10_0_ring_emit_pipeline_sync */
> 		SOC15_FLUSH_GPU_TLB_NUM_WREG * 5 +
> -		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 7 +
> +		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 7 * 2 +
> 		2 + /* gfx_v10_0_ring_emit_vm_flush */
> 		8 + 8 + 8, /* gfx_v10_0_ring_emit_fence x3 for user fence, vm fence */
> 	.emit_ib_size =	7, /* gfx_v10_0_ring_emit_ib_compute */
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> index 9fe95e7693d5..9a7a717208de 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> @@ -6218,7 +6218,7 @@ static const struct amdgpu_ring_funcs gfx_v9_0_ring_funcs_gfx = {
> 		5 + /* COND_EXEC */
> 		7 + /* PIPELINE_SYNC */
> 		SOC15_FLUSH_GPU_TLB_NUM_WREG * 5 +
> -		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 7 +
> +		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 7 * 2 +
> 		2 + /* VM_FLUSH */
> 		8 + /* FENCE for VM_FLUSH */
> 		20 + /* GDS switch */
> @@ -6271,7 +6271,7 @@ static const struct amdgpu_ring_funcs gfx_v9_0_ring_funcs_compute = {
> 		5 + /* hdp invalidate */
> 		7 + /* gfx_v9_0_ring_emit_pipeline_sync */
> 		SOC15_FLUSH_GPU_TLB_NUM_WREG * 5 +
> -		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 7 +
> +		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 7 * 2 +
> 		2 + /* gfx_v9_0_ring_emit_vm_flush */
> 		8 + 8 + 8, /* gfx_v9_0_ring_emit_fence x3 for user fence, vm fence */
> 	.emit_ib_size = 7, /* gfx_v9_0_ring_emit_ib_compute */
> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
> index 6e1b25bd1fe7..100d526e9a42 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
> @@ -346,6 +346,15 @@ static uint64_t gmc_v10_0_emit_flush_gpu_tlb(struct amdgpu_ring *ring,
>
> 	amdgpu_ring_emit_wreg(ring, hub->vm_inv_eng0_req + eng, req);
>
> +	/* Insert a dummy read to delay one cycle before the ACK
> +	 * inquiry.
> +	 */
> +	if (ring->funcs->type == AMDGPU_RING_TYPE_SDMA ||
> +	    ring->funcs->type == AMDGPU_RING_TYPE_GFX ||
> +	    ring->funcs->type == AMDGPU_RING_TYPE_COMPUTE)
> +		amdgpu_ring_emit_reg_wait(ring,
> +					  hub->vm_inv_eng0_req + eng, 0, 0);
> +
> 	/* wait for the invalidate to complete */
> 	amdgpu_ring_emit_reg_wait(ring, hub->vm_inv_eng0_ack + eng,
> 				  1 << vmid, 1 << vmid);
> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> index 9f2a893871ec..8f3097e45299 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> @@ -495,6 +495,14 @@ static uint64_t gmc_v9_0_emit_flush_gpu_tlb(struct amdgpu_ring *ring,
> 	amdgpu_ring_emit_wreg(ring, hub->ctx0_ptb_addr_hi32 + (2 * vmid),
> 			      upper_32_bits(pd_addr));
>
> +	/* Insert a dummy read to delay one cycle before the ACK
> +	 * inquiry.
> +	 */
> +	if (ring->funcs->type == AMDGPU_RING_TYPE_GFX ||
> +	    ring->funcs->type == AMDGPU_RING_TYPE_COMPUTE)
> +		amdgpu_ring_emit_reg_wait(ring,
> +					  hub->vm_inv_eng0_req + eng, 0, 0);

The workaround should be to add a dummy read (a one-cycle delay) after
we write VM_INVALIDATE_ENGx_REQ and before we poll
VM_INVALIDATE_ENGx_ACK. If you add it here, it cannot resolve the
issue: on this path both the REQ write and the ACK poll are emitted
later, back-to-back, by amdgpu_ring_emit_reg_write_reg_wait(). I think
you should implement the dummy read inside that function instead:
amdgpu_ring_emit_reg_write_reg_wait().

Thanks,
Ray

> +
> 	amdgpu_ring_emit_reg_write_reg_wait(ring, hub->vm_inv_eng0_req + eng,
> 					    hub->vm_inv_eng0_ack + eng,
> 					    req, 1 << vmid);
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> index b8fdb192f6d6..0c41b4fdc58b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> @@ -1588,7 +1588,7 @@ static const struct amdgpu_ring_funcs sdma_v5_0_ring_funcs = {
> 		6 + /* sdma_v5_0_ring_emit_pipeline_sync */
> 		/* sdma_v5_0_ring_emit_vm_flush */
> 		SOC15_FLUSH_GPU_TLB_NUM_WREG * 3 +
> -		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 6 +
> +		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 6 * 2 +
> 		10 + 10 + 10, /* sdma_v5_0_ring_emit_fence x3 for user fence, vm fence */
> 	.emit_ib_size = 7 + 6, /* sdma_v5_0_ring_emit_ib */
> 	.emit_ib = sdma_v5_0_ring_emit_ib,
> --
> 2.23.0.385.gbc12974a89
>
> _______________________________________________
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx