From: Lizhi Hou <lizhi.hou@amd.com>
To: Christophe JAILLET <christophe.jaillet@wanadoo.fr>,
<vkoul@kernel.org>, <dmaengine@vger.kernel.org>,
<linux-kernel@vger.kernel.org>
Cc: Nishad Saraf <nishads@amd.com>, <max.zhen@amd.com>,
<sonal.santan@amd.com>, <nishad.saraf@amd.com>
Subject: Re: [PATCH V1 QDMA 1/1] dmaengine: amd: qdma: Add AMD QDMA driver
Date: Tue, 30 May 2023 11:29:45 -0700
Message-ID: <aa375470-6297-0a8f-c520-7c3481520990@amd.com>
In-Reply-To: <41f58a00-1ab5-3eae-0e32-0f6e05282cf1@wanadoo.fr>
On 5/27/23 06:33, Christophe JAILLET wrote:
> Le 26/05/2023 à 18:49, Lizhi Hou a écrit :
>> From: Nishad Saraf <nishads@amd.com>
>>
>> Adds a driver to enable PCIe boards that use the AMD QDMA (Queue-based
>> Direct Memory Access) subsystem, for example the Xilinx Alveo V70 AI
>> Accelerator devices.
>> https://www.xilinx.com/applications/data-center/v70.html
>>
>> The primary mechanism to transfer data using the QDMA is for the QDMA
>> engine to operate on instructions (descriptors) provided by the host
>> operating system. Using the descriptors, the QDMA can move data in both
>> the Host to Card (H2C) and the Card to Host (C2H) directions.
>> The QDMA provides a per-queue option to direct DMA traffic to either
>> an AXI4 memory-mapped (MM) interface or an AXI4-Stream interface.
>>
>> The hardware details are documented at
>> https://docs.xilinx.com/r/en-US/pg302-qdma
>>
>> Implements dmaengine APIs to support MM DMA transfers.
>> - probe the available DMA channels
>> - use dma_slave_map for channel lookup
>> - use virtual channel to manage dmaengine tx descriptors
>> - implement device_prep_slave_sg callback to handle host scatter gather
>> list
>> - implement descriptor metadata operations to set device address for DMA
>> transfer
>>
>> Signed-off-by: Nishad Saraf <nishads@amd.com>
>> Signed-off-by: Lizhi Hou <lizhi.hou@amd.com>
>> ---
>
> [...]
>
>> +/**
>> + * qdma_alloc_queue_resources() - Allocate queue resources
>> + * @chan: DMA channel
>> + */
>> +static int qdma_alloc_queue_resources(struct dma_chan *chan)
>> +{
>> + struct qdma_queue *queue = to_qdma_queue(chan);
>> + struct qdma_device *qdev = queue->qdev;
>> + struct qdma_ctxt_sw_desc desc;
>> + size_t size;
>> + int ret;
>> +
>> + ret = qdma_clear_queue_context(queue);
>> + if (ret)
>> + return ret;
>> +
>> + size = queue->ring_size * QDMA_MM_DESC_SIZE;
>> + queue->desc_base = dma_alloc_coherent(qdev->dma_dev.dev, size,
>> + &queue->dma_desc_base,
>> + GFP_KERNEL | __GFP_ZERO);
>
> Nit: useless (but harmless).
> AFAIK, dma_alloc_coherent() has always returned zeroed memory.
> (Should you remove the __GFP_ZERO, note there is another usage below.)
Sure. I will remove __GFP_ZERO.
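The allocation would then become something like this (same fields as in
the quoted hunk, only the flag dropped, since dma_alloc_coherent()
already returns zeroed memory):

	queue->desc_base = dma_alloc_coherent(qdev->dma_dev.dev, size,
					      &queue->dma_desc_base,
					      GFP_KERNEL);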
>
>> + if (!queue->desc_base) {
>> + qdma_err(qdev, "Failed to allocate descriptor ring");
>> + return -ENOMEM;
>> + }
>> +
>
> [...]
>
>> +/**
>> + * struct qdma_platdata - Platform specific data for QDMA engine
>> + * @max_mm_channels: Maximum number of MM DMA channels in each
>> direction
>> + * @device_map: DMA slave map
>> + * @irq_index: The index of first IRQ
>> + */
>> +struct qdma_platdata {
>> + u32 max_mm_channels;
>> + struct dma_slave_map *device_map;
>> + u32 irq_index;
>> +};
>
> Noob question: this struct is only retrieved from dev_get_platdata(),
> but there is no dev_set_platdata().
> How is the link made? How is this structure filled?
The platdata is supplied when the platform device is registered. For
example, a parent PCI driver may do

struct qdma_platdata data = { ... };
platform_device_register_resndata(..., &data, ...);

A fuller sketch follows below.
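For illustration, a minimal, hypothetical sketch of a parent PCI driver
doing this. The names my_pci_probe, my_slave_map, the "amd-qdma" device
name, the header path, and the BAR setup are made up for the example and
are not part of this patch:

#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/dmaengine.h>
#include <linux/platform_data/amd_qdma.h>	/* platdata header added by this patch (path assumed) */

static struct dma_slave_map my_slave_map[] = {
	/* devname / slave channel name / filter param are device specific */
	{ "my-client", "h2c-0", NULL },
};

static int my_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	struct qdma_platdata data = {
		.max_mm_channels = 1,
		.device_map	 = my_slave_map,
		.irq_index	 = 0,
	};
	struct resource res[] = {
		DEFINE_RES_MEM(pci_resource_start(pdev, 0),
			       pci_resource_len(pdev, 0)),
	};
	struct platform_device *qdma_pdev;

	/*
	 * The platform core copies 'data'; the QDMA driver later reads it
	 * back with dev_get_platdata() in its probe().
	 */
	qdma_pdev = platform_device_register_resndata(&pdev->dev, "amd-qdma",
						      PLATFORM_DEVID_AUTO,
						      res, ARRAY_SIZE(res),
						      &data, sizeof(data));
	return PTR_ERR_OR_ZERO(qdma_pdev);
}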
>
>
> Should it matter, keeping the two u32 fields one after the other would
> avoid a hole.
Sure. I will fix this.
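i.e., the struct would become something like:

struct qdma_platdata {
	u32 max_mm_channels;
	u32 irq_index;
	struct dma_slave_map *device_map;
};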
Thanks,
Lizhi
>
> CJ
>
>> +
>> +#endif /* _PLATDATA_AMD_QDMA_H */
>