From: Wen He <wen.he_1@nxp.com>
To: Vinod <vkoul@kernel.org>
Cc: "dmaengine@vger.kernel.org" <dmaengine@vger.kernel.org>,
	"robh+dt@kernel.org" <robh+dt@kernel.org>,
	"devicetree@vger.kernel.org" <devicetree@vger.kernel.org>,
	Leo Li <leoyang.li@nxp.com>, Jiafei Pan <jiafei.pan@nxp.com>,
	Jiaheng Fan <jiaheng.fan@nxp.com>
Subject: [v4,2/6] dmaengine: fsl-qdma: Add qDMA controller driver for Layerscape SoCs
Date: Thu, 17 May 2018 11:27:45 +0000	[thread overview]
Message-ID: <DB6PR0401MB2503E7EEB3C97028AD131C7CE2910@DB6PR0401MB2503.eurprd04.prod.outlook.com> (raw)

> -----Original Message-----
> From: Vinod [mailto:vkoul@kernel.org]
> Sent: 2018年5月17日 14:05
> To: Wen He <wen.he_1@nxp.com>
> Cc: vinod.koul@intel.com; dmaengine@vger.kernel.org; robh+dt@kernel.org;
> devicetree@vger.kernel.org; Leo Li <leoyang.li@nxp.com>; Jiafei Pan
> <jiafei.pan@nxp.com>; Jiaheng Fan <jiaheng.fan@nxp.com>
> Subject: Re: [v4 2/6] dmaengine: fsl-qdma: Add qDMA controller driver for
> Layerscape SoCs
> 
> On 14-05-18, 20:03, Wen He wrote:
> 
> > +
> > +/* Registers for bit and genmask */
> > +#define FSL_QDMA_CQIDR_SQT		0x8000
> 
> BIT() ?

Sorry, I should replace 0x8000 with BIT(15).

> 
> > +#define QDMA_CCDF_MASK			GENMASK(28, 20)
> > +#define QDMA_CCDF_FOTMAT		BIT(29)
> > +#define QDMA_CCDF_SER			BIT(30)
> > +#define QDMA_SG_FIN			BIT(30)
> > +#define QDMA_SG_EXT			BIT(31)
> > +#define QDMA_SG_LEN_MASK		GENMASK(29, 0)
> > +
> > +#define QDMA_CCDF_STATUS		20
> > +#define QDMA_CCDF_OFFSET		20
> > +#define FSL_QDMA_BCQIER_CQTIE		0x8000
> > +#define FSL_QDMA_BCQIER_CQPEIE		0x800000
> > +#define FSL_QDMA_BSQICR_ICEN		0x80000000
> 
> here and few other places as well
> 

Got it, I will replace these with BIT() definitions in the next version.

> > +
> > +u64 pre_addr, pre_queue;
> 
> why do we have a global?

Let me explain how the qDMA works.

First, status notifications for DMA jobs are reported back to the status queue.
Status information is carried within the command descriptor status/command field,
bits 120-127. The command descriptor dequeue pointer advances only after the
transaction has completed and the status information field has been updated.

Then, the command descriptor address field will point to the command descriptor in
its original format. It is the responsibility of the status queue consumer
to deallocate buffers as needed when the command descriptor address pointer is non-zero.

More details of the Status Queue can be found in the QorIQ Layerscape SoC datasheet.

So these variables are used to record the latest command descriptor queue and
status-field address that were seen.

If they were local variables, they would be reset to zero on every call, which is not what I want.

> 
> > +static void fsl_qdma_comp_fill_memcpy(struct fsl_qdma_comp *fsl_comp,
> > +					dma_addr_t dst, dma_addr_t src, u32 len) {
> > +	struct fsl_qdma_format *ccdf, *csgf_desc, *csgf_src, *csgf_dest;
> > +	struct fsl_qdma_sdf *sdf;
> > +	struct fsl_qdma_ddf *ddf;
> > +
> > +	ccdf = (struct fsl_qdma_format *)fsl_comp->virt_addr;
> 
> Cast are not required to/away from void
> 

Do you mean: remove the cast?

> > +	csgf_desc = (struct fsl_qdma_format *)fsl_comp->virt_addr + 1;
> > +	csgf_src = (struct fsl_qdma_format *)fsl_comp->virt_addr + 2;
> > +	csgf_dest = (struct fsl_qdma_format *)fsl_comp->virt_addr + 3;
> > +	sdf = (struct fsl_qdma_sdf *)fsl_comp->virt_addr + 4;
> > +	ddf = (struct fsl_qdma_ddf *)fsl_comp->virt_addr + 5;
> > +
> > +	memset(fsl_comp->virt_addr, 0, FSL_QDMA_BASE_BUFFER_SIZE);
> > +	/* Head Command Descriptor(Frame Descriptor) */
> > +	qdma_desc_addr_set64(ccdf, fsl_comp->bus_addr + 16);
> > +	qdma_ccdf_set_format(ccdf, qdma_ccdf_get_offset(ccdf));
> > +	qdma_ccdf_set_ser(ccdf, qdma_ccdf_get_status(ccdf));
> > +
> > +	/* Status notification is enqueued to status queue. */
> > +	/* Compound Command Descriptor(Frame List Table) */
> > +	qdma_desc_addr_set64(csgf_desc, fsl_comp->bus_addr + 64);
> > +	/* It must be 32 as Compound S/G Descriptor */
> > +	qdma_csgf_set_len(csgf_desc, 32);
> > +	qdma_desc_addr_set64(csgf_src, src);
> > +	qdma_csgf_set_len(csgf_src, len);
> > +	qdma_desc_addr_set64(csgf_dest, dst);
> > +	qdma_csgf_set_len(csgf_dest, len);
> > +	/* This entry is the last entry. */
> > +	qdma_csgf_set_f(csgf_dest, len);
> > +	/* Descriptor Buffer */
> > +	sdf->cmd = cpu_to_le32(
> > +			FSL_QDMA_CMD_RWTTYPE <<
> FSL_QDMA_CMD_RWTTYPE_OFFSET);
> > +	ddf->cmd = cpu_to_le32(
> > +			FSL_QDMA_CMD_RWTTYPE <<
> FSL_QDMA_CMD_RWTTYPE_OFFSET);
> > +	ddf->cmd |= cpu_to_le32(
> > +			FSL_QDMA_CMD_LWC << FSL_QDMA_CMD_LWC_OFFSET); }
> > +
> > +/*
> > + * Pre-request full command descriptor for enqueue.
> > + */
> > +static int fsl_qdma_pre_request_enqueue_desc(struct fsl_qdma_queue
> > +*queue) {
> > +	struct fsl_qdma_comp *comp_temp;
> > +	int i;
> > +
> > +	for (i = 0; i < queue->n_cq; i++) {
> > +		comp_temp = kzalloc(sizeof(*comp_temp), GFP_KERNEL);
> > +		if (!comp_temp)
> > +			return -ENOMEM;
> 
> where is the previous allocations freed? Return of this function is not even
> checked??
> 

Sorry, I forgot to handle that.

> > +		comp_temp->virt_addr = dma_pool_alloc(queue->comp_pool,
> > +						      GFP_KERNEL,
> > +						      &comp_temp->bus_addr);
> > +		if (!comp_temp->virt_addr)
> > +			return -ENOMEM;
> 
> and here too

Okay, I will fix that too.

> > +		list_add_tail(&comp_temp->list, &queue->comp_free);
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +/*
> > + * Request a command descriptor for enqueue.
> > + */
> > +static struct fsl_qdma_comp *fsl_qdma_request_enqueue_desc(
> > +					struct fsl_qdma_chan *fsl_chan,
> > +					unsigned int dst_nents,
> > +					unsigned int src_nents)
> > +{
> > +	struct fsl_qdma_comp *comp_temp;
> > +	struct fsl_qdma_sg *sg_block;
> > +	struct fsl_qdma_queue *queue = fsl_chan->queue;
> > +	unsigned long flags;
> > +	unsigned int dst_sg_entry_block, src_sg_entry_block, sg_entry_total,
> > +i;
> > +
> > +	spin_lock_irqsave(&queue->queue_lock, flags);
> > +	if (list_empty(&queue->comp_free)) {
> > +		spin_unlock_irqrestore(&queue->queue_lock, flags);
> > +		comp_temp = kzalloc(sizeof(*comp_temp), GFP_KERNEL);
> > +		if (!comp_temp)
> > +			return NULL;
> > +		comp_temp->virt_addr = dma_pool_alloc(queue->comp_pool,
> > +						      GFP_KERNEL,
> > +						      &comp_temp->bus_addr);
> > +		if (!comp_temp->virt_addr) {
> > +			kfree(comp_temp);
> > +			return NULL;
> > +		}
> > +
> > +	} else {
> > +		comp_temp = list_first_entry(&queue->comp_free,
> > +					     struct fsl_qdma_comp,
> > +					     list);
> > +		list_del(&comp_temp->list);
> > +		spin_unlock_irqrestore(&queue->queue_lock, flags);
> > +	}
> > +
> > +	if (dst_nents != 0)
> > +		dst_sg_entry_block = dst_nents /
> > +					(FSL_QDMA_EXPECT_SG_ENTRY_NUM - 1) + 1;
> 
> DIV_ROUND_UP()?
> 

The DIV_ROUND_UP() definition is:

#define DIV_ROUND_UP __KERNEL_DIV_ROUND_UP
#define __KERNEL_DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

But the expression here is 'n / d + 1' with d = FSL_QDMA_EXPECT_SG_ENTRY_NUM - 1,
which is not the same as DIV_ROUND_UP(n, d)?

> > +	else
> > +		dst_sg_entry_block = 0;
> > +
> > +	if (src_nents != 0)
> > +		src_sg_entry_block = src_nents /
> > +					(FSL_QDMA_EXPECT_SG_ENTRY_NUM - 1) + 1;
> > +	else
> > +		src_sg_entry_block = 0;
> > +
> > +	sg_entry_total = dst_sg_entry_block + src_sg_entry_block;
> > +	if (sg_entry_total) {
> > +		sg_block = kzalloc(sizeof(*sg_block) *
> > +					      sg_entry_total,
> > +					      GFP_KERNEL);
> 
> kcalloc?
> 

All right, I will replace it with kcalloc().

> > +		if (!sg_block) {
> > +			dma_pool_free(queue->comp_pool,
> > +					comp_temp->virt_addr,
> > +					comp_temp->bus_addr);
> > +			return NULL;
> > +		}
> > +		comp_temp->sg_block = sg_block;
> > +		for (i = 0; i < sg_entry_total; i++) {
> > +			sg_block->virt_addr = dma_pool_alloc(queue->sg_pool,
> > +							GFP_KERNEL,
> > +							&sg_block->bus_addr);
> 
> no check if this succeeded?
> 

Sorry, I will fix that in the next version.

> > +			memset(sg_block->virt_addr, 0,
> > +					FSL_QDMA_EXPECT_SG_ENTRY_NUM * 16);
> 
> why FSL_QDMA_EXPECT_SG_ENTRY_NUM * 16? and not what you
> allocated?
> 

See line 497: the sg_pool buffers are created with size FSL_QDMA_EXPECT_SG_ENTRY_NUM * 16,
so the memset clears exactly what was allocated.

> > +static struct fsl_qdma_queue *fsl_qdma_alloc_queue_resources(
> > +					struct platform_device *pdev,
> > +					unsigned int queue_num)
> > +{
> > +	struct fsl_qdma_queue *queue_head, *queue_temp;
> > +	int ret, len, i;
> > +	unsigned int queue_size[FSL_QDMA_QUEUE_MAX];
> > +
> > +	if (queue_num > FSL_QDMA_QUEUE_MAX)
> > +		queue_num = FSL_QDMA_QUEUE_MAX;
> > +	len = sizeof(*queue_head) * queue_num;
> > +	queue_head = devm_kzalloc(&pdev->dev, len, GFP_KERNEL);
> > +	if (!queue_head)
> > +		return NULL;
> > +
> > +	ret = device_property_read_u32_array(&pdev->dev, "queue-sizes",
> > +					queue_size, queue_num);
> > +	if (ret) {
> > +		dev_err(&pdev->dev, "Can't get queue-sizes.\n");
> > +		return NULL;
> > +	}
> > +
> > +	for (i = 0; i < queue_num; i++) {
> > +		if (queue_size[i] > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX
> > +			|| queue_size[i] < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
> > +			dev_err(&pdev->dev, "Get wrong queue-sizes.\n");
> > +			return NULL;
> > +		}
> > +		queue_temp = queue_head + i;
> > +		queue_temp->cq = dma_alloc_coherent(&pdev->dev,
> > +						sizeof(struct fsl_qdma_format) *
> > +						queue_size[i],
> > +						&queue_temp->bus_addr,
> > +						GFP_KERNEL);
> > +		if (!queue_temp->cq)
> > +			return NULL;
> > +		queue_temp->n_cq = queue_size[i];
> > +		queue_temp->id = i;
> > +		queue_temp->virt_head = queue_temp->cq;
> > +		queue_temp->virt_tail = queue_temp->cq;
> > +		/*
> > +		 * The dma pool for queue command buffer
> > +		 */
> > +		queue_temp->comp_pool = dma_pool_create("comp_pool",
> > +						&pdev->dev,
> > +						FSL_QDMA_BASE_BUFFER_SIZE,
> > +						16, 0);
> > +		if (!queue_temp->comp_pool)
> > +			goto err_free_comp;
> > +
> > +		/*
> > +		 * The dma pool for queue command buffer
> 
> same comment as prev?
> 

Okay, the second comment is a copy-paste duplicate and should be removed.

> > +static int fsl_qdma_queue_transfer_complete(struct fsl_qdma_engine
> > +*fsl_qdma) {
> > +	struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue;
> > +	struct fsl_qdma_queue *fsl_status = fsl_qdma->status;
> > +	struct fsl_qdma_queue *temp_queue;
> > +	struct fsl_qdma_comp *fsl_comp;
> > +	struct fsl_qdma_format *status_addr;
> > +	struct fsl_qdma_format *csgf_src;
> > +	void __iomem *block = fsl_qdma->block_base;
> > +	u32 reg, i;
> > +	bool duplicate, duplicate_handle;
> > +
> > +	while (1) {
> > +		duplicate = 0;
> > +		duplicate_handle = 0;
> > +		reg = qdma_readl(fsl_qdma, block + FSL_QDMA_BSQSR);
> > +		if (reg & FSL_QDMA_BSQSR_QE)
> > +			return 0;
> > +		status_addr = fsl_status->virt_head;
> > +		if (qdma_ccdf_get_queue(status_addr) == pre_queue &&
> > +			qdma_ccdf_addr_get64(status_addr) == pre_addr)
> > +			duplicate = 1;
> > +		i = qdma_ccdf_get_queue(status_addr);
> > +		pre_queue = qdma_ccdf_get_queue(status_addr);
> > +		pre_addr = qdma_ccdf_addr_get64(status_addr);
> > +		temp_queue = fsl_queue + i;
> > +		spin_lock(&temp_queue->queue_lock);
> > +		if (list_empty(&temp_queue->comp_used)) {
> > +			if (duplicate)
> > +				duplicate_handle = 1;
> > +			else {
> > +				spin_unlock(&temp_queue->queue_lock);
> > +				return -1;
> 
> -1? really. You are in while(1) wouldn't break make sense here?
> 

Do you mean: use break here instead?

> > +static enum dma_status fsl_qdma_tx_status(struct dma_chan *chan,
> > +		dma_cookie_t cookie, struct dma_tx_state *txstate) {
> > +	struct fsl_qdma_chan *fsl_chan = to_fsl_qdma_chan(chan);
> 
> why do you need this here, its unused
> 

Okay, I will remove it.

> --
> ~Vinod
--
Best Regards,
Wen
