devicetree.vger.kernel.org archive mirror
From: Wen He <wen.he_1@nxp.com>
To: Vinod <vkoul@kernel.org>
Cc: "dmaengine@vger.kernel.org" <dmaengine@vger.kernel.org>,
	"robh+dt@kernel.org" <robh+dt@kernel.org>,
	"devicetree@vger.kernel.org" <devicetree@vger.kernel.org>,
	Leo Li <leoyang.li@nxp.com>, Jiafei Pan <jiafei.pan@nxp.com>,
	Jiaheng Fan <jiaheng.fan@nxp.com>
Subject: RE: [v4 2/6] dmaengine: fsl-qdma: Add qDMA controller driver for Layerscape SoCs
Date: Fri, 18 May 2018 10:04:30 +0000	[thread overview]
Message-ID: <DB6PR0401MB250380A95367C42C1BC79595E2900@DB6PR0401MB2503.eurprd04.prod.outlook.com> (raw)
In-Reply-To: <20180518042125.GE2932@vkoul-mobl>



> -----Original Message-----
> From: dmaengine-owner@vger.kernel.org
> [mailto:dmaengine-owner@vger.kernel.org] On Behalf Of Vinod
> Sent: 18 May 2018 12:21
> To: Wen He <wen.he_1@nxp.com>
> Cc: dmaengine@vger.kernel.org; robh+dt@kernel.org;
> devicetree@vger.kernel.org; Leo Li <leoyang.li@nxp.com>; Jiafei Pan
> <jiafei.pan@nxp.com>; Jiaheng Fan <jiaheng.fan@nxp.com>
> Subject: Re: [v4 2/6] dmaengine: fsl-qdma: Add qDMA controller driver for
> Layerscape SoCs
> 
> On 17-05-18, 11:27, Wen He wrote:
> > > > +
> > > > +/* Registers for bit and genmask */
> > > > +#define FSL_QDMA_CQIDR_SQT		0x8000
> > >
> > > BIT() ?
> >
> > Sorry, maybe I should replace 0x8000 with BIT(15).
> 
> yes please
> 
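For clarity, the change I will make (using the BIT() macro):

#define FSL_QDMA_CQIDR_SQT		BIT(15)		/* was 0x8000 */
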
> > > > +u64 pre_addr, pre_queue;
> > >
> > > why do we have a global?
> >
> > Let us look at how the qDMA works.
> >
> > First, status notifications for DMA jobs are reported back to the status
> > queue. The status information is carried within the command descriptor
> > status/command field, bits 120-127. The command descriptor dequeue
> > pointer advances only after the transaction has completed and the status
> > information field has been updated.
> >
> > Then, the command descriptor address field will point to the command
> > descriptor in its original format. It is the responsibility of the
> > status queue consumer to deallocate buffers as needed when the command
> > descriptor address pointer is non-zero.
> >
> > More details of the status queue can be found in the QorIQ Layerscape SoC
> > datasheet.
> >
> > So these variables are used to record the queue number and descriptor
> > address of the most recently handled status entry.
> >
> > If they were local variables, their values would be reset to zero on every
> > call, which is not what I want.
> 
> Why not store them in driver context?
> 

Okay, maybe I should move these variables into the driver's private struct?
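
Something like this, as a sketch against the existing fsl_qdma_engine (field
placement and comments to be finalized):

struct fsl_qdma_engine {
	/* ... existing fields ... */

	/* Most recently handled status-queue entry, tracked per engine
	 * instead of in file-scope globals. */
	u64			pre_addr;	/* descriptor address from the status entry */
	u64			pre_queue;	/* queue number from the status entry */
};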

> > > > +static void fsl_qdma_comp_fill_memcpy(struct fsl_qdma_comp *fsl_comp,
> > > > +					dma_addr_t dst, dma_addr_t src, u32 len)
> > > > +{
> > > > +	struct fsl_qdma_format *ccdf, *csgf_desc, *csgf_src, *csgf_dest;
> > > > +	struct fsl_qdma_sdf *sdf;
> > > > +	struct fsl_qdma_ddf *ddf;
> > > > +
> > > > +	ccdf = (struct fsl_qdma_format *)fsl_comp->virt_addr;
> > >
> > > Cast are not required to/away from void
> > >
> >
> > Do you mean: remove the cast?
> 
> yes and it would work
> 
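i.e. I will just assign it directly, since virt_addr is a void pointer:

	ccdf = fsl_comp->virt_addr;
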
> > > > +static struct fsl_qdma_comp *fsl_qdma_request_enqueue_desc(
> > > > +					struct fsl_qdma_chan *fsl_chan,
> > > > +					unsigned int dst_nents,
> > > > +					unsigned int src_nents)
> > > > +{
> > > > +	struct fsl_qdma_comp *comp_temp;
> > > > +	struct fsl_qdma_sg *sg_block;
> > > > +	struct fsl_qdma_queue *queue = fsl_chan->queue;
> > > > +	unsigned long flags;
> > > > +	unsigned int dst_sg_entry_block, src_sg_entry_block, sg_entry_total, i;
> > > > +
> > > > +	spin_lock_irqsave(&queue->queue_lock, flags);
> > > > +	if (list_empty(&queue->comp_free)) {
> > > > +		spin_unlock_irqrestore(&queue->queue_lock, flags);
> > > > +		comp_temp = kzalloc(sizeof(*comp_temp), GFP_KERNEL);
> > > > +		if (!comp_temp)
> > > > +			return NULL;
> > > > +		comp_temp->virt_addr = dma_pool_alloc(queue->comp_pool,
> > > > +						      GFP_KERNEL,
> > > > +						      &comp_temp->bus_addr);
> > > > +		if (!comp_temp->virt_addr) {
> > > > +			kfree(comp_temp);
> > > > +			return NULL;
> > > > +		}
> > > > +
> > > > +	} else {
> > > > +		comp_temp = list_first_entry(&queue->comp_free,
> > > > +					     struct fsl_qdma_comp,
> > > > +					     list);
> > > > +		list_del(&comp_temp->list);
> > > > +		spin_unlock_irqrestore(&queue->queue_lock, flags);
> > > > +	}
> > > > +
> > > > +	if (dst_nents != 0)
> > > > +		dst_sg_entry_block = dst_nents /
> > > > +					(FSL_QDMA_EXPECT_SG_ENTRY_NUM - 1) + 1;
> > >
> > > DIV_ROUND_UP()?
> > >
> >
> > The DIV_ROUND_UP() definition is:
> >
> > #define DIV_ROUND_UP __KERNEL_DIV_ROUND_UP
> > #define __KERNEL_DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
> >
> > But the expression here is 'n / (d - 1) + 1', which is not the same thing?
> 
> Yeah, this doesn't look apt here; check if any other macro suits...
> 

Got it.
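
For my own notes, the arithmetic here (not final code, just the equivalence):

	/* current form in the patch */
	dst_sg_entry_block = dst_nents / (FSL_QDMA_EXPECT_SG_ENTRY_NUM - 1) + 1;

	/* same value written as a round-up: floor(n/m) + 1 == ceil((n + 1)/m) */
	dst_sg_entry_block = DIV_ROUND_UP(dst_nents + 1,
					  FSL_QDMA_EXPECT_SG_ENTRY_NUM - 1);

	/* note: plain DIV_ROUND_UP(dst_nents, FSL_QDMA_EXPECT_SG_ENTRY_NUM - 1)
	 * gives one block fewer when dst_nents is an exact multiple of the
	 * divisor (and when dst_nents == 0), so it would change behavior. */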

> > > > +			memset(sg_block->virt_addr, 0,
> > > > +					FSL_QDMA_EXPECT_SG_ENTRY_NUM * 16);
> > >
> > > why FSL_QDMA_EXPECT_SG_ENTRY_NUM * 16? and not what you
> allocated?
> > >
> >
> > See line 497.
> > The sg_pool buffer size created is FSL_QDMA_EXPECT_SG_ENTRY_NUM * 16.
> 
> Please document this
> 
Do you mean I should add a comment documenting this size?
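
Maybe something like this (the define name is just a suggestion; 16 is the
per-entry size in bytes used when the sg_pool is created):

/* size in bytes of one qDMA scatter-gather entry; sg_pool buffers hold
 * FSL_QDMA_EXPECT_SG_ENTRY_NUM of them */
#define FSL_QDMA_SG_ENTRY_SIZE		16

			memset(sg_block->virt_addr, 0,
			       FSL_QDMA_EXPECT_SG_ENTRY_NUM * FSL_QDMA_SG_ENTRY_SIZE);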

> > > > +static int fsl_qdma_queue_transfer_complete(struct fsl_qdma_engine *fsl_qdma)
> > > > +{
> > > > +	struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue;
> > > > +	struct fsl_qdma_queue *fsl_status = fsl_qdma->status;
> > > > +	struct fsl_qdma_queue *temp_queue;
> > > > +	struct fsl_qdma_comp *fsl_comp;
> > > > +	struct fsl_qdma_format *status_addr;
> > > > +	struct fsl_qdma_format *csgf_src;
> > > > +	void __iomem *block = fsl_qdma->block_base;
> > > > +	u32 reg, i;
> > > > +	bool duplicate, duplicate_handle;
> > > > +
> > > > +	while (1) {
> > > > +		duplicate = 0;
> > > > +		duplicate_handle = 0;
> > > > +		reg = qdma_readl(fsl_qdma, block + FSL_QDMA_BSQSR);
> > > > +		if (reg & FSL_QDMA_BSQSR_QE)
> > > > +			return 0;
> > > > +		status_addr = fsl_status->virt_head;
> > > > +		if (qdma_ccdf_get_queue(status_addr) == pre_queue &&
> > > > +			qdma_ccdf_addr_get64(status_addr) == pre_addr)
> > > > +			duplicate = 1;
> > > > +		i = qdma_ccdf_get_queue(status_addr);
> > > > +		pre_queue = qdma_ccdf_get_queue(status_addr);
> > > > +		pre_addr = qdma_ccdf_addr_get64(status_addr);
> > > > +		temp_queue = fsl_queue + i;
> > > > +		spin_lock(&temp_queue->queue_lock);
> > > > +		if (list_empty(&temp_queue->comp_used)) {
> > > > +			if (duplicate)
> > > > +				duplicate_handle = 1;
> > > > +			else {
> > > > +				spin_unlock(&temp_queue->queue_lock);
> > > > +				return -1;
> > >
> > > -1? really. You are in while(1) wouldn't break make sense here?
> > >
> >
> > Do you mean: use break?
> 
> It means two things. First, we don't use return -1; if it is a valid error,
> return a proper kernel error code. Second, since it is a while loop, do you
> want to use a break?
> 
Got it, thanks.
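
To confirm my understanding, something along these lines (the errno is only
illustrative; I will pick the proper one, or break if this is not a hard error):

		if (list_empty(&temp_queue->comp_used)) {
			if (duplicate) {
				duplicate_handle = 1;
			} else {
				spin_unlock(&temp_queue->queue_lock);
				return -EAGAIN;	/* or break out of the while (1) loop */
			}
		}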

> --
> ~Vinod


Thread overview: 16+ messages
2018-05-14 12:03 [v4 1/6] dmaengine: fsldma: Replace DMA_IN/OUT by FSL_DMA_IN/OUT Wen He
2018-05-14 12:03 ` [v4 2/6] dmaengine: fsl-qdma: Add qDMA controller driver for Layerscape SoCs Wen He
2018-05-17  6:04   ` Vinod
2018-05-17 11:27     ` Wen He
2018-05-18  4:21       ` Vinod
2018-05-18 10:04         ` Wen He [this message]
2018-05-21  9:09           ` Vinod Koul
2018-05-21  9:49             ` Wen He
2018-05-14 12:03 ` [v4 3/6] dt-bindings: fsl-qdma: Add NXP Layerscpae qDMA controller bindings Wen He
2018-05-18 21:26   ` Rob Herring
2018-05-21  5:52     ` Wen He
2018-05-23 19:59       ` Rob Herring
2018-05-24  7:20         ` Wen He
2018-05-14 12:03 ` [v4 4/6] arm64: dts: ls1043a: add qdma device tree nodes Wen He
2018-05-14 12:03 ` [v4 5/6] arm64: dts: ls1046a: " Wen He
2018-05-14 12:03 ` [v4 6/6] arm: dts: ls1021a: " Wen He
