From: Lucas Stach <l.stach@pengutronix.de>
To: Robin Gong <yibin.gong@nxp.com>,
	"vkoul@kernel.org" <vkoul@kernel.org>,
	"dan.j.williams@intel.com" <dan.j.williams@intel.com>,
	"s.hauer@pengutronix.de" <s.hauer@pengutronix.de>,
	"linux@armlinux.org.uk" <linux@armlinux.org.uk>
Cc: "dmaengine@vger.kernel.org" <dmaengine@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	dl-linux-imx <linux-imx@nxp.com>,
	"kernel@pengutronix.de" <kernel@pengutronix.de>,
	"linux-arm-kernel@lists.infradead.org" 
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [PATCH v3 3/3] dmaengine: imx-sdma: allocate max 20 bds for one transfer
Date: Mon, 06 Aug 2018 14:29:50 +0200
Message-ID: <1533558590.2809.1.camel@pengutronix.de>
In-Reply-To: <DB6PR04MB3223BB3159EB8D376BE64B3E89200@DB6PR04MB3223.eurprd04.prod.outlook.com>

Hi Robin,

On Mon, 2018-08-06 at 08:04 +0000, Robin Gong wrote:
> Hello Lucas,
> 	Any comments on my reply?

So I've looked at this again and sadly I need to NACK this patch. It is
a complete abuse of the dma_pool API, and even the patch that introduced
the dma_pool usage in this driver is wrong and should be reverted.

The SDMA engine needs contiguous buffer descriptors for each channel,
which is something the dma_pool abstraction isn't able to provide. So
either the dma_pool implementation needs to be extended to support this
use case, or you can't use it in the sdma driver at all. Adding hacks
that abuse the API in order to cram a dma_pool into the sdma driver is
not a valid way to implement things for upstream.
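
To illustrate the contract difference, here is a minimal sketch
(illustrative only, not the driver's actual code; the helper names are
made up, only the dma_pool/dma_alloc_coherent calls are real kernel API):

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/dmapool.h>

/* Two objects from a dma_pool: the API guarantees each object's size
 * and alignment, but says nothing about where objects sit relative to
 * each other, so bd1 does not necessarily follow bd0 in memory. */
static void pool_gives_no_adjacency(struct dma_pool *pool)
{
	dma_addr_t phys0, phys1;
	void *bd0 = dma_pool_alloc(pool, GFP_NOWAIT, &phys0);
	void *bd1 = dma_pool_alloc(pool, GFP_NOWAIT, &phys1);

	/* Even on success, phys1 == phys0 + object_size need not hold. */
	if (bd1)
		dma_pool_free(pool, bd1, phys1);
	if (bd0)
		dma_pool_free(pool, bd0, phys0);
}

/* One conventional way to get n contiguous descriptors instead: a
 * single coherent allocation sized for the whole array. */
static void *alloc_contiguous_bds(struct device *dev, int n,
				  size_t bd_size, dma_addr_t *phys)
{
	return dma_alloc_coherent(dev, n * bd_size, phys, GFP_NOWAIT);
}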

Regards,
Lucas

> > -----Original Message-----
> > From: Robin Gong
> > Sent: July 25, 2018 9:25
> > To: 'Lucas Stach' <l.stach@pengutronix.de>; vkoul@kernel.org;
> > dan.j.williams@intel.com; s.hauer@pengutronix.de; linux@armlinux.org.uk
> > Cc: dmaengine@vger.kernel.org; dl-linux-imx <linux-imx@nxp.com>;
> > kernel@pengutronix.de; linux-arm-kernel@lists.infradead.org;
> > linux-kernel@vger.kernel.org
> > Subject: RE: [PATCH v3 3/3] dmaengine: imx-sdma: allocate max 20 bds
> > for one transfer
> > 
> > > -----Original Message-----
> > > From: Lucas Stach [mailto:l.stach@pengutronix.de]
> > > Sent: July 24, 2018 17:22
> > > To: Robin Gong <yibin.gong@nxp.com>; vkoul@kernel.org;
> > > dan.j.williams@intel.com; s.hauer@pengutronix.de; linux@armlinux.org.uk
> > > Cc: dmaengine@vger.kernel.org; dl-linux-imx <linux-imx@nxp.com>;
> > > kernel@pengutronix.de; linux-arm-kernel@lists.infradead.org;
> > > linux-kernel@vger.kernel.org
> > > Subject: Re: [PATCH v3 3/3] dmaengine: imx-sdma: allocate max 20 bds
> > > for one transfer
> > > 
> > > On Mon, 2018-07-23 at 13:55 +0000, Robin Gong wrote:
> > > > > -----Original Message-----
> > > > > From: Lucas Stach [mailto:l.stach@pengutronix.de]
> > > > > Sent: July 23, 2018 18:54
> > > > > To: Robin Gong <yibin.gong@nxp.com>; vkoul@kernel.org;
> > > > > dan.j.williams@intel.com; s.hauer@pengutronix.de; linux@armlinux.org.uk
> > > > > Cc: dmaengine@vger.kernel.org; dl-linux-imx <linux-imx@nxp.com>;
> > > > > kernel@pengutronix.de; linux-arm-kernel@lists.infradead.org;
> > > > > linux-kernel@vger.kernel.org
> > > > > Subject: Re: [PATCH v3 3/3] dmaengine: imx-sdma: allocate max 20 bds
> > > > > for one transfer
> > > > > 
> > > > > On Tue, 2018-07-24 at 01:46 +0800, Robin Gong wrote:
> > > > > > If multi-bds used in one transfer, all bds should be consisten
> > > > > > memory. To easily follow it, enlarge the dma pool size into 20
> > > > > > bds, and it will report error if the number of bds is over than
> > > > > > 20. For dmatest, the max count for single transfer is NUM_BD *
> > > > > > SDMA_BD_MAX_CNT = 20 * 65535 = ~1.28MB.
> > > > > 
> > > > > Both the commit message and the comment need a lot more care to
> > > > > actually tell what this commit is trying to achieve. Currently I
> > > > > don't follow at all. What does "consisten" mean? Do you mean BDs
> > > > > should be contiguous in memory?
> > > > 
> > > > Yes, the BDs should be contiguous in memory, one after another.
> > > 
> > > Okay, but this isn't what the code change does. By increasing the
> > > size parameter of the dma pool you just allocate 20 times as much
> > > memory as needed for each BD. So actually the BDs end up being very
> > > non-contiguous in memory, as there are now holes of 19 BD sizes
> > > between the start of each BD.
> > Please note that the BD memory is allocated from the dma pool only
> > once per transfer, even when there are multiple BDs. That differs
> > from the common use case of allocating from the dma pool once per
> > BD. The reason for doing this is to make sure all BD memory is
> > contiguous for a single transfer, whether it uses one BD or several,
> > since two calls to dma_pool_alloc() cannot guarantee contiguous
> > addresses, especially in multi-threaded cases such as dmatest with
> > 'threads_per_chan = 5'. You can set 'norandom = true' and
> > 'test_buf_size = 163840' in dmatest.c to see the issue that appears
> > without this patch.
> > > 
> > > So something isn't right with this change.
> > 
> > I think this patch is the easy way to resolve the BD contiguity
> > issue, but the cost is allocating more dma pool memory that may
> > never be used.
> > > 
> > > Regards,
> > > Lucas
> > > 
> > > > > 
> > > > > What do you gain by over-allocating each BD by a factor of 20?
> > > > 
> > > > I guess dma_pool_alloc() would return an error in such a case,
> > > > which would then make the DMA transfer setup fail.
> > > > > 
> > > > > Regards,
> > > > > Lucas
> > > > > 
> > > > > > Signed-off-by: Robin Gong <yibin.gong@nxp.com>
> > > > > > ---
> > > > > >  drivers/dma/imx-sdma.c | 17 ++++++++++++++++-
> > > > > >  1 file changed, 16 insertions(+), 1 deletion(-)
> > > > > > 
> > > > > > diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
> > > > > > index b4ec2d2..5973489 100644
> > > > > > --- a/drivers/dma/imx-sdma.c
> > > > > > +++ b/drivers/dma/imx-sdma.c
> > > > > > @@ -298,6 +298,15 @@ struct sdma_context_data {
> > > > > >  	u32  scratch7;
> > > > > >  } __attribute__ ((packed));
> > > > > > 
> > > > > > +/*
> > > > > > + * All bds in one transfer should be consitent on SDMA. To easily follow it,
> > > > > > + * just set the dma pool size as the enough bds. For example, in dmatest case,
> > > > > > + * the max 20 bds means the max for single transfer is NUM_BD *
> > > > > > + * SDMA_BD_MAX_CNT = 20 * 65535 = ~1.28MB. 20 bds supposed to be enough
> > > > > > + * basically. If it's still not enough in some specific cases, enlarge it
> > > > > > + * here. Warning message would also appear if the bd numbers is over than 20.
> > > > > > + */
> > > > > > +#define NUM_BD 20
> > > > > > 
> > > > > >  struct sdma_engine;
> > > > > > 
> > > > > > @@ -1273,7 +1282,7 @@ static int sdma_alloc_chan_resources(struct dma_chan *chan)
> > > > > >  		goto disable_clk_ahb;
> > > > > > 
> > > > > >  	sdmac->bd_pool = dma_pool_create("bd_pool", chan->device->dev,
> > > > > > -				sizeof(struct sdma_buffer_descriptor),
> > > > > > +				NUM_BD * sizeof(struct sdma_buffer_descriptor),
> > > > > >  				32, 0);
> > > > > > 
> > > > > >  	return 0;
> > > > > > 
> > > > > > @@ -1314,6 +1323,12 @@ static struct sdma_desc *sdma_transfer_init(struct sdma_channel *sdmac,
> > > > > >  {
> > > > > >  	struct sdma_desc *desc;
> > > > > > 
> > > > > > +	if (bds > NUM_BD) {
> > > > > > +		dev_err(sdmac->sdma->dev, "%d bds exceed the max %d\n",
> > > > > > +			bds, NUM_BD);
> > > > > > +		goto err_out;
> > > > > > +	}
> > > > > > +
> > > > > >  	desc = kzalloc((sizeof(*desc)), GFP_NOWAIT);
> > > > > >  	if (!desc)
> > > > > >  		goto err_out;
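
For reference, the sizing math behind the quoted comment works out like
this (a sketch; SDMA_BD_MAX_CNT is introduced as 0xffff by patch 1/3 of
this series, the helper itself is hypothetical):

#include <linux/types.h>

#define SDMA_BD_MAX_CNT	0xffff	/* 65535, max bytes per descriptor */
#define NUM_BD		20

/* Upper bound for a single transfer with the quoted patch applied:
 * 20 * 65535 = 1310700 bytes, i.e. ~1.25 MiB (the "~1.28MB" in the
 * commit message is a loose rounding of the same number). */
static inline size_t sdma_max_transfer_bytes(void)
{
	return (size_t)NUM_BD * SDMA_BD_MAX_CNT;
}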

Thread overview: 12+ messages
2018-07-23 17:46 [PATCH v3 0/3] add memcpy support for sdma Robin Gong
2018-07-23 17:46 ` [PATCH v3 1/3] dmaengine: imx-sdma: add SDMA_BD_MAX_CNT to replace '0xffff' Robin Gong
2018-07-30  5:04   ` Vinod
2018-07-23 17:46 ` [PATCH v3 2/3] dmaengine: imx-sdma: add memcpy interface Robin Gong
2018-07-30  5:04   ` Vinod
2018-07-23 17:46 ` [PATCH v3 3/3] dmaengine: imx-sdma: allocate max 20 bds for one transfer Robin Gong
2018-07-23 10:54   ` Lucas Stach
2018-07-23 13:55     ` Robin Gong
2018-07-24  9:22       ` Lucas Stach
2018-07-25  1:24         ` Robin Gong
2018-08-06  8:04           ` Robin Gong
2018-08-06 12:29             ` Lucas Stach [this message]
