linux-kernel.vger.kernel.org archive mirror
From: Vinod Koul <vkoul@kernel.org>
To: Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
Cc: Dan Williams <dan.j.williams@intel.com>,
	Masahiro Yamada <yamada.masahiro@socionext.com>,
	Rob Herring <robh+dt@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	dmaengine@vger.kernel.org, devicetree@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	Masami Hiramatsu <masami.hiramatsu@linaro.org>,
	Jassi Brar <jaswinder.singh@linaro.org>
Subject: Re: [PATCH 2/2] dmaengine: uniphier-xdmac: Add UniPhier external DMA controller driver
Date: Fri, 27 Dec 2019 12:04:11 +0530	[thread overview]
Message-ID: <20191227063411.GG3006@vkoul-mobl> (raw)
In-Reply-To: <1576630620-1977-3-git-send-email-hayashi.kunihiko@socionext.com>

On 18-12-19, 09:57, Kunihiko Hayashi wrote:
> This adds external DMA controller driver implemented in Socionext
> UniPhier SoCs. This driver supports DMA_MEMCPY and DMA_SLAVE modes.
> 
> Since this driver does not support transfer sizes unaligned to the
> burst width, 'src_maxburst' or 'dst_maxburst' of

You mean the driver does not support any unaligned bursts?

> +static int uniphier_xdmac_probe(struct platform_device *pdev)
> +{
> +	struct uniphier_xdmac_device *xdev;
> +	struct device *dev = &pdev->dev;
> +	struct dma_device *ddev;
> +	int irq;
> +	int nr_chans;
> +	int i, ret;
> +
> +	if (of_property_read_u32(dev->of_node, "dma-channels", &nr_chans))
> +		return -EINVAL;
> +	if (nr_chans > XDMAC_MAX_CHANS)
> +		nr_chans = XDMAC_MAX_CHANS;
> +
> +	xdev = devm_kzalloc(dev, struct_size(xdev, channels, nr_chans),
> +			    GFP_KERNEL);
> +	if (!xdev)
> +		return -ENOMEM;
> +
> +	xdev->nr_chans = nr_chans;
> +	xdev->reg_base = devm_platform_ioremap_resource(pdev, 0);
> +	if (IS_ERR(xdev->reg_base))
> +		return PTR_ERR(xdev->reg_base);
> +
> +	ddev = &xdev->ddev;
> +	ddev->dev = dev;
> +	dma_cap_zero(ddev->cap_mask);
> +	dma_cap_set(DMA_MEMCPY, ddev->cap_mask);
> +	dma_cap_set(DMA_SLAVE, ddev->cap_mask);
> +	ddev->src_addr_widths = UNIPHIER_XDMAC_BUSWIDTHS;
> +	ddev->dst_addr_widths = UNIPHIER_XDMAC_BUSWIDTHS;
> +	ddev->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV) |
> +			   BIT(DMA_MEM_TO_MEM);
> +	ddev->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
> +	ddev->max_burst = XDMAC_MAX_WORDS;
> +	ddev->device_free_chan_resources = uniphier_xdmac_free_chan_resources;
> +	ddev->device_prep_dma_memcpy = uniphier_xdmac_prep_dma_memcpy;
> +	ddev->device_prep_slave_sg = uniphier_xdmac_prep_slave_sg;
> +	ddev->device_config = uniphier_xdmac_slave_config;
> +	ddev->device_terminate_all = uniphier_xdmac_terminate_all;
> +	ddev->device_synchronize = uniphier_xdmac_synchronize;
> +	ddev->device_tx_status = dma_cookie_status;
> +	ddev->device_issue_pending = uniphier_xdmac_issue_pending;
> +	INIT_LIST_HEAD(&ddev->channels);
> +
> +	for (i = 0; i < nr_chans; i++) {
> +		ret = uniphier_xdmac_chan_init(xdev, i);
> +		if (ret) {
> +			dev_err(dev,
> +				"Failed to initialize XDMAC channel %d\n", i);
> +			return ret;

so on error for channel N we leave N-1 channels initialized?

> +static int uniphier_xdmac_remove(struct platform_device *pdev)
> +{
> +	struct uniphier_xdmac_device *xdev = platform_get_drvdata(pdev);
> +	struct dma_device *ddev = &xdev->ddev;
> +	struct dma_chan *chan;
> +	int ret;
> +
> +	/*
> +	 * Before reaching here, almost all descriptors have been freed by the
> +	 * ->device_free_chan_resources() hook. However, each channel might
> +	 * be still holding one descriptor that was on-flight at that moment.
> +	 * Terminate it to make sure this hardware is no longer running. Then,
> +	 * free the channel resources once again to avoid memory leak.
> +	 */
> +	list_for_each_entry(chan, &ddev->channels, device_node) {
> +		ret = dmaengine_terminate_sync(chan);
> +		if (ret)
> +			return ret;
> +		uniphier_xdmac_free_chan_resources(chan);

Terminating sounds okay-ish, but not the freeing here.
->free_chan_resources() should have been called already, and that
should ensure that termination is already done...

-- 
~Vinod


Thread overview: 8+ messages
2019-12-18  0:56 [PATCH 0/2] dmaengine: Add UniPhier XDMAC driver Kunihiko Hayashi
2019-12-18  0:56 ` [PATCH 1/2] dt-bindings: dmaengine: Add UniPhier external DMA controller bindings Kunihiko Hayashi
2020-01-08  3:55   ` Rob Herring
2020-01-09 12:20     ` Kunihiko Hayashi
2019-12-18  0:57 ` [PATCH 2/2] dmaengine: uniphier-xdmac: Add UniPhier external DMA controller driver Kunihiko Hayashi
2019-12-27  6:34   ` Vinod Koul [this message]
2020-01-09 12:12     ` Kunihiko Hayashi
2020-01-10  8:01       ` Vinod Koul
