From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
To: Eugeniy Paltsev, dmaengine@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, devicetree@vger.kernel.org,
    linux-snps-arc@lists.infradead.org, Dan Williams, Vinod Koul,
    Mark Rutland, Rob Herring, Alexey Brodkin
Subject: Re: [PATCH 2/2] dmaengine: Add DW AXI DMAC driver
Date: Wed, 25 Jan 2017 19:25:06 +0200
Message-ID: <1485365106.2133.331.camel@linux.intel.com>
In-Reply-To: <1485358457-22957-3-git-send-email-Eugeniy.Paltsev@synopsys.com>
References: <1485358457-22957-1-git-send-email-Eugeniy.Paltsev@synopsys.com>
 <1485358457-22957-3-git-send-email-Eugeniy.Paltsev@synopsys.com>
Organization: Intel Finland Oy

On Wed, 2017-01-25 at 18:34 +0300, Eugeniy Paltsev wrote:
> This patch adds support for the DW AXI DMAC controller.
>
> DW AXI DMAC is a part of upcoming development board from Synopsys.
>
> In this driver implementation only DMA_MEMCPY and DMA_SG transfers
> are supported.

A few more comments, on top of those not yet addressed/answered.
> +static inline void axi_chan_disable(struct axi_dma_chan *chan)
> +{
> +	u32 val;
> +
> +	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
> +	val &= ~(BIT(chan->id) << DMAC_CHAN_EN_SHIFT);
> +	val |=  (BIT(chan->id) << DMAC_CHAN_EN_WE_SHIFT);
> +	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
> +}
> +
> +static inline void axi_chan_enable(struct axi_dma_chan *chan)
> +{
> +	u32 val;
> +
> +	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
> +	val |= (BIT(chan->id) << DMAC_CHAN_EN_SHIFT |
> +		BIT(chan->id) << DMAC_CHAN_EN_WE_SHIFT);

Redundant parens.

> +	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
> +}

> +static void axi_desc_put(struct axi_dma_desc *desc)
> +{
> +	struct axi_dma_chan *chan = desc->chan;
> +	struct dw_axi_dma *dw = chan->chip->dw;
> +	struct axi_dma_desc *child, *_next;
> +	unsigned int descs_put = 0;
> +
> +	if (unlikely(!desc))
> +		return;

Would that ever be the case? (And note that desc is already
dereferenced above, before this NULL check.)

> +static void dma_chan_issue_pending(struct dma_chan *dchan)
> +{
> +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> +	unsigned long flags;
> +
> +	dev_dbg(dchan2dev(dchan), "%s: %s\n", __func__,
> +		axi_chan_name(chan));

Messages like this are kinda redundant. Either you use the function
tracer and see them anyway, or you are using Dynamic Debug, which may
include the function name. Basically you wrote an equivalent of

	dev_dbg(dev, "%s\n", channame);

> +
> +	spin_lock_irqsave(&chan->vc.lock, flags);
> +	if (vchan_issue_pending(&chan->vc))
> +		axi_chan_start_first_queued(chan);
> +	spin_unlock_irqrestore(&chan->vc.lock, flags);

...and taking into account the function itself, one might expect
something useful printed in _start_first_queued() instead. For some
cases there is also dev_vdbg().
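To illustrate the "redundant parens" remark on axi_chan_enable() above,
here is a userspace sketch of the same bit manipulation. The macro names
mirror the driver; the shift values are made up for the example. Since
`<<` binds tighter than `|`, which binds tighter than `|=`, the outer
parentheses around the `|=` operand add nothing:

```c
#include <stdint.h>

/* Illustrative values; the real shifts come from the DMAC register layout. */
#define BIT(n)                 (1u << (n))
#define DMAC_CHAN_EN_SHIFT     0
#define DMAC_CHAN_EN_WE_SHIFT  8

/* Same effect as the patch's axi_chan_enable(), without the extra parens:
 * the shifts bind tighter than '|', which binds tighter than '|='. */
static uint32_t chan_enable_bits(uint32_t val, unsigned int id)
{
	val |= BIT(id) << DMAC_CHAN_EN_SHIFT |
	       BIT(id) << DMAC_CHAN_EN_WE_SHIFT;
	return val;
}

/* The disable path keeps its parens: they are needed under the '~'. */
static uint32_t chan_disable_bits(uint32_t val, unsigned int id)
{
	val &= ~(BIT(id) << DMAC_CHAN_EN_SHIFT);
	val |= BIT(id) << DMAC_CHAN_EN_WE_SHIFT;
	return val;
}
```

Enabling channel 2 sets bit 2 (enable) and bit 10 (write-enable) in one
register write, exactly as the parenthesized original does.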
> +}
> +
> +static int dma_chan_alloc_chan_resources(struct dma_chan *dchan)
> +{
> +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> +
> +	/* ASSERT: channel is idle */
> +	if (axi_chan_is_hw_enable(chan)) {
> +		dev_err(chan2dev(chan), "%s is non-idle!\n",
> +			axi_chan_name(chan));
> +		return -EBUSY;
> +	}
> +
> +	dev_dbg(dchan2dev(dchan), "%s: allocating\n", axi_chan_name(chan));

Can you try to enable DMADEVICES_DEBUG and VDEBUG and see what is
useful and what is not? Give the function tracer a chance as well.

> +static void dma_chan_free_chan_resources(struct dma_chan *dchan)
> +{
> +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> +
> +	/* ASSERT: channel is idle */
> +	if (axi_chan_is_hw_enable(chan))
> +		dev_err(dchan2dev(dchan), "%s is non-idle!\n",
> +			axi_chan_name(chan));

Yeah, as I said, dw_dmac is not a good example. And this one can be
managed by runtime PM, I suppose.

> +
> +/* TODO: REVISIT: how we should choose AXI master for mem-to-mem
> transfer? */

Read the datasheet for the SoC/platform? For dw_dmac it is chosen in
accordance with the hardware topology.

> +	while (true) {

Usually this makes readability worse and raises a red flag about the
code.

> 		/* Manage transfer list (xfer_list) */
> +		if (!first) {
> +			first = desc;
> +		} else {
> +			list_add_tail(&desc->xfer_list, &first->xfer_list);
> +			write_desc_llp(prev, desc->vd.tx.phys | lms);
> +		}
> +		prev = desc;
> +
> +		/* update the lengths and addresses for the next loop cycle */
> +		dst_len -= xfer_len;
> +		src_len -= xfer_len;
> +		dst_adr += xfer_len;
> +		src_adr += xfer_len;
> +
> +		total_len += xfer_len;

I would suggest leaving this to the caller. At some point, if no one
else does it faster than me, I would like to introduce something like
struct dma_parms per DMA channel, to allow the caller to prepare an SG
list suitable for the DMA device.
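For the `while (true)` point above: the loop is really bounded by the
remaining length, so it can be written with an explicit condition. A
userspace sketch of just the length/address bookkeeping, with a
hypothetical MAX_XFER_LEN standing in for the hardware block limit the
driver would read from its parameters:

```c
#include <stddef.h>

/* Hypothetical per-block limit; the real driver derives it from the
 * DMAC hardware parameters. Only the bookkeeping is modeled here. */
#define MAX_XFER_LEN 4096u

/* Counts how many hardware blocks a 'len'-byte copy splits into,
 * advancing the addresses the way the driver's loop does. The loop
 * condition replaces the open-ended while (true). */
static unsigned int count_blocks(size_t src_adr, size_t dst_adr, size_t len)
{
	size_t total_len = 0;
	unsigned int blocks = 0;

	while (total_len < len) {
		size_t xfer_len = len - total_len;

		if (xfer_len > MAX_XFER_LEN)
			xfer_len = MAX_XFER_LEN;

		/* update the lengths and addresses for the next cycle */
		dst_adr += xfer_len;
		src_adr += xfer_len;
		total_len += xfer_len;
		blocks++;
	}
	return blocks;
}
```

With the bound in the loop header, the reader sees the termination
condition at a glance instead of hunting for a break inside the body.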
> +
> +static struct dma_async_tx_descriptor *
> +dma_chan_prep_dma_memcpy(struct dma_chan *dchan, dma_addr_t dest,
> +			 dma_addr_t src, size_t len, unsigned long flags)
> +{

Do you indeed have a use case for this, except debugging and testing?

> +	unsigned int nents = 1;
> +	struct scatterlist dst_sg;
> +	struct scatterlist src_sg;
> +
> +	sg_init_table(&dst_sg, nents);
> +	sg_init_table(&src_sg, nents);
> +
> +	sg_dma_address(&dst_sg) = dest;
> +	sg_dma_address(&src_sg) = src;
> +
> +	sg_dma_len(&dst_sg) = len;
> +	sg_dma_len(&src_sg) = len;
> +
> +	/* Implement memcpy transfer as sg transfer with single list */
> +	return dma_chan_prep_dma_sg(dchan, &dst_sg, nents,
> +				    &src_sg, nents, flags);

One line?

> +}
> +
> +static void axi_chan_dump_lli(struct axi_dma_chan *chan,
> +			      struct axi_dma_desc *desc)
> +{
> +	dev_err(dchan2dev(&chan->vc.chan),
> +		"SAR: 0x%x DAR: 0x%x LLP: 0x%x BTS 0x%x CTL: 0x%x:%08x",
> +		le32_to_cpu(desc->lli.sar_lo),
> +		le32_to_cpu(desc->lli.dar_lo),
> +		le32_to_cpu(desc->lli.llp_lo),
> +		le32_to_cpu(desc->lli.block_ts_lo),
> +		le32_to_cpu(desc->lli.ctl_hi),
> +		le32_to_cpu(desc->lli.ctl_lo));

I hope at some point ARC (and other architectures which will use this
IP) can implement an MMIO tracer.

> +}

> +		axi_chan_dump_lli(chan, desc);
> +}
> +
> +

Extra line.

> +static void axi_chan_handle_err(struct axi_dma_chan *chan, u32 status)
> +{

> +
> +static int dma_chan_pause(struct dma_chan *dchan)
> +{
> +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> +	unsigned long flags;
> +	unsigned int timeout = 20; /* timeout iterations */
> +	int ret = -EAGAIN;
> +	u32 val;
> +
> +	spin_lock_irqsave(&chan->vc.lock, flags);
> +
> +	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
> +	val |= (BIT(chan->id) << DMAC_CHAN_SUSP_SHIFT |
> +		BIT(chan->id) << DMAC_CHAN_SUSP_WE_SHIFT);

Redundant parens.
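To make the memcpy-as-single-entry-SG pattern discussed above concrete
(and the "One line?" suggestion), here is a userspace sketch with mock
stand-ins for the kernel scatterlist; all types and helpers here are
illustrative, not the real dmaengine API:

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal stand-in for the kernel's struct scatterlist. */
struct scatterlist {
	uint64_t dma_address;
	size_t dma_length;
};

/* Mock result type, standing in for a prepared descriptor. */
struct xfer {
	uint64_t src, dst;
	size_t len;
};

/* Models the SG prep consuming one-entry source/destination lists. */
static struct xfer prep_sg(const struct scatterlist *dst,
			   const struct scatterlist *src)
{
	struct xfer x = { src->dma_address, dst->dma_address,
			  src->dma_length };
	return x;
}

/* Memcpy prep reduces to two single-entry lists; the final call then
 * fits on one line, as the review suggests. */
static struct xfer prep_memcpy(uint64_t dest, uint64_t src, size_t len)
{
	struct scatterlist dst_sg = { dest, len };
	struct scatterlist src_sg = { src, len };

	return prep_sg(&dst_sg, &src_sg);
}
```

The point is only the shape of the wrapper: one entry per side, same
length on both, and no separate nents bookkeeping needed.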
> +	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
> +
> +	while (timeout--) {
> +		if (axi_chan_irq_read(chan) & DWAXIDMAC_IRQ_SUSPENDED) {
> +			axi_chan_irq_clear(chan, DWAXIDMAC_IRQ_SUSPENDED);

You may move this invariant out of the loop. Makes the code cleaner.

> +			ret = 0;
> +			break;
> +		}
> +		udelay(2);
> +	}
> +
> +	chan->is_paused = true;

This is indeed a property of the channel. That said, you may do a trick
and use the descriptor status for it. Your channel and driver, I'm
sure, can't serve descriptors in interleaved mode. So, that makes
channel(paused) == active descriptor(paused). The trick allows making
the code cleaner.

> +	ret = device_property_read_u32_array(dev, "priority", carr,

I'm not sure you will have a use case for that. Have you?

> +	ret = devm_request_irq(chip->dev, chip->irq, dw_axi_dma_intretupt,
> +			       IRQF_SHARED, DRV_NAME, chip);
> +	if (ret)
> +		return ret;

You can't do this unless you are using a threaded IRQ handler without
any tasklets involved. There was a nice mail by Thomas explaining the
problem you have there. It's a bug in your code.

> +static int __init dw_init(void)
> +{
> +	return platform_driver_register(&dw_driver);
> +}
> +subsys_initcall(dw_init);

Hmm... Why can't it be just a platform driver? You describe the
dependency in Device Tree, and if something goes wrong you may utilize
-EPROBE_DEFER.
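On hoisting the invariant out of the pause loop: a userspace model of
the suspend polling, with the IRQ clear moved after the loop. The
register accessors and the SUSPENDED bit value are mocked for the
sketch; only the control flow mirrors the driver:

```c
#include <stdint.h>

#define DWAXIDMAC_IRQ_SUSPENDED (1u << 2)	/* bit value is illustrative */

/* Mocked register state and accessors standing in for the MMIO helpers. */
static uint32_t irq_status;
static uint32_t chan_irq_read(void) { return irq_status; }
static void chan_irq_clear(uint32_t mask) { irq_status &= ~mask; }
static void udelay(unsigned int us) { (void)us; }

/* The poll with the clear hoisted out: the loop only decides whether
 * the channel suspended in time; the clear happens once, afterwards. */
static int chan_pause_poll(void)
{
	unsigned int timeout = 20;	/* timeout iterations */
	int ret = -1;			/* stand-in for -EAGAIN */

	while (timeout--) {
		if (chan_irq_read() & DWAXIDMAC_IRQ_SUSPENDED) {
			ret = 0;
			break;
		}
		udelay(2);
	}

	if (!ret)
		chan_irq_clear(DWAXIDMAC_IRQ_SUSPENDED);

	return ret;
}

/* Helper to exercise the poll against a given initial status. */
static int pause_demo(uint32_t initial)
{
	irq_status = initial;
	return chan_pause_poll();
}
```

With the clear outside, the loop body shrinks to a read and a delay,
which is the readability win the review is after.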
--
Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Intel Finland Oy