From: Zhi Li <lznuaa@gmail.com>
To: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Cc: Frank Li <Frank.Li@nxp.com>,
gustavo.pimentel@synopsys.com, hongxing.zhu@nxp.com,
Lucas Stach <l.stach@pengutronix.de>,
dl-linux-imx <linux-imx@nxp.com>,
linux-pci@vger.kernel.org, dmaengine@vger.kernel.org,
vkoul@kernel.org, lorenzo.pieralisi@arm.com, robh@kernel.org,
kw@linux.com, Bjorn Helgaas <bhelgaas@google.com>,
Shawn Guo <shawnguo@kernel.org>
Subject: Re: [PATCH v3 6/6] PCI: endpoint: functions/pci-epf-test: Support PCI controller DMA
Date: Wed, 9 Mar 2022 14:44:14 -0600 [thread overview]
Message-ID: <CAHrpEqS7_QuMXJsyxXU1peKh727R-dqjOOG-kLgB85SJtrDQ+A@mail.gmail.com> (raw)
In-Reply-To: <20220309114428.GA134091@thinkpad>
On Wed, Mar 9, 2022 at 5:44 AM Manivannan Sadhasivam
<manivannan.sadhasivam@linaro.org> wrote:
>
> On Mon, Mar 07, 2022 at 04:47:50PM -0600, Frank Li wrote:
> > The DesignWare PCIe controller provides DMA support. Enable using
> > this DMA controller to transfer data.
> >
>
> Please use the term "eDMA (embedded DMA)"
>
> > The whole flow aligns with the standard DMA usage model:
> >
> > 1. Use dma_request_channel() with a filter function to find the
> >    correct RX and TX channels.
> > 2. Use dmaengine_slave_config() to configure the remote side's
> >    physical address.
> > 3. Use dmaengine_prep_slave_single() to create a transfer descriptor.
> > 4. tx_submit();
> > 5. dma_async_issue_pending();
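The five-step flow quoted above maps onto the dmaengine slave API roughly as follows. This is a hedged sketch, not the patch code: error handling is trimmed, and `filter`, `remote_bus_addr`, `local_dma_addr`, and `len` are placeholder names, so it is kernel-context code rather than a standalone program.

```c
/* Sketch of the five-step slave-DMA flow (kernel context, error
 * handling omitted; identifiers here are illustrative placeholders). */
dma_cap_mask_t mask;
struct dma_slave_config sconf = {};
struct dma_async_tx_descriptor *tx;
struct dma_chan *chan;
dma_cookie_t cookie;

dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);

/* 1. Find a matching channel via a filter function */
chan = dma_request_channel(mask, epf_dma_filter_fn, &filter);

/* 2. Configure the remote side's physical (bus) address */
sconf.direction = DMA_MEM_TO_DEV;
sconf.dst_addr = remote_bus_addr;
dmaengine_slave_config(chan, &sconf);

/* 3. Create a transfer descriptor for the local buffer */
tx = dmaengine_prep_slave_single(chan, local_dma_addr, len,
				 DMA_MEM_TO_DEV,
				 DMA_CTRL_ACK | DMA_PREP_INTERRUPT);

/* 4. Submit the descriptor (dmaengine_submit() wraps tx_submit()) */
cookie = dmaengine_submit(tx);

/* 5. Kick off the pending transfer */
dma_async_issue_pending(chan);
```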
> >
> > Tested on the i.MX8DXL platform.
> >
> > root@imx8qmmek:~# /usr/bin/pcitest -d -w
> > WRITE ( 102400 bytes): OKAY
> > root@imx8qmmek:~# /usr/bin/pcitest -d -r
> > READ ( 102400 bytes): OKAY
> >
> > WRITE => Size: 102400 bytes DMA: YES Time: 0.000180145 seconds Rate: 555108 KB/s
> > READ => Size: 102400 bytes DMA: YES Time: 0.000194397 seconds Rate: 514411 KB/s
> >
> > READ => Size: 102400 bytes DMA: NO Time: 0.013532597 seconds Rate: 7389 KB/s
> > WRITE => Size: 102400 bytes DMA: NO Time: 0.000857090 seconds Rate: 116673 KB/s
> >
> > Signed-off-by: Frank Li <Frank.Li@nxp.com>
> > ---
> > Resend added dmaengine@vger.kernel.org
> >
> > Changes from v1 to v3:
> > - none
> >
> > drivers/pci/endpoint/functions/pci-epf-test.c | 106 ++++++++++++++++--
> > 1 file changed, 96 insertions(+), 10 deletions(-)
> >
> > diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
> > index 90d84d3bc868f..22ae420c30693 100644
> > --- a/drivers/pci/endpoint/functions/pci-epf-test.c
> > +++ b/drivers/pci/endpoint/functions/pci-epf-test.c
> > @@ -52,9 +52,11 @@ struct pci_epf_test {
> > enum pci_barno test_reg_bar;
> > size_t msix_table_offset;
> > struct delayed_work cmd_handler;
> > - struct dma_chan *dma_chan;
> > + struct dma_chan *dma_chan_tx;
> > + struct dma_chan *dma_chan_rx;
> > struct completion transfer_complete;
> > bool dma_supported;
> > + bool dma_private;
> > const struct pci_epc_features *epc_features;
> > };
> >
> > @@ -105,14 +107,17 @@ static void pci_epf_test_dma_callback(void *param)
> > */
> > static int pci_epf_test_data_transfer(struct pci_epf_test *epf_test,
> > dma_addr_t dma_dst, dma_addr_t dma_src,
> > - size_t len)
> > + size_t len, dma_addr_t remote,
>
> dma_remote to align with other parameters
>
> > + enum dma_transfer_direction dir)
> > {
> > enum dma_ctrl_flags flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
> > - struct dma_chan *chan = epf_test->dma_chan;
> > + struct dma_chan *chan = (dir == DMA_DEV_TO_MEM) ? epf_test->dma_chan_tx : epf_test->dma_chan_rx;
>
> Move this to top for reverse Xmas tree order
>
> > struct pci_epf *epf = epf_test->epf;
> > struct dma_async_tx_descriptor *tx;
> > struct device *dev = &epf->dev;
> > dma_cookie_t cookie;
> > + struct dma_slave_config sconf;
>
> struct dma_slave_config sconf = {}
>
> This can save one memset() below
>
> > + dma_addr_t local = (dir == DMA_MEM_TO_DEV) ? dma_src : dma_dst;
>
> dma_local?
>
> > int ret;
> >
> > if (IS_ERR_OR_NULL(chan)) {
> > @@ -120,7 +125,20 @@ static int pci_epf_test_data_transfer(struct pci_epf_test *epf_test,
> > return -EINVAL;
> > }
> >
> > - tx = dmaengine_prep_dma_memcpy(chan, dma_dst, dma_src, len, flags);
> > + if (epf_test->dma_private) {
> > + memset(&sconf, 0, sizeof(sconf));
> > + sconf.direction = dir;
> > + if (dir == DMA_MEM_TO_DEV)
> > + sconf.dst_addr = remote;
> > + else
> > + sconf.src_addr = remote;
> > +
> > + dmaengine_slave_config(chan, &sconf);
>
> This could fail
>
> > + tx = dmaengine_prep_slave_single(chan, local, len, dir, flags);
> > + } else {
> > + tx = dmaengine_prep_dma_memcpy(chan, dma_dst, dma_src, len, flags);
> > + }
> > +
> > if (!tx) {
> > dev_err(dev, "Failed to prepare DMA memcpy\n");
> > return -EIO;
> > @@ -148,6 +166,23 @@ static int pci_epf_test_data_transfer(struct pci_epf_test *epf_test,
> > return 0;
> > }
> >
> > +struct epf_dma_filter {
> > + struct device *dev;
> > + u32 dma_mask;
> > +};
> > +
> > +static bool epf_dma_filter_fn(struct dma_chan *chan, void *node)
> > +{
> > + struct epf_dma_filter *filter = node;
> > + struct dma_slave_caps caps;
> > +
> > + memset(&caps, 0, sizeof(caps));
> > + dma_get_slave_caps(chan, &caps);
> > +
> > + return chan->device->dev == filter->dev
> > + && (filter->dma_mask & caps.directions);
>
> This will not work when read/write channel counts are greater than 1. You would
> need this patch:
>
> https://git.linaro.org/landing-teams/working/qualcomm/kernel.git/commit/?h=tracking-qcomlt-sdx55-drivers&id=c77ad9d929372b1ff495709714b24486d266a810
>
> Feel free to pick it up in next iteration
>
> > +}
> > +
> > /**
> > * pci_epf_test_init_dma_chan() - Function to initialize EPF test DMA channel
> > * @epf_test: the EPF test device that performs data transfer operation
> > @@ -160,8 +195,42 @@ static int pci_epf_test_init_dma_chan(struct pci_epf_test *epf_test)
> > struct device *dev = &epf->dev;
> > struct dma_chan *dma_chan;
> > dma_cap_mask_t mask;
> > + struct epf_dma_filter filter;
>
> Please preserve the reverse Xmas tree order
>
> > int ret;
> >
> > + filter.dev = epf->epc->dev.parent;
> > + filter.dma_mask = BIT(DMA_DEV_TO_MEM);
> > +
> > + dma_cap_zero(mask);
> > + dma_cap_set(DMA_SLAVE, mask);
> > + dma_chan = dma_request_channel(mask, epf_dma_filter_fn, &filter);
> > + if (IS_ERR(dma_chan)) {
>
> dma_request_channel() can return NULL also. So use IS_ERR_OR_NULL() for error
> check
>
> > + dev_info(dev, "Failure get built-in DMA channel, fail back to try allocate general DMA channel\n");
>
> "Failed to get private DMA channel. Falling back to generic one"
>
> > + goto fail_back_tx;
> > + }
> > +
> > + epf_test->dma_chan_rx = dma_chan;
> > +
> > + filter.dma_mask = BIT(DMA_MEM_TO_DEV);
> > + dma_chan = dma_request_channel(mask, epf_dma_filter_fn, &filter);
> > +
> > + if (IS_ERR(dma_chan)) {
> > + dev_info(dev, "Failure get built-in DMA channel, fail back to try allocate general DMA channel\n");
>
> "Failed to get private DMA channel. Falling back to generic one"
>
> > + goto fail_back_rx;
> > + }
> > +
> > + epf_test->dma_chan_tx = dma_chan;
> > + epf_test->dma_private = true;
> > +
> > + init_completion(&epf_test->transfer_complete);
>
> You could use DECLARE_COMPLETION_ONSTACK() for simplifying the completion handling.
Keep it consistent with the generic DMA code for now. It'd be better to do that after this patch series.
>
> Thanks,
> Mani
>
> > +
> > + return 0;
> > +
> > +fail_back_rx:
> > + dma_release_channel(epf_test->dma_chan_rx);
> > + epf_test->dma_chan_tx = NULL;
> > +
> > +fail_back_tx:
> > dma_cap_zero(mask);
> > dma_cap_set(DMA_MEMCPY, mask);
> >
> > @@ -174,7 +243,7 @@ static int pci_epf_test_init_dma_chan(struct pci_epf_test *epf_test)
> > }
> > init_completion(&epf_test->transfer_complete);
> >
> > - epf_test->dma_chan = dma_chan;
> > + epf_test->dma_chan_tx = epf_test->dma_chan_rx = dma_chan;
> >
> > return 0;
> > }
> > @@ -190,8 +259,17 @@ static void pci_epf_test_clean_dma_chan(struct pci_epf_test *epf_test)
> > if (!epf_test->dma_supported)
> > return;
> >
> > - dma_release_channel(epf_test->dma_chan);
> > - epf_test->dma_chan = NULL;
> > + dma_release_channel(epf_test->dma_chan_tx);
> > + if (epf_test->dma_chan_tx == epf_test->dma_chan_rx) {
> > + epf_test->dma_chan_tx = NULL;
> > + epf_test->dma_chan_rx = NULL;
> > + return;
> > + }
> > +
> > + dma_release_channel(epf_test->dma_chan_rx);
> > + epf_test->dma_chan_rx = NULL;
> > +
> > + return;
> > }
> >
> > static void pci_epf_test_print_rate(const char *ops, u64 size,
> > @@ -280,8 +358,14 @@ static int pci_epf_test_copy(struct pci_epf_test *epf_test)
> > goto err_map_addr;
> > }
> >
> > + if (epf_test->dma_private) {
> > + dev_err(dev, "Cannot transfer data using DMA\n");
> > + ret = -EINVAL;
> > + goto err_map_addr;
> > + }
> > +
> > ret = pci_epf_test_data_transfer(epf_test, dst_phys_addr,
> > - src_phys_addr, reg->size);
> > + src_phys_addr, reg->size, 0, DMA_MEM_TO_MEM);
> > if (ret)
> > dev_err(dev, "Data transfer failed\n");
> > } else {
> > @@ -363,7 +447,8 @@ static int pci_epf_test_read(struct pci_epf_test *epf_test)
> >
> > ktime_get_ts64(&start);
> > ret = pci_epf_test_data_transfer(epf_test, dst_phys_addr,
> > - phys_addr, reg->size);
> > + phys_addr, reg->size,
> > + reg->src_addr, DMA_DEV_TO_MEM);
> > if (ret)
> > dev_err(dev, "Data transfer failed\n");
> > ktime_get_ts64(&end);
> > @@ -453,8 +538,9 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test)
> > }
> >
> > ktime_get_ts64(&start);
> > +
> > ret = pci_epf_test_data_transfer(epf_test, phys_addr,
> > - src_phys_addr, reg->size);
> > + src_phys_addr, reg->size, reg->dst_addr, DMA_MEM_TO_DEV);
> > if (ret)
> > dev_err(dev, "Data transfer failed\n");
> > ktime_get_ts64(&end);
> > --
> > 2.24.0.rc1
> >