From: Peter Ujfalusi <peter.ujfalusi@ti.com>
To: Thomas Ruf <freelancer@rufusul.de>, Vinod Koul <vkoul@kernel.org>
Cc: Federico Vaga <federico.vaga@cern.ch>,
	Dave Jiang <dave.jiang@intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	<dmaengine@vger.kernel.org>, <linux-kernel@vger.kernel.org>
Subject: Re: DMA Engine: Transfer From Userspace
Date: Fri, 26 Jun 2020 13:29:23 +0300	[thread overview]
Message-ID: <1a610c67-73a4-f66d-877a-5c4d35cbf76a@ti.com> (raw)
In-Reply-To: <1666251320.1024432.1593007095381@mailbusiness.ionos.de>



On 24/06/2020 16.58, Thomas Ruf wrote:
> 
>> On 24 June 2020 at 14:07 Peter Ujfalusi <peter.ujfalusi@ti.com> wrote:
>> On 24/06/2020 12.38, Vinod Koul wrote:
>>> On 24-06-20, 11:30, Thomas Ruf wrote:
>>>
>>>> To make it short - i have two questions:
>>>> - what are the chances to revive DMA_SG?
>>>
>>> 100%, if we have an in-kernel user
>>
>> Most DMAs cannot handle differently provisioned sg_lists for src and
>> dst. Even if they could handle a non-symmetric SG setup, it would
>> require an entirely different configuration (two independent channels
>> sending the data to each other: one reads, the other writes?).
> 
> Ok, I implemented that using zynqmp_dma on a Xilinx Zynq platform (obviously ;-) and it works nicely for us.

I see. If the HW does not support it, then something along the lines of
what atc_prep_dma_sg did can be implemented for most engines.

In essence: build a new pair of sg_lists that are symmetric.
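
Something along these lines (untested sketch; the emit() callback and the
function name are made up just to illustrate the idea, and both lists are
assumed to be DMA mapped already):

#include <linux/scatterlist.h>
#include <linux/minmax.h>

/* Walk two asymmetric sg_lists and emit symmetric, length-matched chunks. */
static int split_into_symmetric_chunks(struct scatterlist *dst_sg,
					unsigned int dst_nents,
					struct scatterlist *src_sg,
					unsigned int src_nents,
					void (*emit)(dma_addr_t dst,
						     dma_addr_t src,
						     size_t len))
{
	dma_addr_t dst_addr = 0, src_addr = 0;
	size_t dst_len = 0, src_len = 0, len;

	for (;;) {
		/* refill from the next entry once the current one is consumed */
		if (!dst_len) {
			if (!dst_nents--)
				break;
			dst_addr = sg_dma_address(dst_sg);
			dst_len = sg_dma_len(dst_sg);
			dst_sg = sg_next(dst_sg);
		}
		if (!src_len) {
			if (!src_nents--)
				break;
			src_addr = sg_dma_address(src_sg);
			src_len = sg_dma_len(src_sg);
			src_sg = sg_next(src_sg);
		}

		/* the symmetric chunk is the overlap of the two current entries */
		len = min(dst_len, src_len);
		emit(dst_addr, src_addr, len);

		dst_addr += len;
		src_addr += len;
		dst_len -= len;
		src_len -= len;
	}

	return 0;
}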

> I don't think it uses two channels, from what I saw in their implementation.

I believe it breaks the transfer up internally, like atc_prep_dma_sg did.

> Of course that was on kernel 4.19.x where DMA_SG was still available.
> 
>>>> - what are the chances to get my driver for memcpy like transfers from
>>>> user space using DMA_SG upstream? ("dma-sg-proxy")
>>>
>>> pretty bleak IMHO.
>>
>> fwiw, I also get requests from time to time for DMA memcpy support
>> from user space, from companies trying to move from bare-metal code to
>> Linux.
>>
>> What could be plausible is a generic dmabuf-to-dmabuf copy driver (V4L2
>> can provide dma-buf, and DRM can as well).
>> If there is a DMA memcpy channel available, use that; otherwise use
>> some other method to do the copy. User space should not care how it is
>> done.
> 
> Yes, I'm using it together with a v4l2 capture driver and also saw the dma-buf thing, but did not find a way to bring this together with "ordinary user memory".

One of the aims of dma-buf is to share buffers between drivers and user
space (among drivers, and/or between drivers and user space), but I
might be missing something.
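
The importer side is not that much code either. A rough sketch, assuming
user space exports the capture buffer as a dma-buf fd (VIDIOC_EXPBUF in
V4L2, or a PRIME fd from DRM) and hands it to the copy driver; the
function name is made up and error unwinding is kept short:

#include <linux/dma-buf.h>
#include <linux/err.h>

static struct sg_table *import_dmabuf(struct device *dev, int fd,
				      struct dma_buf_attachment **out_attach)
{
	struct dma_buf *buf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	buf = dma_buf_get(fd);
	if (IS_ERR(buf))
		return ERR_CAST(buf);

	attach = dma_buf_attach(buf, dev);
	if (IS_ERR(attach)) {
		dma_buf_put(buf);
		return ERR_CAST(attach);
	}

	/* the resulting sg_table is what a dmaengine channel can consume */
	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		dma_buf_detach(buf, attach);
		dma_buf_put(buf);
		return sgt;
	}

	*out_attach = attach;
	return sgt;
}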

> For me the root of my problem seems to be that dma_alloc_coherent leads to uncached memory on ARM platforms.

It depends, but in most cases that is true.

> But maybe I am doing it all wrong ;-)
> 
>> Where things are going to get a bit trickier is when the copy needs to
>> be triggered by another DMA channel (completion of a frame reception
>> triggering an interleaved sub-frame extraction copy).
>> You don't want to extract from a buffer which can be modified while the
>> other channel is writing to it.
> 
> I think that would be no problem in the case of our v4l2 capture driver doing both DMAs:
> framebuffer DMA for streaming, and ZynqMP DMA (using DMA_SG) to get the data into "ordinary user memory".
> But as I wrote before, I prefer to do the "logic and management" in user space, so the capture driver just uses the first DMA and the "dma-sg-proxy" driver is only used as a memcpy replacement.
> As said, this all works fine with kernel 4.19.x, but now we are stuck :-(
> 
>> In Linux the DMA is used by the kernel, and user space can only use it
>> implicitly via standard subsystems.
>> A misused DMA can be very dangerous, and giving full access to program
>> a transfer can open a can of worms.
> 
> I fully understand that!
> But I also hope you understand that we are developing a "closed system" and do not have a problem with that at all.
> We are also willing to bring that driver upstream for anyone doing the same, but of course this should not affect the security of any desktop or server systems.
> Maybe we just need the right place for that driver?!

What might be plausible is to introduce HW offload support for
memcpy-type operations, in a similar fashion to how, for example, crypto
does it?
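
In-kernel users can already do the memcpy offload via dmaengine, so such
a framework would mostly be a thin wrapper around it. A rough, untested
sketch of the consumer side (the function name is made up, and a real
user would use a completion callback instead of the dma_sync_wait()
busy-wait):

#include <linux/dmaengine.h>

static int offload_memcpy(dma_addr_t dst, dma_addr_t src, size_t len)
{
	struct dma_async_tx_descriptor *desc;
	struct dma_chan *chan;
	dma_cap_mask_t mask;
	dma_cookie_t cookie;
	int ret = 0;

	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);

	/* any channel advertising memcpy capability will do */
	chan = dma_request_chan_by_mask(&mask);
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	desc = dmaengine_prep_dma_memcpy(chan, dst, src, len,
					 DMA_PREP_INTERRUPT);
	if (!desc) {
		ret = -ENOMEM;
		goto out;
	}

	cookie = dmaengine_submit(desc);
	if (dma_submit_error(cookie)) {
		ret = -EIO;
		goto out;
	}

	dma_async_issue_pending(chan);
	if (dma_sync_wait(chan, cookie) != DMA_COMPLETE)
		ret = -EIO;
out:
	dma_release_channel(chan);
	return ret;
}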

The issue with user-space-implemented logic is that it is not portable
between systems with different DMAs. On one DMA the setup may take
longer than a CPU copy of X bytes, while on another DMA it may take
significantly less (or more).

Choosing CPU vs DMA for a copy of a given length and setup should not be
a concern of user space.
Yes, you have a closed system with controlled parameters, but a generic
mem2mem offload framework should be usable on other setups, and the same
binary should work on different DMAs where one is not efficient below
512 bytes while another already shows benefits under 128 bytes.
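
Illustrative sketch only: the break-even length would be a per-channel
property that the framework queries or calibrates, not something the
application binary hard-codes (offload_memcpy() is the made-up helper
from the sketch above):

#include <linux/string.h>
#include <linux/types.h>

static void copy_buf(void *dst, dma_addr_t dst_dma,
		     const void *src, dma_addr_t src_dma,
		     size_t len, size_t dma_threshold)
{
	if (len < dma_threshold) {
		/* below the break-even length the CPU copy wins */
		memcpy(dst, src, len);
		return;
	}

	/* above it, hand the copy to a DMA_MEMCPY channel */
	if (offload_memcpy(dst_dma, src_dma, len))
		memcpy(dst, src, len);	/* fall back if the offload fails */
}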

> Not sure if staging would change your concerns.
> 
> Thanks and best regards,
> Thomas
> 

- Péter

Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki



Thread overview: 29+ messages
2020-06-19 22:47 DMA Engine: Transfer From Userspace Federico Vaga
2020-06-19 23:31 ` Dave Jiang
2020-06-21  7:24   ` Vinod Koul
2020-06-21 20:36     ` Federico Vaga
2020-06-21 20:45       ` Richard Weinberger
2020-06-21 22:32         ` Federico Vaga
2020-06-22  4:47       ` Vinod Koul
2020-06-22  6:57         ` Federico Vaga
2020-06-22 12:01         ` Thomas Ruf
2020-06-22 12:27           ` Richard Weinberger
2020-06-22 14:01             ` Thomas Ruf
2020-06-22 12:30           ` Federico Vaga
2020-06-22 14:03             ` Thomas Ruf
2020-06-22 15:54           ` Vinod Koul
2020-06-22 16:34             ` Thomas Ruf
2020-06-24  9:30               ` Thomas Ruf
2020-06-24  9:38                 ` Vinod Koul
2020-06-24 12:07                   ` Peter Ujfalusi
2020-06-24 13:58                     ` Thomas Ruf
2020-06-26 10:29                       ` Peter Ujfalusi [this message]
2020-06-29 15:18                         ` Thomas Ruf
2020-06-30 12:31                           ` Peter Ujfalusi
2020-07-01 16:13                             ` Thomas Ruf
2020-06-25  0:42     ` Dave Jiang
2020-06-25  8:11       ` Thomas Ruf
2020-06-26 20:08         ` Ira Weiny
2020-06-29 15:31           ` Thomas Ruf
2020-06-22  9:25 ` Federico Vaga
2020-06-22  9:42   ` Vinod Koul
