dmaengine Archive on lore.kernel.org
From: Thomas Ruf <freelancer@rufusul.de>
To: Vinod Koul <vkoul@kernel.org>
Cc: Federico Vaga <federico.vaga@cern.ch>,
	Dave Jiang <dave.jiang@intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: DMA Engine: Transfer From Userspace
Date: Wed, 24 Jun 2020 11:30:35 +0200 (CEST)
Message-ID: <2077253476.601371.1592991035969@mailbusiness.ionos.de> (raw)
In-Reply-To: <1835214773.354594.1592843644540@mailbusiness.ionos.de>


> On 22 June 2020 at 18:34 Thomas Ruf <freelancer@rufusul.de> wrote:
> 
> 
> 
> > On 22 June 2020 at 17:54 Vinod Koul <vkoul@kernel.org> wrote:
> > 
> > 
> > On 22-06-20, 14:01, Thomas Ruf wrote:
> > > > On 22 June 2020 at 06:47 Vinod Koul <vkoul@kernel.org> wrote:
> > > > 
> > > > On 21-06-20, 22:36, Federico Vaga wrote:
> > > > > On Sun, Jun 21, 2020 at 12:54:57PM +0530, Vinod Koul wrote:
> > > > > > On 19-06-20, 16:31, Dave Jiang wrote:
> > > > > > > 
> > > > > > > 
> > > > > > > On 6/19/2020 3:47 PM, Federico Vaga wrote:
> > > > > > > > Hello,
> > > > > > > >
> > > > > > > > is there the possibility of using a DMA engine channel from userspace?
> > > > > > > >
> > > > > > > > Something like:
> > > > > > > > - configure DMA using ioctl() (or whatever configuration mechanism)
> > > > > > > > - read() or write() to trigger the transfer
> > > > > > > >
> > > > > > > 
> > > > > > > I may have promised Vinod to look into possibly providing
> > > > > > > something like this in the future, but I have not gotten
> > > > > > > around to doing that yet. Currently there is no such support.
> > > > > > 
> > > > > > And I do still have serious reservations about this topic :) Opening up
> > > > > > userspace access to DMA does not sound very great from a security
> > > > > > point of view.
> > > > > 
> > > > > I was thinking about a dedicated module, and not something that the DMA engine
> > > > > offers directly. You load the module only if you need it (like the test module)
> > > > 
> > > > But loading that module would expose DMA to userspace.
> > > > > 
> > > > > > Federico, what use case do you have in mind?
> > > > > 
> > > > > Userspace drivers
> > > > 
> > > > All the more reason not to do so. Why can't a kernel driver be added
> > > > for your usage?
> > > 
> > > By chance I have written a driver allowing DMA from user space using a memcpy-like interface ;-)
> > > Now I am trying to get this code upstream, but was hit by the fact that DMA_SG has been gone since Aug 2017 :-(
> > > 
> > > Just let me introduce myself and the project:
> > > - coding in C since '91
> > > - coding in C++ since '98
> > > - a lot of stuff not relevant for this ;-)
> > > - working as a freelancer since Nov '19
> > > - implemented a "dma-sg-proxy" driver for my client in Mar/Apr '20 to copy camera frames from uncached memory to cached memory using a second DMA on a Zynq platform
> > > - last week we figured out that we cannot upgrade from "Xilinx 2019.2" (kernel 4.19.x) to "2020.1" (kernel 5.4.x) because the DMA_SG interface is gone
> > > - subscribed to dmaengine on Friday, saw the start of this discussion on Saturday
> > > - asked my client today whether it is OK to try to revive DMA_SG and get our driver upstream to avoid such problems in the future
> > 
> > DMA_SG was removed as it had no users, if we have a user (in-kernel) we
> > can certainly revert that removal patch.
> 
> Yeah, I already understood that.
> 
> > > 
> > > Here is the struct for the ioctl:
> > > 
> > > typedef struct {
> > >   unsigned int struct_size;
> > >   const void *src_user_ptr;
> > >   void *dst_user_ptr;
> > >   unsigned long length;
> > >   unsigned int timeout_in_ms;
> > > } dma_sg_proxy_arg_t;
> > 
> > Again, I am not convinced that opening DMA to userspace like this is a great
> > idea. Why not have the Xilinx camera driver invoke the dmaengine and do
> > DMA_SG?
> 
> In our case we have several camera pipelines. In some cases uncached memory is okay (e.g. the image goes directly to the display framebuffer); in other cases it is not, because we need to process the images on the CPU or GPU, and for that we first need to copy them to ordinary user memory. This seems easier to do by decoupling the driver code.
> And one more thing: when we engage the DMA memcpy, we want to copy to target memory which is prepared for IPC, because we want to share these images with another process. The v4l2 interface did not look to be made for such cases, but this is possible with the "memcpy" approach.

To make it short, I have two questions:
- what are the chances to revive DMA_SG?
- what are the chances to get my driver for memcpy-like transfers from user space using DMA_SG upstream? ("dma-sg-proxy")

Best regards,
Thomas

Thread overview: 29+ messages
2020-06-19 22:47 Federico Vaga
2020-06-19 23:31 ` Dave Jiang
2020-06-21  7:24   ` Vinod Koul
2020-06-21 20:36     ` Federico Vaga
2020-06-21 20:45       ` Richard Weinberger
2020-06-21 22:32         ` Federico Vaga
2020-06-22  4:47       ` Vinod Koul
2020-06-22  6:57         ` Federico Vaga
2020-06-22 12:01         ` Thomas Ruf
2020-06-22 12:27           ` Richard Weinberger
2020-06-22 14:01             ` Thomas Ruf
2020-06-22 12:30           ` Federico Vaga
2020-06-22 14:03             ` Thomas Ruf
2020-06-22 15:54           ` Vinod Koul
2020-06-22 16:34             ` Thomas Ruf
2020-06-24  9:30               ` Thomas Ruf [this message]
2020-06-24  9:38                 ` Vinod Koul
2020-06-24 12:07                   ` Peter Ujfalusi
2020-06-24 13:58                     ` Thomas Ruf
2020-06-26 10:29                       ` Peter Ujfalusi
2020-06-29 15:18                         ` Thomas Ruf
2020-06-30 12:31                           ` Peter Ujfalusi
2020-07-01 16:13                             ` Thomas Ruf
2020-06-25  0:42     ` Dave Jiang
2020-06-25  8:11       ` Thomas Ruf
2020-06-26 20:08         ` Ira Weiny
2020-06-29 15:31           ` Thomas Ruf
2020-06-22  9:25 ` Federico Vaga
2020-06-22  9:42   ` Vinod Koul

