From: Peng Fan <peng.fan@nxp.com>
To: "hch@infradead.org" <hch@infradead.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Cc: "mst@redhat.com" <mst@redhat.com>,
	"jasowang@redhat.com" <jasowang@redhat.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"linux-remoteproc@vger.kernel.org"
	<linux-remoteproc@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"virtualization@lists.linux-foundation.org"
	<virtualization@lists.linux-foundation.org>,
	"luto@kernel.org" <luto@kernel.org>,
	"jgross@suse.com" <jgross@suse.com>,
	"boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	Andy Duan <fugang.duan@nxp.com>
Subject: RE: [Xen-devel] [RFC] virtio_ring: check dma_mem for xen_domain
Date: Fri, 25 Jan 2019 09:45:26 +0000
Message-ID: <AM0PR04MB4481F3986CFB6D1EF26FA135889B0@AM0PR04MB4481.eurprd04.prod.outlook.com>
In-Reply-To: <20190123211405.GA4971@infradead.org>

Hi,

> -----Original Message-----
> From: hch@infradead.org [mailto:hch@infradead.org]
> Sent: January 24, 2019 5:14
> To: Stefano Stabellini <sstabellini@kernel.org>
> Cc: hch@infradead.org; Peng Fan <peng.fan@nxp.com>; mst@redhat.com;
> jasowang@redhat.com; xen-devel@lists.xenproject.org;
> linux-remoteproc@vger.kernel.org; linux-kernel@vger.kernel.org;
> virtualization@lists.linux-foundation.org; luto@kernel.org; jgross@suse.com;
> boris.ostrovsky@oracle.com
> Subject: Re: [Xen-devel] [RFC] virtio_ring: check dma_mem for xen_domain
> 
> On Wed, Jan 23, 2019 at 01:04:33PM -0800, Stefano Stabellini wrote:
> > If vring_use_dma_api is actually supposed to return true when
> > dma_dev->dma_mem is set, then both Peng's patch and the patch I wrote
> > are not fixing the real issue here.
> >
> > I don't know enough about remoteproc to know where the problem
> > actually lies though.
> 
> The problem is the following:
> 
> Devices can declare a specific memory region that they want to use when
> the driver calls dma_alloc_coherent for the device.  This is done using
> the shared-dma-pool DT attribute, which comes in two variants that would
> be a little too much to explain here.
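
As far as I understand, the two variants boil down to a reusable,
CMA-backed pool versus a no-map carveout that becomes the device's
coherent memory (dev->dma_mem).  Purely as an illustration, with node
names, addresses and sizes made up:

        reserved-memory {
                #address-cells = <1>;
                #size-cells = <1>;
                ranges;

                /* Variant 1: CMA-backed pool, the kernel may reuse it */
                vdev_cma: vdev-cma-pool {
                        compatible = "shared-dma-pool";
                        reusable;
                        size = <0x400000>;
                };

                /* Variant 2: carveout assigned as device coherent memory */
                vdev_dma: vdev-dma-pool@b8000000 {
                        compatible = "shared-dma-pool";
                        no-map;
                        reg = <0xb8000000 0x100000>;
                };
        };

A device node then claims one of them with memory-region = <&vdev_dma>;,
and dma_alloc_coherent for that device allocates from the chosen pool.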
> 
> remoteproc makes use of that because apparently the device can only
> communicate using that region.  But it then feeds memory obtained with
> dma_alloc_coherent back into the virtio code.  For that it calls
> vmalloc_to_page on the dma_alloc_coherent buffer, which is a huge no-go
> for the DMA API and only worked accidentally on a few platforms, and
> apparently arm64 just changed a few internals that made it stop working
> for remoteproc.
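
If I read that right, the pattern being criticized boils down to
something like the sketch below (simplified, not the actual
remoteproc/rpmsg code):

#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/vmalloc.h>

/* Simplified sketch of the problematic pattern -- illustration only. */
static int feed_coherent_buf_to_virtio(struct device *dev,
                                       struct scatterlist *sg)
{
        dma_addr_t dma;
        void *va;

        /* The buffer comes from the device's coherent pool... */
        va = dma_alloc_coherent(dev, PAGE_SIZE, &dma, GFP_KERNEL);
        if (!va)
                return -ENOMEM;

        /*
         * ...but is then handed to virtio as if it were ordinary
         * page-backed memory.  vmalloc_to_page() on a dma_alloc_coherent()
         * buffer is outside the DMA API contract and only happens to work
         * on some platforms.
         */
        sg_init_table(sg, 1);
        sg_set_page(sg, vmalloc_to_page(va), PAGE_SIZE, offset_in_page(va));

        return 0;
}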
> 
> The right answer is to not use the DMA API to allocate memory from a
> device-specific region, but to tie the driver directly into the DT
> reserved memory API in a way that allows it to easily obtain a struct
> device for it.

I just have a question.

Since vmalloc_to_page is OK for the CMA area, we do not need to take
CMA and per-device CMA into consideration, right?

We would only need to implement a small piece of code to handle the
per-device specific region using RESERVEDMEM_OF_DECLARE, for example:

RESERVEDMEM_OF_DECLARE(rpmsg_dma, "rpmsg-dma-pool", rmem_rpmsg_dma_setup);

Then we implement the device_init callback and build a mapping between
struct page and the physical address.  The rpmsg driver could then use
page structures in its scatterlist, with no need to call vmalloc_to_page
for per-device DMA memory.
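
Concretely, I am imagining something along the lines of the sketch
below; rpmsg_dma_device_init, struct rpmsg_dma_pool and the pool layout
are only illustrative:

#include <linux/device.h>
#include <linux/init.h>
#include <linux/of_reserved_mem.h>

/* Illustrative sketch only -- names and layout are made up. */
struct rpmsg_dma_pool {
        phys_addr_t base;
        phys_addr_t size;
};

static struct rpmsg_dma_pool rpmsg_pool;        /* single pool, for brevity */

/*
 * Runs when a device with memory-region = <&rpmsg_dma> calls
 * of_reserved_mem_device_init().
 */
static int rpmsg_dma_device_init(struct reserved_mem *rmem,
                                 struct device *dev)
{
        rpmsg_pool.base = rmem->base;
        rpmsg_pool.size = rmem->size;
        dev_info(dev, "rpmsg DMA pool at %pa, size %pa\n",
                 &rmem->base, &rmem->size);
        return 0;
}

static const struct reserved_mem_ops rpmsg_dma_ops = {
        .device_init = rpmsg_dma_device_init,
};

static int __init rmem_rpmsg_dma_setup(struct reserved_mem *rmem)
{
        rmem->ops = &rpmsg_dma_ops;
        return 0;
}
RESERVEDMEM_OF_DECLARE(rpmsg_dma, "rpmsg-dma-pool", rmem_rpmsg_dma_setup);

The rpmsg driver would call of_reserved_mem_device_init() in its probe,
manage buffers inside [base, base + size) itself, and build its
scatterlists with pfn_to_page(PHYS_PFN(phys)) instead of calling
vmalloc_to_page() on coherent buffers.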

Is this the right way?

Thanks
Peng.

> 
> This is orthogonal to another issue: hardware virtio devices really
> always need to use the DMA API, otherwise we'll bypass features such as
> device-specific DMA pools, DMA offsets, cache flushing, and so on.


Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=AM0PR04MB4481F3986CFB6D1EF26FA135889B0@AM0PR04MB4481.eurprd04.prod.outlook.com \
    --to=peng.fan@nxp.com \
    --cc=boris.ostrovsky@oracle.com \
    --cc=fugang.duan@nxp.com \
    --cc=hch@infradead.org \
    --cc=jasowang@redhat.com \
    --cc=jgross@suse.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-remoteproc@vger.kernel.org \
    --cc=luto@kernel.org \
    --cc=mst@redhat.com \
    --cc=sstabellini@kernel.org \
    --cc=virtualization@lists.linux-foundation.org \
    --cc=xen-devel@lists.xenproject.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.