nvdimm.lists.linux.dev archive mirror
* dmaengine support for PMEM
@ 2018-08-21 17:37 Stephen  Bates
  2018-08-21 17:52 ` Dave Jiang
  0 siblings, 1 reply; 5+ messages in thread
From: Stephen  Bates @ 2018-08-21 17:37 UTC (permalink / raw)
  To: Dave Jiang, Logan Gunthorpe; +Cc: linux-nvdimm

Hi Dave

I hope you are well. Logan and I were looking at adding DMA support to PMEM and were then informed that you had proposed patches to do just that for the ioat DMA engine. The latest version I can see is v7, from August 2017. Is there a more recent version? What happened to that series?

https://lists.01.org/pipermail/linux-nvdimm/2017-August/012208.html

Cheers
 
Stephen
 


* Re: dmaengine support for PMEM
  2018-08-21 17:37 dmaengine support for PMEM Stephen  Bates
@ 2018-08-21 17:52 ` Dave Jiang
  2018-08-21 18:07   ` Stephen  Bates
  0 siblings, 1 reply; 5+ messages in thread
From: Dave Jiang @ 2018-08-21 17:52 UTC (permalink / raw)
  To: Stephen Bates, Logan Gunthorpe; +Cc: linux-nvdimm



On 08/21/2018 10:37 AM, Stephen  Bates wrote:
> Hi Dave
> 
> I hope you are well. Logan and I were looking at adding DMA support to PMEM and were then informed that you had proposed patches to do just that for the ioat DMA engine. The latest version I can see is v7, from August 2017. Is there a more recent version? What happened to that series?
> 
> https://lists.01.org/pipermail/linux-nvdimm/2017-August/012208.html
> 
> Cheers
>  
> Stephen
>  
> 

Hi guys. Nothing has happened yet; it's just on hold for now.

Here's where I left it last
https://git.kernel.org/pub/scm/linux/kernel/git/djiang/linux.git/log/?h=pmem_blk_dma

I do think we need to do some rework of dmaengine in order to get
better efficiency as well. At some point I would like to see a call in
dmaengine that takes a request (similar to blk-mq), operates on it, and
submits all the descriptors in a single call. I think that could let us
deprecate the whole host of function pointers in dmaengine. I'm hoping
to find some time to look at some of this work towards the end of the
year, but I'd be highly interested if you guys have ideas and thoughts
on this topic. And you are welcome to take my patches and run with them.
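
Roughly the shape I have in mind (just a sketch; struct dma_rq_xfer and
dmaengine_submit_xfer() are made-up names, nothing like them exists in
the tree today):

#include <linux/dmaengine.h>
#include <linux/scatterlist.h>
#include <linux/types.h>

/* Hypothetical: everything a DMA driver needs to service one request. */
struct dma_rq_xfer {
	struct scatterlist *sgl;	/* already DMA-mapped segments */
	unsigned int nents;
	dma_addr_t pmem_addr;		/* pmem side of the copy */
	bool to_pmem;			/* true for writes (sgl -> pmem) */
	dma_async_tx_callback callback;	/* one completion for the request */
	void *callback_param;
};

/*
 * Hypothetical single-shot call: the DMA driver walks the scatterlist,
 * builds however many hardware descriptors it needs, chains them, and
 * kicks the channel, replacing the per-segment
 * prep_dma_memcpy()/submit()/issue_pending() round trips.
 */
int dmaengine_submit_xfer(struct dma_chan *chan, struct dma_rq_xfer *xfer);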

* Re: dmaengine support for PMEM
  2018-08-21 17:52 ` Dave Jiang
@ 2018-08-21 18:07   ` Stephen  Bates
  2018-08-21 18:11     ` Dave Jiang
  0 siblings, 1 reply; 5+ messages in thread
From: Stephen  Bates @ 2018-08-21 18:07 UTC (permalink / raw)
  To: Dave Jiang, Logan Gunthorpe; +Cc: linux-nvdimm

>    Here's where I left it last
>    https://git.kernel.org/pub/scm/linux/kernel/git/djiang/linux.git/log/?h=pmem_blk_dma

Thanks Dave. I'll certainly rebase these on 4.18.x and do some testing! 
    
> I do think we need to do some rework of dmaengine in order to get
> better efficiency as well. At some point I would like to see a call in
> dmaengine that takes a request (similar to blk-mq), operates on it, and
> submits all the descriptors in a single call. I think that could let us
> deprecate the whole host of function pointers in dmaengine. I'm hoping
> to find some time to look at some of this work towards the end of the
> year, but I'd be highly interested if you guys have ideas and thoughts
> on this topic. And you are welcome to take my patches and run with them.

OK, we were experimenting with a single PMEM driver that decides between DMA and memcpy based on I/O size, rather than forcing the user to choose which driver to use.
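
Something along these lines (hand-wavy sketch; PMEM_DMA_THRESHOLD,
pmem_do_memcpy() and pmem_submit_dma() are invented names, and the
threshold would come from benchmarking):

#include <linux/blkdev.h>
#include <linux/blk-mq.h>

#define PMEM_DMA_THRESHOLD	(32 * 1024)	/* placeholder value */

static blk_status_t pmem_queue_rq_hybrid(struct blk_mq_hw_ctx *hctx,
					 const struct blk_mq_queue_data *bd)
{
	struct request *rq = bd->rq;

	blk_mq_start_request(rq);

	/*
	 * Small transfers: the CPU copy wins because it avoids the
	 * mapping, descriptor and interrupt overhead.  Large transfers:
	 * hand the whole request to the DMA engine and free up the CPU.
	 */
	if (blk_rq_bytes(rq) < PMEM_DMA_THRESHOLD)
		return pmem_do_memcpy(rq);	/* synchronous memcpy path */

	return pmem_submit_dma(rq);	/* async, completed in DMA callback */
}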

Stephen
    


* Re: dmaengine support for PMEM
  2018-08-21 18:07   ` Stephen  Bates
@ 2018-08-21 18:11     ` Dave Jiang
  2018-08-21 18:13       ` Logan Gunthorpe
  0 siblings, 1 reply; 5+ messages in thread
From: Dave Jiang @ 2018-08-21 18:11 UTC (permalink / raw)
  To: Stephen Bates, Logan Gunthorpe; +Cc: linux-nvdimm



On 08/21/2018 11:07 AM, Stephen  Bates wrote:
>>    Here's where I left it last
>>    https://git.kernel.org/pub/scm/linux/kernel/git/djiang/linux.git/log/?h=pmem_blk_dma
> 
> Thanks Dave. I'll certainly rebase these on 4.18.x and do some testing! 
>     
>> I do think we need to do some rework of dmaengine in order to get
>> better efficiency as well. At some point I would like to see a call in
>> dmaengine that takes a request (similar to blk-mq), operates on it, and
>> submits all the descriptors in a single call. I think that could let us
>> deprecate the whole host of function pointers in dmaengine. I'm hoping
>> to find some time to look at some of this work towards the end of the
>> year, but I'd be highly interested if you guys have ideas and thoughts
>> on this topic. And you are welcome to take my patches and run with them.
> 
> OK, we were experimenting with a single PMEM driver that decides between DMA and memcpy based on I/O size, rather than forcing the user to choose which driver to use.

Oh yeah. Also, I think what we discovered is that the block layer will
not send anything larger than 4k buffers in the scatterlists. So unless
your DMA engine is very efficient at processing 4k transfers you don't
get great performance. I'm not sure how to get around that, since
existing DMA engines tend to prefer larger buffers for better
performance.
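
One thing we could experiment with (sketch only; whether the segments
actually end up bus-contiguous often enough is the open question, and
pmem_dma_copy_coalesced() is an invented name) is coalescing adjacent
segments of the mapped scatterlist before building descriptors, so the
engine sees a few big copies instead of many 4k ones:

#include <linux/dmaengine.h>
#include <linux/errno.h>
#include <linux/scatterlist.h>

/*
 * Merge runs of segments whose bus addresses are contiguous and issue
 * one memcpy descriptor per run.  Completion handling and
 * dma_async_issue_pending() are left to the caller; error handling is
 * mostly elided.
 */
static int pmem_dma_copy_coalesced(struct dma_chan *chan,
				   struct scatterlist *sgl, int nents,
				   dma_addr_t dst)
{
	struct dma_async_tx_descriptor *tx;
	struct scatterlist *sg;
	dma_addr_t src = 0;
	size_t len = 0;
	int i, ndesc = 0;

	for_each_sg(sgl, sg, nents, i) {
		/* Extend the current run if this segment follows it. */
		if (len && sg_dma_address(sg) == src + len) {
			len += sg_dma_len(sg);
			continue;
		}
		if (len) {
			tx = dmaengine_prep_dma_memcpy(chan, dst, src, len, 0);
			if (!tx)
				return -ENOMEM;
			dmaengine_submit(tx);
			dst += len;
			ndesc++;
		}
		src = sg_dma_address(sg);
		len = sg_dma_len(sg);
	}
	if (len) {
		tx = dmaengine_prep_dma_memcpy(chan, dst, src, len, 0);
		if (!tx)
			return -ENOMEM;
		dmaengine_submit(tx);
		ndesc++;
	}

	return ndesc;	/* descriptors issued for this request */
}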


> 
> Stephen
>     
> 

* Re: dmaengine support for PMEM
  2018-08-21 18:11     ` Dave Jiang
@ 2018-08-21 18:13       ` Logan Gunthorpe
  0 siblings, 0 replies; 5+ messages in thread
From: Logan Gunthorpe @ 2018-08-21 18:13 UTC (permalink / raw)
  To: Dave Jiang, Stephen Bates; +Cc: linux-nvdimm



On 21/08/18 12:11 PM, Dave Jiang wrote:
> 
> 
> On 08/21/2018 11:07 AM, Stephen  Bates wrote:
>>>    Here's where I left it last
>>>    https://git.kernel.org/pub/scm/linux/kernel/git/djiang/linux.git/log/?h=pmem_blk_dma
>>
>> Thanks Dave. I'll certainly rebase these on 4.18.x and do some testing! 
>>     
>>> I do think we need to do some rework of dmaengine in order to get
>>> better efficiency as well. At some point I would like to see a call in
>>> dmaengine that takes a request (similar to blk-mq), operates on it, and
>>> submits all the descriptors in a single call. I think that could let us
>>> deprecate the whole host of function pointers in dmaengine. I'm hoping
>>> to find some time to look at some of this work towards the end of the
>>> year, but I'd be highly interested if you guys have ideas and thoughts
>>> on this topic. And you are welcome to take my patches and run with them.
>>
>> OK, we were experimenting with a single PMEM driver that decides between DMA and memcpy based on I/O size, rather than forcing the user to choose which driver to use.
> 
> Oh yeah. Also, I think what we discovered is that the block layer will
> not send anything larger than 4k buffers in the scatterlists. So unless
> your DMA engine is very efficient at processing 4k transfers you don't
> get great performance. I'm not sure how to get around that, since
> existing DMA engines tend to prefer larger buffers for better
> performance.

Yeah, that's exactly what we were running up against. Then we found
your patch set, which pretty much dealt with a lot of the problems we
were seeing.

From a code perspective, I like the split modules, but I guess it puts
a burden on the user to blacklist one or the other to get DMA or not,
which may depend on the workload.

Logan
