* [V2] dmaengine: fsl-qdma: add NXP Layerscape qDMA engine driver support
@ 2018-01-11  6:25 Vinod Koul
  0 siblings, 0 replies; 11+ messages in thread
From: Vinod Koul @ 2018-01-11  6:25 UTC (permalink / raw)
  To: Wen He
  Cc: kbuild test robot, kbuild-all, Leo Li, dmaengine, Jiafei Pan,
	Jiaheng Fan

On Tue, Jan 09, 2018 at 03:30:43AM +0000, Wen He wrote:
> 
> 
> > -----Original Message-----
> > From: dmaengine-owner@vger.kernel.org
> > [mailto:dmaengine-owner@vger.kernel.org] On Behalf Of Vinod Koul
> > Sent: 2018年1月8日 18:42
> > To: Wen He <wen.he_1@nxp.com>
> > Cc: kbuild test robot <lkp@intel.com>; kbuild-all@01.org; Leo Li
> > <leoyang.li@nxp.com>; dmaengine@vger.kernel.org; Jiafei Pan
> > <jiafei.pan@nxp.com>; Jiaheng Fan <jiaheng.fan@nxp.com>
> > Subject: Re: [V2] dmaengine: fsl-qdma: add NXP Layerscape qDMA engine
> > driver support
> > 
> > On Thu, Jan 04, 2018 at 07:36:09AM +0000, Wen He wrote:
> > > Hi Vinod,
> > 
> > Hi,
> > 
> > Please wrap your replies to 80 chars, I have reflowed below..
> 
> okay
> 
> > >
> > > I don't know what you mean by compiling them. Does it mean 'the driver
> > > is public, so any arch will compile it'?
> > 
> > Today it does compile on all archs
> > 
> > >
> > > If so, compiling the qdma module requires enabling the config option
> > > 'CONFIG_FSL_QDMA', and the option should be hidden on other architectures.
> > > The driver supports the arm and arm64 archs; if I change the Kconfig to
> > > solve the compile issues, can I do that?
> > 
> > yes, but only as a last resort; it would still help if the driver has no
> > dependency on arch and is able to compile on others..
> > 
> 
> Hi,
> 
> I want to do the same, but I can only verify on x86/x86_64, arm/arm64 and
> powerpc.
> 
> For now, the issue is that ioread32/64, ioread32/64be, iowrite32/64 and
> iowrite32/64be depend on arch-specific definitions.
> Most archs define them, but others do not (such as x86, s390..).
> 
> Do you have any good ideas?

Ah okay, that sounds fine then. I think putting "depends on ARM || ARM64"
sounds fair to me.
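
(Editorial aside, not something posted in the thread: the portability problem
is that fsldma.h builds its 64-bit accessors from the powerpc-only
in_be32()/out_be32() helpers, which is exactly what the kbuild reports later
on this page trip over. A minimal sketch of the same composition using the
generally available ioread32be()/iowrite32be() accessors might look like the
following; the function names are hypothetical.)

#include <linux/io.h>

/* Read a 64-bit big-endian register as two 32-bit big-endian accesses. */
static u64 qdma_in_be64(void __iomem *addr)
{
	return ((u64)ioread32be(addr) << 32) | ioread32be(addr + 4);
}

/* Write a 64-bit big-endian register as two 32-bit big-endian accesses. */
static void qdma_out_be64(void __iomem *addr, u64 val)
{
	iowrite32be(val >> 32, addr);
	iowrite32be((u32)val, addr + 4);
}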

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [V2] dmaengine: fsl-qdma: add NXP Layerscape qDMA engine driver support
@ 2018-01-11  9:17 Wen He
  0 siblings, 0 replies; 11+ messages in thread
From: Wen He @ 2018-01-11  9:17 UTC (permalink / raw)
  To: Vinod Koul
  Cc: kbuild test robot, kbuild-all, Leo Li, dmaengine, Jiafei Pan,
	Jiaheng Fan

> -----Original Message-----
> From: dmaengine-owner@vger.kernel.org
> [mailto:dmaengine-owner@vger.kernel.org] On Behalf Of Vinod Koul
> Sent: 2018年1月11日 14:25
> To: Wen He <wen.he_1@nxp.com>
> Cc: kbuild test robot <lkp@intel.com>; kbuild-all@01.org; Leo Li
> <leoyang.li@nxp.com>; dmaengine@vger.kernel.org; Jiafei Pan
> <jiafei.pan@nxp.com>; Jiaheng Fan <jiaheng.fan@nxp.com>
> Subject: Re: [V2] dmaengine: fsl-qdma: add NXP Layerscape qDMA engine
> driver support
> 
> On Tue, Jan 09, 2018 at 03:30:43AM +0000, Wen He wrote:
> > For now, the issue is that ioread32/64, ioread32/64be, iowrite32/64 and
> > iowrite32/64be depend on arch-specific definitions.
> > Most archs define them, but others do not (such as x86, s390..).
> >
> > Do you have any good ideas?
> 
> Ah okay, that sounds fine then. I think putting "depends on ARM || ARM64"
> sounds fair to me.

Okay, thanks for the review. The next version will add "depends on ARM || ARM64".

Best Regards,
Wen

> --
> ~Vinod
---
To unsubscribe from this list: send the line "unsubscribe dmaengine" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [V2] dmaengine: fsl-qdma: add NXP Layerscape qDMA engine driver support
@ 2018-01-09  3:30 Wen He
  0 siblings, 0 replies; 11+ messages in thread
From: Wen He @ 2018-01-09  3:30 UTC (permalink / raw)
  To: Vinod Koul
  Cc: kbuild test robot, kbuild-all, Leo Li, dmaengine, Jiafei Pan,
	Jiaheng Fan

> -----Original Message-----
> From: dmaengine-owner@vger.kernel.org
> [mailto:dmaengine-owner@vger.kernel.org] On Behalf Of Vinod Koul
> Sent: 2018年1月8日 18:42
> To: Wen He <wen.he_1@nxp.com>
> Cc: kbuild test robot <lkp@intel.com>; kbuild-all@01.org; Leo Li
> <leoyang.li@nxp.com>; dmaengine@vger.kernel.org; Jiafei Pan
> <jiafei.pan@nxp.com>; Jiaheng Fan <jiaheng.fan@nxp.com>
> Subject: Re: [V2] dmaengine: fsl-qdma: add NXP Layerscape qDMA engine
> driver support
> 
> On Thu, Jan 04, 2018 at 07:36:09AM +0000, Wen He wrote:
> > Hi Vinod,
> 
> Hi,
> 
> Please wrap your replies to 80 chars, I have reflowed below..

okay

> >
> > I don't know what you mean by compiling them. Does it mean 'the driver
> > is public, so any arch will compile it'?
> 
> Today it does compile on all archs
> 
> >
> > If so, compiling the qdma module requires enabling the config option
> > 'CONFIG_FSL_QDMA', and the option should be hidden on other architectures.
> > The driver supports the arm and arm64 archs; if I change the Kconfig to
> > solve the compile issues, can I do that?
> 
> yes, but only as a last resort; it would still help if the driver has no
> dependency on arch and is able to compile on others..
> 

Hi,

I want to do the same, but I can only verify on x86/x86_64, arm/arm64 and
powerpc.

For now, the issue is that ioread32/64, ioread32/64be, iowrite32/64 and
iowrite32/64be depend on arch-specific definitions.
Most archs define them, but others do not (such as x86, s390..).

Do you have any good ideas?

Regards,
Wen

> >
> > config FSL_QDMA
> >        tristate "NXP Layerscape qDMA engine support"
> > +	   depends on ARM || ARM64
> >        select DMA_ENGINE
> >        select DMA_VIRTUAL_CHANNELS
> >        select DMA_ENGINE_RAID
> >        select ASYNC_TX_ENABLE_CHANNEL_SWITCH
> >        help
> >          Support the NXP Layerscape qDMA engine with command queue and legacy mode.
> >          Channel virtualization is supported through enqueuing of DMA jobs to,
> >          or dequeuing DMA jobs from, different work queues.
> >          This module can be found on NXP Layerscape SoCs.
> >
> > Best Regards
> > Wen He
> --
> ~Vinod
---
To unsubscribe from this list: send the line "unsubscribe dmaengine" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [V2] dmaengine: fsl-qdma: add NXP Layerscape qDMA engine driver support
@ 2018-01-08 10:42 Vinod Koul
  0 siblings, 0 replies; 11+ messages in thread
From: Vinod Koul @ 2018-01-08 10:42 UTC (permalink / raw)
  To: Wen He
  Cc: kbuild test robot, kbuild-all, Leo Li, dmaengine, Jiafei Pan,
	Jiaheng Fan

On Thu, Jan 04, 2018 at 07:36:09AM +0000, Wen He wrote:
> Hi Vinod,

Hi,

Please wrap your replies to 80 chars, I have reflowed below..
> 
> I don't know what you mean by compiling them. Does it mean 'the driver
> is public, so any arch will compile it'?

Today it does compile on all archs

> 
> If so, compiling the qdma module requires enabling the config option
> 'CONFIG_FSL_QDMA', and the option should be hidden on other architectures.
> The driver supports the arm and arm64 archs; if I change the Kconfig to
> solve the compile issues, can I do that?

yes, but only as a last resort; it would still help if the driver has no
dependency on arch and is able to compile on others..

> 
> config FSL_QDMA
>        tristate "NXP Layerscape qDMA engine support"
> +	   depends on ARM || ARM64
>        select DMA_ENGINE
>        select DMA_VIRTUAL_CHANNELS
>        select DMA_ENGINE_RAID
>        select ASYNC_TX_ENABLE_CHANNEL_SWITCH
>        help
>          Support the NXP Layerscape qDMA engine with command queue and legacy mode.
>          Channel virtualization is supported through enqueuing of DMA jobs to,
>          or dequeuing DMA jobs from, different work queues.
>          This module can be found on NXP Layerscape SoCs.
> 
> Best Regards
> Wen He

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [V2] dmaengine: fsl-qdma: add NXP Layerscape qDMA engine driver support
@ 2018-01-04  7:36 Wen He
  0 siblings, 0 replies; 11+ messages in thread
From: Wen He @ 2018-01-04  7:36 UTC (permalink / raw)
  To: Vinod Koul
  Cc: kbuild test robot, kbuild-all, Leo Li, dmaengine, Jiafei Pan,
	Jiaheng Fan

Hi Vinod,

I don't know what you mean by compiling them. Does it mean 'the driver is public, so any arch will compile it'?

If so, compiling the qdma module requires enabling the config option 'CONFIG_FSL_QDMA', and the option should be hidden on other architectures.
The driver supports the arm and arm64 archs; if I change the Kconfig to solve the compile issues, can I do that?

config FSL_QDMA
       tristate "NXP Layerscape qDMA engine support"
+	   depends on ARM || ARM64
       select DMA_ENGINE
       select DMA_VIRTUAL_CHANNELS
       select DMA_ENGINE_RAID
       select ASYNC_TX_ENABLE_CHANNEL_SWITCH
       help
         Support the NXP Layerscape qDMA engine with command queue and legacy mode.
         Channel virtualization is supported through enqueuing of DMA jobs to,
         or dequeuing DMA jobs from, different work queues.
         This module can be found on NXP Layerscape SoCs.

Best Regards
Wen He

> -----Original Message-----
> From: dmaengine-owner@vger.kernel.org
> [mailto:dmaengine-owner@vger.kernel.org] On Behalf Of Vinod Koul
> Sent: 2018年1月3日 11:53
> To: Wen He <wen.he_1@nxp.com>
> Cc: kbuild test robot <lkp@intel.com>; kbuild-all@01.org; Leo Li
> <leoyang.li@nxp.com>; dmaengine@vger.kernel.org; Jiafei Pan
> <jiafei.pan@nxp.com>; Jiaheng Fan <jiaheng.fan@nxp.com>
> Subject: Re: [V2] dmaengine: fsl-qdma: add NXP Layerscape qDMA engine
> driver support
> 
> On Tue, Dec 26, 2017 at 05:15:35AM +0000, Wen He wrote:
> > Hi Vinod,
> >
> > This patch needs to be applied for NXP Layerscape SoCs (arm/arm64).
> >
> > Other architectures are not supported.
> 
> yeah, but right now the driver compiles on them. It would be great if you can
> ensure that it still compiles on others
> 
> --
> ~Vinod
> --
> To unsubscribe from this list: send the line "unsubscribe dmaengine" in the
> body of a message to majordomo@vger.kernel.org More majordomo info at
> http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [V2] dmaengine: fsl-qdma: add NXP Layerscape qDMA engine driver support
@ 2018-01-03  3:52 Vinod Koul
  0 siblings, 0 replies; 11+ messages in thread
From: Vinod Koul @ 2018-01-03  3:52 UTC (permalink / raw)
  To: Wen He
  Cc: kbuild test robot, kbuild-all, Leo Li, dmaengine, Jiafei Pan,
	Jiaheng Fan

On Tue, Dec 26, 2017 at 05:15:35AM +0000, Wen He wrote:
> Hi Vinod,
> 
> This patch needs to be applied for NXP Layerscape SoCs (arm/arm64).
> 
> Other architectures are not supported.

yeah, but right now the driver compiles on them. It would be great if you can
ensure that it still compiles on others

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [V2] dmaengine: fsl-qdma: add NXP Layerscape qDMA engine driver support
@ 2017-12-27  2:27 Wen He
  0 siblings, 0 replies; 11+ messages in thread
From: Wen He @ 2017-12-27  2:27 UTC (permalink / raw)
  To: kbuild test robot, vinod.koul; +Cc: kbuild-all, Leo Li, dmaengine

Hi Vinod,

There may be issues with this patch, because it depends on https://patchwork.kernel.org/patch/10132327/.

But my submission was not numbered; I resubmitted the new patch as [v2,[1-6]/6] yesterday.

Please rebuild with the new patch. Thanks.

Best Regards,
Wen


> -----Original Message-----
> From: dmaengine-owner@vger.kernel.org
> [mailto:dmaengine-owner@vger.kernel.org] On Behalf Of kbuild test robot
> Sent: 2017年12月27日 9:35
> To: Wen He <wen.he_1@nxp.com>
> Cc: kbuild-all@01.org; vinod.koul@intel.com; Leo Li <leoyang.li@nxp.com>;
> dmaengine@vger.kernel.org; Jiafei Pan <jiafei.pan@nxp.com>; Jiaheng Fan
> <jiaheng.fan@nxp.com>; Wen He <wen.he_1@nxp.com>
> Subject: Re: [V2] dmaengine: fsl-qdma: add NXP Layerscape qDMA engine
> driver support
> 
> Hi Wen,
> 
> Thank you for the patch! Yet something to improve:
> 
> [auto build test ERROR on linus/master]
> [also build test ERROR on v4.15-rc5 next-20171222] [if your patch is applied
> to the wrong git tree, please drop us a note to help improve the system]
> 
> url:
> https://github.com/0day-ci/linux/commits/Wen-He/dmaengine-fsl-qdma-add-NXP-Layerscape-qDMA-engine-driver-support/20171225-232227
> config: arm-allyesconfig (attached as .config)
> compiler: arm-linux-gnueabi-gcc (Debian 7.2.0-11) 7.2.0
> reproduce:
>         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
>         chmod +x ~/bin/make.cross
>         # save the attached .config to linux build tree
>         make.cross ARCH=arm
> 
> All errors (new ones prefixed by >>):
> 
>    In file included from drivers//dma/fsl-qdma.c:27:0:
>    drivers//dma/fsldma.h: In function 'in_be64':
>    drivers//dma/fsldma.h:202:15: error: implicit declaration of function
> 'in_be32'; did you mean 'in_be64'? [-Werror=implicit-function-declaration]
>      return ((u64)in_be32((u32 __iomem *)addr) << 32) |
>                   ^~~~~~~
>                   in_be64
>    drivers//dma/fsldma.h: In function 'out_be64':
>    drivers//dma/fsldma.h:208:2: error: implicit declaration of function
> 'out_be32'; did you mean 'out_be64'? [-Werror=implicit-function-declaration]
>      out_be32((u32 __iomem *)addr, val >> 32);
>      ^~~~~~~~
>      out_be64
>    drivers//dma/fsldma.h: In function 'in_le64':
>    drivers//dma/fsldma.h:215:15: error: implicit declaration of function
> 'in_le32'; did you mean 'in_le64'? [-Werror=implicit-function-declaration]
>      return ((u64)in_le32((u32 __iomem *)addr + 1) << 32) |
>                   ^~~~~~~
>                   in_le64
>    drivers//dma/fsldma.h: In function 'out_le64':
>    drivers//dma/fsldma.h:221:2: error: implicit declaration of function
> 'out_le32'; did you mean 'out_le64'? [-Werror=implicit-function-declaration]
>      out_le32((u32 __iomem *)addr + 1, val >> 32);
>      ^~~~~~~~
>      out_le64
>    drivers//dma/fsl-qdma.c: In function 'qdma_readl':
> >> drivers//dma/fsl-qdma.c:275:9: error: implicit declaration of
> >> function 'FSL_DMA_IN'; did you mean 'FSL_DMA_EOL'?
> >> [-Werror=implicit-function-declaration]
>      return FSL_DMA_IN(qdma, addr, 32);
>             ^~~~~~~~~~
>             FSL_DMA_EOL
>    drivers//dma/fsl-qdma.c: In function 'qdma_writel':
>    drivers//dma/fsl-qdma.c:281:2: error: implicit declaration of function
> 'FSL_DMA_OUT'; did you mean 'FSL_DMA_EOL'?
> [-Werror=implicit-function-declaration]
>      FSL_DMA_OUT(qdma, addr, val, 32);
>      ^~~~~~~~~~~
>      FSL_DMA_EOL
>    In file included from drivers//dma/fsl-qdma.c:27:0:
>    At top level:
>    drivers//dma/fsldma.h:219:13: warning: 'out_le64' defined but not used
> [-Wunused-function]
>     static void out_le64(u64 __iomem *addr, u64 val)
>                 ^~~~~~~~
>    drivers//dma/fsldma.h:213:12: warning: 'in_le64' defined but not used
> [-Wunused-function]
>     static u64 in_le64(const u64 __iomem *addr)
>                ^~~~~~~
>    drivers//dma/fsldma.h:206:13: warning: 'out_be64' defined but not used
> [-Wunused-function]
>     static void out_be64(u64 __iomem *addr, u64 val)
>                 ^~~~~~~~
>    drivers//dma/fsldma.h:200:12: warning: 'in_be64' defined but not used
> [-Wunused-function]
>     static u64 in_be64(const u64 __iomem *addr)
>                ^~~~~~~
>    cc1: some warnings being treated as errors
> 
> vim +275 drivers//dma/fsl-qdma.c
> 
>    272
>    273	static u32 qdma_readl(struct fsl_qdma_engine *qdma, void
> __iomem *addr)
>    274	{
>  > 275		return FSL_DMA_IN(qdma, addr, 32);
>    276	}
>    277
> 
> ---
> 0-DAY kernel test infrastructure                Open Source Technology Center
> https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [V2] dmaengine: fsl-qdma: add NXP Layerscape qDMA engine driver support
@ 2017-12-27  1:34 kbuild test robot
  0 siblings, 0 replies; 11+ messages in thread
From: kbuild test robot @ 2017-12-27  1:34 UTC (permalink / raw)
  To: Wen He
  Cc: kbuild-all, vinod.koul, leoyang.li, dmaengine, jiafei.pan, jiaheng.fan

Hi Wen,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.15-rc5 next-20171222]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Wen-He/dmaengine-fsl-qdma-add-NXP-Layerscape-qDMA-engine-driver-support/20171225-232227
config: arm-allyesconfig (attached as .config)
compiler: arm-linux-gnueabi-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=arm 

All errors (new ones prefixed by >>):

   In file included from drivers//dma/fsl-qdma.c:27:0:
   drivers//dma/fsldma.h: In function 'in_be64':
   drivers//dma/fsldma.h:202:15: error: implicit declaration of function 'in_be32'; did you mean 'in_be64'? [-Werror=implicit-function-declaration]
     return ((u64)in_be32((u32 __iomem *)addr) << 32) |
                  ^~~~~~~
                  in_be64
   drivers//dma/fsldma.h: In function 'out_be64':
   drivers//dma/fsldma.h:208:2: error: implicit declaration of function 'out_be32'; did you mean 'out_be64'? [-Werror=implicit-function-declaration]
     out_be32((u32 __iomem *)addr, val >> 32);
     ^~~~~~~~
     out_be64
   drivers//dma/fsldma.h: In function 'in_le64':
   drivers//dma/fsldma.h:215:15: error: implicit declaration of function 'in_le32'; did you mean 'in_le64'? [-Werror=implicit-function-declaration]
     return ((u64)in_le32((u32 __iomem *)addr + 1) << 32) |
                  ^~~~~~~
                  in_le64
   drivers//dma/fsldma.h: In function 'out_le64':
   drivers//dma/fsldma.h:221:2: error: implicit declaration of function 'out_le32'; did you mean 'out_le64'? [-Werror=implicit-function-declaration]
     out_le32((u32 __iomem *)addr + 1, val >> 32);
     ^~~~~~~~
     out_le64
   drivers//dma/fsl-qdma.c: In function 'qdma_readl':
>> drivers//dma/fsl-qdma.c:275:9: error: implicit declaration of function 'FSL_DMA_IN'; did you mean 'FSL_DMA_EOL'? [-Werror=implicit-function-declaration]
     return FSL_DMA_IN(qdma, addr, 32);
            ^~~~~~~~~~
            FSL_DMA_EOL
   drivers//dma/fsl-qdma.c: In function 'qdma_writel':
   drivers//dma/fsl-qdma.c:281:2: error: implicit declaration of function 'FSL_DMA_OUT'; did you mean 'FSL_DMA_EOL'? [-Werror=implicit-function-declaration]
     FSL_DMA_OUT(qdma, addr, val, 32);
     ^~~~~~~~~~~
     FSL_DMA_EOL
   In file included from drivers//dma/fsl-qdma.c:27:0:
   At top level:
   drivers//dma/fsldma.h:219:13: warning: 'out_le64' defined but not used [-Wunused-function]
    static void out_le64(u64 __iomem *addr, u64 val)
                ^~~~~~~~
   drivers//dma/fsldma.h:213:12: warning: 'in_le64' defined but not used [-Wunused-function]
    static u64 in_le64(const u64 __iomem *addr)
               ^~~~~~~
   drivers//dma/fsldma.h:206:13: warning: 'out_be64' defined but not used [-Wunused-function]
    static void out_be64(u64 __iomem *addr, u64 val)
                ^~~~~~~~
   drivers//dma/fsldma.h:200:12: warning: 'in_be64' defined but not used [-Wunused-function]
    static u64 in_be64(const u64 __iomem *addr)
               ^~~~~~~
   cc1: some warnings being treated as errors

vim +275 drivers//dma/fsl-qdma.c

   272	
   273	static u32 qdma_readl(struct fsl_qdma_engine *qdma, void __iomem *addr)
   274	{
 > 275		return FSL_DMA_IN(qdma, addr, 32);
   276	}
   277
---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [V2] dmaengine: fsl-qdma: add NXP Layerscape qDMA engine driver support
@ 2017-12-26  5:15 Wen He
  0 siblings, 0 replies; 11+ messages in thread
From: Wen He @ 2017-12-26  5:15 UTC (permalink / raw)
  To: kbuild test robot, vinod.koul
  Cc: kbuild-all, Leo Li, dmaengine, Jiafei Pan, Jiaheng Fan

Hi Vinod,

This patch needs to be applied for NXP Layerscape SoCs (arm/arm64).

Other architectures are not supported.

> -----Original Message-----
> From: dmaengine-owner@vger.kernel.org
> [mailto:dmaengine-owner@vger.kernel.org] On Behalf Of kbuild test robot
> Sent: 2017年12月26日 1:40
> To: Wen He <wen.he_1@nxp.com>
> Cc: kbuild-all@01.org; vinod.koul@intel.com; Leo Li <leoyang.li@nxp.com>;
> dmaengine@vger.kernel.org; Jiafei Pan <jiafei.pan@nxp.com>; Jiaheng Fan
> <jiaheng.fan@nxp.com>; Wen He <wen.he_1@nxp.com>
> Subject: Re: [V2] dmaengine: fsl-qdma: add NXP Layerscape qDMA engine
> driver support
> 
> Hi Wen,
> 
> Thank you for the patch! Yet something to improve:
> 
> [auto build test ERROR on linus/master]
> [also build test ERROR on v4.15-rc5 next-20171222]
> [if your patch is applied to the wrong git tree, please drop us a note to
> help improve the system]
> 
> url:    https://github.com/0day-ci/linux/commits/Wen-He/dmaengine-fsl-qdma-add-NXP-Layerscape-qDMA-engine-driver-support/20171225-232227
> config: blackfin-allyesconfig (attached as .config)
> compiler: bfin-uclinux-gcc (GCC) 7.2.0
> reproduce:
>         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
>         chmod +x ~/bin/make.cross
>         # save the attached .config to linux build tree
>         make.cross ARCH=blackfin
> 
> All errors (new ones prefixed by >>):
> 
>    In file included from drivers/dma/fsl-qdma.c:27:0:
>    drivers/dma/fsldma.h: In function 'in_be64':
> >> drivers/dma/fsldma.h:202:15: error: implicit declaration of function
> >> 'in_be32'; did you mean 'in_be64'? [-Werror=implicit-function-declaration]
>    [remaining quoted build log and line-by-line annotation trimmed; it is
>    identical to the kbuild test robot report of 2017-12-25 17:39 shown
>    later in this thread]
> 
> ---
> 0-DAY kernel test infrastructure                Open Source Technology Center
> https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
---
To unsubscribe from this list: send the line "unsubscribe dmaengine" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [V2] dmaengine: fsl-qdma: add NXP Layerscape qDMA engine driver support
@ 2017-12-25 17:39 kbuild test robot
  0 siblings, 0 replies; 11+ messages in thread
From: kbuild test robot @ 2017-12-25 17:39 UTC (permalink / raw)
  To: Wen He
  Cc: kbuild-all, vinod.koul, leoyang.li, dmaengine, jiafei.pan, jiaheng.fan

Hi Wen,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.15-rc5 next-20171222]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Wen-He/dmaengine-fsl-qdma-add-NXP-Layerscape-qDMA-engine-driver-support/20171225-232227
config: blackfin-allyesconfig (attached as .config)
compiler: bfin-uclinux-gcc (GCC) 7.2.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=blackfin 

All errors (new ones prefixed by >>):

   In file included from drivers/dma/fsl-qdma.c:27:0:
   drivers/dma/fsldma.h: In function 'in_be64':
>> drivers/dma/fsldma.h:202:15: error: implicit declaration of function 'in_be32'; did you mean 'in_be64'? [-Werror=implicit-function-declaration]
     return ((u64)in_be32((u32 __iomem *)addr) << 32) |
                  ^~~~~~~
                  in_be64
   drivers/dma/fsldma.h: In function 'out_be64':
>> drivers/dma/fsldma.h:208:2: error: implicit declaration of function 'out_be32'; did you mean 'out_be64'? [-Werror=implicit-function-declaration]
     out_be32((u32 __iomem *)addr, val >> 32);
     ^~~~~~~~
     out_be64
   drivers/dma/fsldma.h: In function 'in_le64':
>> drivers/dma/fsldma.h:215:15: error: implicit declaration of function 'in_le32'; did you mean 'in_le64'? [-Werror=implicit-function-declaration]
     return ((u64)in_le32((u32 __iomem *)addr + 1) << 32) |
                  ^~~~~~~
                  in_le64
   drivers/dma/fsldma.h: In function 'out_le64':
>> drivers/dma/fsldma.h:221:2: error: implicit declaration of function 'out_le32'; did you mean 'out_le64'? [-Werror=implicit-function-declaration]
     out_le32((u32 __iomem *)addr + 1, val >> 32);
     ^~~~~~~~
     out_le64
   drivers/dma/fsl-qdma.c: In function 'qdma_readl':
>> drivers/dma/fsl-qdma.c:275:9: error: implicit declaration of function 'FSL_DMA_IN'; did you mean 'FSL_DMA_SNEN'? [-Werror=implicit-function-declaration]
     return FSL_DMA_IN(qdma, addr, 32);
            ^~~~~~~~~~
            FSL_DMA_SNEN
   drivers/dma/fsl-qdma.c: In function 'qdma_writel':
>> drivers/dma/fsl-qdma.c:281:2: error: implicit declaration of function 'FSL_DMA_OUT'; did you mean 'FSL_DMA_EOL'? [-Werror=implicit-function-declaration]
     FSL_DMA_OUT(qdma, addr, val, 32);
     ^~~~~~~~~~~
     FSL_DMA_EOL
   In file included from drivers/dma/fsl-qdma.c:27:0:
   At top level:
   drivers/dma/fsldma.h:219:13: warning: 'out_le64' defined but not used [-Wunused-function]
    static void out_le64(u64 __iomem *addr, u64 val)
                ^~~~~~~~
   drivers/dma/fsldma.h:213:12: warning: 'in_le64' defined but not used [-Wunused-function]
    static u64 in_le64(const u64 __iomem *addr)
               ^~~~~~~
   drivers/dma/fsldma.h:206:13: warning: 'out_be64' defined but not used [-Wunused-function]
    static void out_be64(u64 __iomem *addr, u64 val)
                ^~~~~~~~
   drivers/dma/fsldma.h:200:12: warning: 'in_be64' defined but not used [-Wunused-function]
    static u64 in_be64(const u64 __iomem *addr)
               ^~~~~~~
   cc1: some warnings being treated as errors

vim +202 drivers/dma/fsldma.h

173acc7c Zhang Wei 2008-03-01  198  
173acc7c Zhang Wei 2008-03-01  199  #ifndef __powerpc64__
173acc7c Zhang Wei 2008-03-01  200  static u64 in_be64(const u64 __iomem *addr)
173acc7c Zhang Wei 2008-03-01  201  {
a4e6d5d3 Al Viro   2008-03-29 @202  	return ((u64)in_be32((u32 __iomem *)addr) << 32) |
a4e6d5d3 Al Viro   2008-03-29  203  		(in_be32((u32 __iomem *)addr + 1));
173acc7c Zhang Wei 2008-03-01  204  }
173acc7c Zhang Wei 2008-03-01  205  
173acc7c Zhang Wei 2008-03-01  206  static void out_be64(u64 __iomem *addr, u64 val)
173acc7c Zhang Wei 2008-03-01  207  {
a4e6d5d3 Al Viro   2008-03-29 @208  	out_be32((u32 __iomem *)addr, val >> 32);
a4e6d5d3 Al Viro   2008-03-29  209  	out_be32((u32 __iomem *)addr + 1, (u32)val);
173acc7c Zhang Wei 2008-03-01  210  }
173acc7c Zhang Wei 2008-03-01  211  
173acc7c Zhang Wei 2008-03-01  212  /* There is no asm instructions for 64 bits reverse loads and stores */
173acc7c Zhang Wei 2008-03-01  213  static u64 in_le64(const u64 __iomem *addr)
173acc7c Zhang Wei 2008-03-01  214  {
a4e6d5d3 Al Viro   2008-03-29 @215  	return ((u64)in_le32((u32 __iomem *)addr + 1) << 32) |
a4e6d5d3 Al Viro   2008-03-29  216  		(in_le32((u32 __iomem *)addr));
173acc7c Zhang Wei 2008-03-01  217  }
173acc7c Zhang Wei 2008-03-01  218  
173acc7c Zhang Wei 2008-03-01  219  static void out_le64(u64 __iomem *addr, u64 val)
173acc7c Zhang Wei 2008-03-01  220  {
a4e6d5d3 Al Viro   2008-03-29 @221  	out_le32((u32 __iomem *)addr + 1, val >> 32);
a4e6d5d3 Al Viro   2008-03-29  222  	out_le32((u32 __iomem *)addr, (u32)val);
173acc7c Zhang Wei 2008-03-01  223  }
173acc7c Zhang Wei 2008-03-01  224  #endif
173acc7c Zhang Wei 2008-03-01  225  

:::::: The code at line 202 was first introduced by commit
:::::: a4e6d5d3817ebae167e78e5957cd9e624be200c7 fix the broken annotations in fsldma

:::::: TO: Al Viro <viro@ftp.linux.org.uk>
:::::: CC: Linus Torvalds <torvalds@linux-foundation.org>
---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [V2] dmaengine: fsl-qdma: add NXP Layerscape qDMA engine driver support
@ 2017-12-25  7:39 Wen He
  0 siblings, 0 replies; 11+ messages in thread
From: Wen He @ 2017-12-25  7:39 UTC (permalink / raw)
  To: vinod.koul; +Cc: leoyang.li, dmaengine, jiafei.pan, jiaheng.fan, Wen He

Add NXP Layerscape queue direct memory access controller (qDMA) support.
This module can be found on NXP QorIQ Layerscape SoCs.

Signed-off-by: Wen He <wen.he_1@nxp.com>
---
change in v2:
	- Replace GPL V2 License details by SPDX tags
	- Replace Freescale by NXP
	- Reduce and optimize header file references
	- Replace big_endian by feature in struct fsl_qdma_engine
	- Replace struct fsl_qdma_format by struct fsl_qdma_ccdf and struct fsl_qdma_csgf
	- Remove empty line
	- Replace 'if..else' by macro 'FSL_QDMA_IN/OUT' in function qdma_readl() and qdma_writel()
	- Remove function fsl_qdma_alloc_chan_resources()
	- Replace 'prei' by 'pre'
	- Replace '-1' by '-ENOMEM' in function fsl_qdma_pre_request_enqueue_desc()
	- Fix dma pool allocation that needs to be rolled back in function fsl_qdma_request_enqueue_desc()
	- Replace function of_property_read_u32_array() by device_property_read_u32_array()
	- Add functions fsl_qdma_cleanup_vchan() and fsl_qdma_irq_exit() to ensure
	  irq and tasklets stopped
	- Replace dts node element 'channels' by 'dma-channels'
	- Replace function platform_driver_register() by module_platform_driver()

 drivers/dma/Kconfig    |   12 +
 drivers/dma/Makefile   |    1 +
 drivers/dma/fsl-qdma.c | 1117 ++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 1130 insertions(+), 0 deletions(-)
 create mode 100644 drivers/dma/fsl-qdma.c
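
(Editorial aside, not part of the submission: the memcpy capability a driver
like this registers is consumed through the generic dmaengine API roughly as
sketched below. This is a minimal illustration only; error handling is
trimmed and the function name is hypothetical.)

#include <linux/dmaengine.h>

static int qdma_memcpy_example(dma_addr_t dst, dma_addr_t src, size_t len)
{
	dma_cap_mask_t mask;
	struct dma_chan *chan;
	struct dma_async_tx_descriptor *tx;
	dma_cookie_t cookie;

	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);

	/* Grab any channel advertising memcpy capability. */
	chan = dma_request_channel(mask, NULL, NULL);
	if (!chan)
		return -ENODEV;

	tx = dmaengine_prep_dma_memcpy(chan, dst, src, len, DMA_PREP_INTERRUPT);
	if (!tx) {
		dma_release_channel(chan);
		return -EIO;
	}

	cookie = dmaengine_submit(tx);
	dma_async_issue_pending(chan);

	/* Busy-wait for completion; a real user would use a callback. */
	while (dma_async_is_tx_complete(chan, cookie, NULL, NULL) == DMA_IN_PROGRESS)
		cpu_relax();

	dma_release_channel(chan);
	return 0;
}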

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 27df3e2..20803ef 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -215,6 +215,18 @@ config FSL_EDMA
 	  multiplexing capability for DMA request sources(slot).
 	  This module can be found on Freescale Vybrid and LS-1 SoCs.
 
+config FSL_QDMA
+       tristate "NXP Layerscape qDMA engine support"
+       select DMA_ENGINE
+       select DMA_VIRTUAL_CHANNELS
+       select DMA_ENGINE_RAID
+       select ASYNC_TX_ENABLE_CHANNEL_SWITCH
+       help
+         Support the NXP Layerscape qDMA engine with command queue and legacy mode.
+         Channel virtualization is supported through enqueuing of DMA jobs to,
+         or dequeuing DMA jobs from, different work queues.
+         This module can be found on NXP Layerscape SoCs.
+
 config FSL_RAID
         tristate "Freescale RAID engine Support"
         depends on FSL_SOC && !ASYNC_TX_ENABLE_CHANNEL_SWITCH
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index b9dca8a..7a49b7b 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -32,6 +32,7 @@ obj-$(CONFIG_DW_DMAC_CORE) += dw/
 obj-$(CONFIG_EP93XX_DMA) += ep93xx_dma.o
 obj-$(CONFIG_FSL_DMA) += fsldma.o
 obj-$(CONFIG_FSL_EDMA) += fsl-edma.o
+obj-$(CONFIG_FSL_QDMA) += fsl-qdma.o
 obj-$(CONFIG_FSL_RAID) += fsl_raid.o
 obj-$(CONFIG_HSU_DMA) += hsu/
 obj-$(CONFIG_IMG_MDC_DMA) += img-mdc-dma.o
diff --git a/drivers/dma/fsl-qdma.c b/drivers/dma/fsl-qdma.c
new file mode 100644
index 0000000..23c3e02
--- /dev/null
+++ b/drivers/dma/fsl-qdma.c
@@ -0,0 +1,1117 @@
+/*
+ * Driver for NXP Layerscape Queue direct memory access controller (qDMA)
+ *
+ * Copyright 2017 NXP
+ *
+ * Author:
+ *  Jiaheng Fan <jiaheng.fan@nxp.com>
+ *  Wen He <wen.he_1@nxp.com>
+ *
+ * SPDX-License-Identifier: GPL-2.0+
+ */
+
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/delay.h>
+#include <linux/of_irq.h>
+#include <linux/of_address.h>
+#include <linux/of_platform.h>
+#include <linux/of_dma.h>
+#include <linux/dma-mapping.h>
+#include <linux/dmapool.h>
+#include <linux/dmaengine.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#include "virt-dma.h"
+#include "fsldma.h"
+
+#define FSL_QDMA_DMR			0x0
+#define FSL_QDMA_DSR			0x4
+#define FSL_QDMA_DEIER			0xe00
+#define FSL_QDMA_DEDR			0xe04
+#define FSL_QDMA_DECFDW0R		0xe10
+#define FSL_QDMA_DECFDW1R		0xe14
+#define FSL_QDMA_DECFDW2R		0xe18
+#define FSL_QDMA_DECFDW3R		0xe1c
+#define FSL_QDMA_DECFQIDR		0xe30
+#define FSL_QDMA_DECBR			0xe34
+
+#define FSL_QDMA_BCQMR(x)		(0xc0 + 0x100 * (x))
+#define FSL_QDMA_BCQSR(x)		(0xc4 + 0x100 * (x))
+#define FSL_QDMA_BCQEDPA_SADDR(x)	(0xc8 + 0x100 * (x))
+#define FSL_QDMA_BCQDPA_SADDR(x)	(0xcc + 0x100 * (x))
+#define FSL_QDMA_BCQEEPA_SADDR(x)	(0xd0 + 0x100 * (x))
+#define FSL_QDMA_BCQEPA_SADDR(x)	(0xd4 + 0x100 * (x))
+#define FSL_QDMA_BCQIER(x)		(0xe0 + 0x100 * (x))
+#define FSL_QDMA_BCQIDR(x)		(0xe4 + 0x100 * (x))
+
+#define FSL_QDMA_SQDPAR			0x80c
+#define FSL_QDMA_SQEPAR			0x814
+#define FSL_QDMA_BSQMR			0x800
+#define FSL_QDMA_BSQSR			0x804
+#define FSL_QDMA_BSQICR			0x828
+#define FSL_QDMA_CQMR			0xa00
+#define FSL_QDMA_CQDSCR1		0xa08
+#define FSL_QDMA_CQDSCR2                0xa0c
+#define FSL_QDMA_CQIER			0xa10
+#define FSL_QDMA_CQEDR			0xa14
+#define FSL_QDMA_SQCCMR			0xa20
+
+#define FSL_QDMA_SQICR_ICEN
+
+#define FSL_QDMA_CQIDR_CQT		0xff000000
+#define FSL_QDMA_CQIDR_SQPE		0x800000
+#define FSL_QDMA_CQIDR_SQT		0x8000
+
+#define FSL_QDMA_BCQIER_CQTIE		0x8000
+#define FSL_QDMA_BCQIER_CQPEIE		0x800000
+#define FSL_QDMA_BSQICR_ICEN		0x80000000
+#define FSL_QDMA_BSQICR_ICST(x)		((x) << 16)
+#define FSL_QDMA_CQIER_MEIE		0x80000000
+#define FSL_QDMA_CQIER_TEIE		0x1
+#define FSL_QDMA_SQCCMR_ENTER_WM	0x200000
+
+#define FSL_QDMA_QUEUE_MAX		8
+
+#define FSL_QDMA_BCQMR_EN		0x80000000
+#define FSL_QDMA_BCQMR_EI		0x40000000
+#define FSL_QDMA_BCQMR_CD_THLD(x)	((x) << 20)
+#define FSL_QDMA_BCQMR_CQ_SIZE(x)	((x) << 16)
+
+#define FSL_QDMA_BCQSR_QF		0x10000
+#define FSL_QDMA_BCQSR_XOFF		0x1
+
+#define FSL_QDMA_BSQMR_EN		0x80000000
+#define FSL_QDMA_BSQMR_DI		0x40000000
+#define FSL_QDMA_BSQMR_CQ_SIZE(x)	((x) << 16)
+
+#define FSL_QDMA_BSQSR_QE		0x20000
+
+#define FSL_QDMA_DMR_DQD		0x40000000
+#define FSL_QDMA_DSR_DB			0x80000000
+
+#define FSL_QDMA_BASE_BUFFER_SIZE	96
+#define FSL_QDMA_EXPECT_SG_ENTRY_NUM	16
+#define FSL_QDMA_CIRCULAR_DESC_SIZE_MIN	64
+#define FSL_QDMA_CIRCULAR_DESC_SIZE_MAX	16384
+#define FSL_QDMA_QUEUE_NUM_MAX		8
+
+#define FSL_QDMA_CMD_RWTTYPE		0x4
+#define FSL_QDMA_CMD_LWC                0x2
+
+#define FSL_QDMA_CMD_RWTTYPE_OFFSET	28
+#define FSL_QDMA_CMD_NS_OFFSET		27
+#define FSL_QDMA_CMD_DQOS_OFFSET	24
+#define FSL_QDMA_CMD_WTHROTL_OFFSET	20
+#define FSL_QDMA_CMD_DSEN_OFFSET	19
+#define FSL_QDMA_CMD_LWC_OFFSET		16
+
+#define FSL_QDMA_E_SG_TABLE		1
+#define FSL_QDMA_E_DATA_BUFFER		0
+#define FSL_QDMA_F_LAST_ENTRY		1
+
+#define QDMA_CCDF_STATUS		20
+#define QDMA_CCDF_OFFSET		20
+#define QDMA_CCDF_MASK			GENMASK(28, 20)
+#define QDMA_CCDF_FOTMAT		BIT(29)
+#define QDMA_CCDF_SER			BIT(30)
+
+#define QDMA_SG_FIN			BIT(30)
+#define QDMA_SG_EXT			BIT(31)
+#define QDMA_SG_LEN_MASK		GENMASK(29, 0)
+
+u64 pre_addr, pre_queue;
+
+/* qDMA Command Descriptor Formats */
+
+struct fsl_qdma_format {
+	__le32 status; /* ser, status */
+	__le32 cfg;	/* format, offset */
+	union {
+		struct {
+			__le32 addr_lo;	/* low 32-bits of 40-bit address */
+			u8 addr_hi;	/* high 8-bits of 40-bit address */
+			u8 __reserved1[2];
+			u8 cfg8b_w1; /* dd, queue */
+		} __packed;
+		__le64 data;
+	};
+} __packed;
+
+static inline u64
+qdma_ccdf_addr_get64(const struct fsl_qdma_format *ccdf)
+{
+	return le64_to_cpu(ccdf->data) & 0xffffffffffLLU;
+}
+
+static inline void
+qdma_desc_addr_set64(struct fsl_qdma_format *ccdf, u64 addr)
+{
+	ccdf->addr_hi = upper_32_bits(addr);
+	ccdf->addr_lo = cpu_to_le32(lower_32_bits(addr));
+}
+
+static inline u64
+qdma_ccdf_get_queue(const struct fsl_qdma_format *ccdf)
+{
+	return ccdf->cfg8b_w1 & 0xff;
+}
+
+static inline int
+qdma_ccdf_get_offset(const struct fsl_qdma_format *ccdf)
+{
+	return (le32_to_cpu(ccdf->cfg) & QDMA_CCDF_MASK) >> QDMA_CCDF_OFFSET;
+}
+
+static inline void
+qdma_ccdf_set_format(struct fsl_qdma_format *ccdf, int offset)
+{
+	ccdf->cfg = cpu_to_le32(QDMA_CCDF_FOTMAT | offset);
+}
+
+static inline int
+qdma_ccdf_get_status(const struct fsl_qdma_format *ccdf)
+{
+	return (le32_to_cpu(ccdf->status) & QDMA_CCDF_MASK) >> QDMA_CCDF_STATUS;
+}
+
+static inline void
+qdma_ccdf_set_ser(struct fsl_qdma_format *ccdf, int status)
+{
+	ccdf->status = cpu_to_le32(QDMA_CCDF_SER | status);
+}
+
+static inline void qdma_csgf_set_len(struct fsl_qdma_format *csgf, int len)
+{
+	csgf->cfg = cpu_to_le32(len & QDMA_SG_LEN_MASK);
+}
+
+static inline void qdma_csgf_set_f(struct fsl_qdma_format *csgf, int len)
+{
+	csgf->cfg = cpu_to_le32(QDMA_SG_FIN | (len & QDMA_SG_LEN_MASK));
+}
+
+static inline void qdma_csgf_set_e(struct fsl_qdma_format *csgf, int len)
+{
+	csgf->cfg = cpu_to_le32(QDMA_SG_EXT | (len & QDMA_SG_LEN_MASK));
+}
+
+/* qDMA Source Descriptor Format */
+struct fsl_qdma_sdf {
+	__le32 rev3;
+	__le32 cfg; /* rev4, bit[0-11] - ssd, bit[12-23] sss */
+	__le32 rev5;
+	__le32 cmd;
+} __packed;
+
+/* qDMA Destination Descriptor Format */
+struct fsl_qdma_ddf {
+	__le32 rev1;
+	__le32 cfg; /* rev2, bit[0-11] - dsd, bit[12-23] - dss */
+	__le32 rev3;
+	__le32 cmd;
+} __packed;
+
+struct fsl_qdma_chan {
+	struct virt_dma_chan		vchan;
+	struct virt_dma_desc		vdesc;
+	enum dma_status			status;
+	u32				slave_id;
+	struct fsl_qdma_engine		*qdma;
+	struct fsl_qdma_queue		*queue;
+	struct list_head		qcomp;
+};
+
+struct fsl_qdma_queue {
+	struct fsl_qdma_format	*virt_head;
+	struct fsl_qdma_format	*virt_tail;
+	struct list_head	comp_used;
+	struct list_head	comp_free;
+	struct dma_pool		*comp_pool;
+	struct dma_pool		*sg_pool;
+	spinlock_t		queue_lock;
+	dma_addr_t		bus_addr;
+	u32                     n_cq;
+	u32			id;
+	struct fsl_qdma_format	*cq;
+};
+
+struct fsl_qdma_sg {
+	dma_addr_t		bus_addr;
+	void			*virt_addr;
+};
+
+struct fsl_qdma_comp {
+	dma_addr_t              bus_addr;
+	void			*virt_addr;
+	struct fsl_qdma_chan	*qchan;
+	struct fsl_qdma_sg	*sg_block;
+	struct virt_dma_desc    vdesc;
+	struct list_head	list;
+	u32			sg_block_src;
+	u32			sg_block_dst;
+};
+
+struct fsl_qdma_engine {
+	struct dma_device	dma_dev;
+	void __iomem		*ctrl_base;
+	void __iomem            *status_base;
+	void __iomem		*block_base;
+	u32			n_chans;
+	u32			n_queues;
+	struct mutex            fsl_qdma_mutex;
+	int			error_irq;
+	int			queue_irq;
+	bool			feature;
+	struct fsl_qdma_queue	*queue;
+	struct fsl_qdma_queue	*status;
+	struct fsl_qdma_chan	chans[];
+
+};
+
+static u32 qdma_readl(struct fsl_qdma_engine *qdma, void __iomem *addr)
+{
+	return FSL_DMA_IN(qdma, addr, 32);
+}
+
+static void qdma_writel(struct fsl_qdma_engine *qdma, u32 val,
+						void __iomem *addr)
+{
+	FSL_DMA_OUT(qdma, addr, val, 32);
+}
+
+static struct fsl_qdma_chan *to_fsl_qdma_chan(struct dma_chan *chan)
+{
+	return container_of(chan, struct fsl_qdma_chan, vchan.chan);
+}
+
+static struct fsl_qdma_comp *to_fsl_qdma_comp(struct virt_dma_desc *vd)
+{
+	return container_of(vd, struct fsl_qdma_comp, vdesc);
+}
+
+static void fsl_qdma_free_chan_resources(struct dma_chan *chan)
+{
+	struct fsl_qdma_chan *fsl_chan = to_fsl_qdma_chan(chan);
+	unsigned long flags;
+	LIST_HEAD(head);
+
+	spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
+	vchan_get_all_descriptors(&fsl_chan->vchan, &head);
+	spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
+
+	vchan_dma_desc_free_list(&fsl_chan->vchan, &head);
+}
+
+static void fsl_qdma_comp_fill_memcpy(struct fsl_qdma_comp *fsl_comp,
+					dma_addr_t dst, dma_addr_t src, u32 len)
+{
+	struct fsl_qdma_format *ccdf, *csgf_desc, *csgf_src, *csgf_dest;
+	struct fsl_qdma_sdf *sdf;
+	struct fsl_qdma_ddf *ddf;
+
+	ccdf = (struct fsl_qdma_format *)fsl_comp->virt_addr;
+	csgf_desc = (struct fsl_qdma_format *)fsl_comp->virt_addr + 1;
+	csgf_src = (struct fsl_qdma_format *)fsl_comp->virt_addr + 2;
+	csgf_dest = (struct fsl_qdma_format *)fsl_comp->virt_addr + 3;
+	sdf = (struct fsl_qdma_sdf *)fsl_comp->virt_addr + 4;
+	ddf = (struct fsl_qdma_ddf *)fsl_comp->virt_addr + 5;
+
+	memset(fsl_comp->virt_addr, 0, FSL_QDMA_BASE_BUFFER_SIZE);
+	/* Head Command Descriptor(Frame Descriptor) */
+	qdma_desc_addr_set64(ccdf, fsl_comp->bus_addr + 16);
+	qdma_ccdf_set_format(ccdf, qdma_ccdf_get_offset(ccdf));
+	qdma_ccdf_set_ser(ccdf, qdma_ccdf_get_status(ccdf));
+	/* Status notification is enqueued to status queue. */
+	/* Compound Command Descriptor(Frame List Table) */
+	qdma_desc_addr_set64(csgf_desc, fsl_comp->bus_addr + 64);
+	/* It must be 32 as Compound S/G Descriptor */
+	qdma_csgf_set_len(csgf_desc, 32);
+	qdma_desc_addr_set64(csgf_src, src);
+	qdma_csgf_set_len(csgf_src, len);
+	qdma_desc_addr_set64(csgf_dest, dst);
+	qdma_csgf_set_len(csgf_dest, len);
+	/* This entry is the last entry. */
+	qdma_csgf_set_f(csgf_dest, len);
+	/* Descriptor Buffer */
+	sdf->cmd = cpu_to_le32(
+			FSL_QDMA_CMD_RWTTYPE << FSL_QDMA_CMD_RWTTYPE_OFFSET);
+	ddf->cmd = cpu_to_le32(
+			FSL_QDMA_CMD_RWTTYPE << FSL_QDMA_CMD_RWTTYPE_OFFSET);
+	ddf->cmd |= cpu_to_le32(
+			FSL_QDMA_CMD_LWC << FSL_QDMA_CMD_LWC_OFFSET);
+}
+
+/*
+ * Pre-request full command descriptor for enqueue.
+ */
+static int fsl_qdma_pre_request_enqueue_desc(struct fsl_qdma_queue *queue)
+{
+	struct fsl_qdma_comp *comp_temp;
+	int i;
+
+	for (i = 0; i < queue->n_cq; i++) {
+		comp_temp = kzalloc(sizeof(*comp_temp), GFP_KERNEL);
+		if (!comp_temp)
+			return -ENOMEM;
+		comp_temp->virt_addr = dma_pool_alloc(queue->comp_pool,
+						      GFP_NOWAIT,
+						      &comp_temp->bus_addr);
+		if (!comp_temp->virt_addr)
+			return -ENOMEM;
+		list_add_tail(&comp_temp->list, &queue->comp_free);
+	}
+
+	return 0;
+}
+
+/*
+ * Request a command descriptor for enqueue.
+ */
+static struct fsl_qdma_comp *fsl_qdma_request_enqueue_desc(
+					struct fsl_qdma_chan *fsl_chan,
+					unsigned int dst_nents,
+					unsigned int src_nents)
+{
+	struct fsl_qdma_comp *comp_temp;
+	struct fsl_qdma_sg *sg_block;
+	struct fsl_qdma_queue *queue = fsl_chan->queue;
+	unsigned long flags;
+	unsigned int dst_sg_entry_block, src_sg_entry_block, sg_entry_total, i;
+
+	spin_lock_irqsave(&queue->queue_lock, flags);
+	if (list_empty(&queue->comp_free)) {
+		spin_unlock_irqrestore(&queue->queue_lock, flags);
+		comp_temp = kzalloc(sizeof(*comp_temp), GFP_KERNEL);
+		if (!comp_temp)
+			return NULL;
+		comp_temp->virt_addr = dma_pool_alloc(queue->comp_pool,
+						      GFP_NOWAIT,
+						      &comp_temp->bus_addr);
+		if (!comp_temp->virt_addr) {
+			kfree(comp_temp);
+			return NULL;
+		}
+
+	} else {
+		comp_temp = list_first_entry(&queue->comp_free,
+					     struct fsl_qdma_comp,
+					     list);
+		list_del(&comp_temp->list);
+		spin_unlock_irqrestore(&queue->queue_lock, flags);
+	}
+
+	if (dst_nents != 0)
+		dst_sg_entry_block = dst_nents /
+					(FSL_QDMA_EXPECT_SG_ENTRY_NUM - 1) + 1;
+	else
+		dst_sg_entry_block = 0;
+
+	if (src_nents != 0)
+		src_sg_entry_block = src_nents /
+					(FSL_QDMA_EXPECT_SG_ENTRY_NUM - 1) + 1;
+	else
+		src_sg_entry_block = 0;
+
+	sg_entry_total = dst_sg_entry_block + src_sg_entry_block;
+	if (sg_entry_total) {
+		sg_block = kzalloc(sizeof(*sg_block) *
+					      sg_entry_total,
+					      GFP_KERNEL);
+		if (!sg_block) {
+			dma_pool_free(queue->comp_pool,
+					comp_temp->virt_addr,
+					comp_temp->bus_addr);
+			return NULL;
+		}
+		comp_temp->sg_block = sg_block;
+		for (i = 0; i < sg_entry_total; i++) {
+			sg_block->virt_addr = dma_pool_alloc(queue->sg_pool,
+							GFP_NOWAIT,
+							&sg_block->bus_addr);
+			if (!sg_block->virt_addr) {
+				kfree(comp_temp->sg_block);
+				dma_pool_free(queue->comp_pool,
+						comp_temp->virt_addr,
+						comp_temp->bus_addr);
+				return NULL;
+			}
+			memset(sg_block->virt_addr, 0,
+					FSL_QDMA_EXPECT_SG_ENTRY_NUM * 16);
+			sg_block++;
+		}
+	}
+
+	comp_temp->sg_block_src = src_sg_entry_block;
+	comp_temp->sg_block_dst = dst_sg_entry_block;
+	comp_temp->qchan = fsl_chan;
+
+	return comp_temp;
+}
+
+static struct fsl_qdma_queue *fsl_qdma_alloc_queue_resources(
+					struct platform_device *pdev,
+					unsigned int queue_num)
+{
+	struct fsl_qdma_queue *queue_head, *queue_temp;
+	int ret, len, i;
+	unsigned int queue_size[FSL_QDMA_QUEUE_MAX];
+
+	if (queue_num > FSL_QDMA_QUEUE_MAX)
+		queue_num = FSL_QDMA_QUEUE_MAX;
+	len = sizeof(*queue_head) * queue_num;
+	queue_head = devm_kzalloc(&pdev->dev, len, GFP_KERNEL);
+	if (!queue_head)
+		return NULL;
+
+	ret = device_property_read_u32_array(&pdev->dev, "queue-sizes",
+					queue_size, queue_num);
+	if (ret) {
+		dev_err(&pdev->dev, "Can't get queue-sizes.\n");
+		return NULL;
+	}
+
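+	/* Set up a command queue ring and its descriptor pools per queue. */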
+	for (i = 0; i < queue_num; i++) {
+		if (queue_size[i] > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX
+			|| queue_size[i] < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
+			dev_err(&pdev->dev, "queue-sizes out of range.\n");
+			return NULL;
+		}
+		queue_temp = queue_head + i;
+		queue_temp->cq = dma_alloc_coherent(&pdev->dev,
+						sizeof(struct fsl_qdma_format) *
+						queue_size[i],
+						&queue_temp->bus_addr,
+						GFP_KERNEL);
+		if (!queue_temp->cq)
+			return NULL;
+		queue_temp->n_cq = queue_size[i];
+		queue_temp->id = i;
+		queue_temp->virt_head = queue_temp->cq;
+		queue_temp->virt_tail = queue_temp->cq;
+		/*
+		 * The dma pool for queue command buffer
+		 */
+		queue_temp->comp_pool = dma_pool_create("comp_pool",
+						&pdev->dev,
+						FSL_QDMA_BASE_BUFFER_SIZE,
+						16, 0);
+		if (!queue_temp->comp_pool) {
+			dma_free_coherent(&pdev->dev,
+						sizeof(struct fsl_qdma_format) *
+						queue_size[i],
+						queue_temp->cq,
+						queue_temp->bus_addr);
+			return NULL;
+		}
+		/*
+		 * The dma pool for scatter/gather entry buffers
+		 */
+		queue_temp->sg_pool = dma_pool_create("sg_pool",
+					&pdev->dev,
+					FSL_QDMA_EXPECT_SG_ENTRY_NUM * 16,
+					64, 0);
+		if (!queue_temp->sg_pool) {
+			dma_free_coherent(&pdev->dev,
+						sizeof(struct fsl_qdma_format) *
+						queue_size[i],
+						queue_temp->cq,
+						queue_temp->bus_addr);
+			dma_pool_destroy(queue_temp->comp_pool);
+			return NULL;
+		}
+		/*
+		 * Lists for tracking used and free command descriptors
+		 */
+		INIT_LIST_HEAD(&queue_temp->comp_used);
+		INIT_LIST_HEAD(&queue_temp->comp_free);
+		spin_lock_init(&queue_temp->queue_lock);
+	}
+
+	return queue_head;
+}
+
+static struct fsl_qdma_queue *fsl_qdma_prep_status_queue(
+						struct platform_device *pdev)
+{
+	struct device_node *np = pdev->dev.of_node;
+	struct fsl_qdma_queue *status_head;
+	unsigned int status_size;
+	int ret;
+
+	ret = of_property_read_u32(np, "status-sizes", &status_size);
+	if (ret) {
+		dev_err(&pdev->dev, "Can't get status-sizes.\n");
+		return NULL;
+	}
+	if (status_size > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX
+			|| status_size < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
+		dev_err(&pdev->dev, "status-sizes out of range.\n");
+		return NULL;
+	}
+	status_head = devm_kzalloc(&pdev->dev, sizeof(*status_head),
+								GFP_KERNEL);
+	if (!status_head)
+		return NULL;
+
+	/*
+	 * Buffer for the status queue ring
+	 */
+	status_head->cq = dma_alloc_coherent(&pdev->dev,
+						sizeof(struct fsl_qdma_format) *
+						status_size,
+						&status_head->bus_addr,
+						GFP_KERNEL);
+	if (!status_head->cq)
+		return NULL;
+	status_head->n_cq = status_size;
+	status_head->virt_head = status_head->cq;
+	status_head->virt_tail = status_head->cq;
+	status_head->comp_pool = NULL;
+
+	return status_head;
+}
+
+static int fsl_qdma_halt(struct fsl_qdma_engine *fsl_qdma)
+{
+	void __iomem *ctrl = fsl_qdma->ctrl_base;
+	void __iomem *block = fsl_qdma->block_base;
+	int i, count = 5;
+	u32 reg;
+
+	/* Disable the command queue and wait for idle state. */
+	reg = qdma_readl(fsl_qdma, ctrl + FSL_QDMA_DMR);
+	reg |= FSL_QDMA_DMR_DQD;
+	qdma_writel(fsl_qdma, reg, ctrl + FSL_QDMA_DMR);
+	for (i = 0; i < FSL_QDMA_QUEUE_NUM_MAX; i++)
+		qdma_writel(fsl_qdma, 0, block + FSL_QDMA_BCQMR(i));
+
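+	/* Poll the busy bit in DSR, giving up after a few 100us waits. */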
+	while (1) {
+		reg = qdma_readl(fsl_qdma, ctrl + FSL_QDMA_DSR);
+		if (!(reg & FSL_QDMA_DSR_DB))
+			break;
+		if (count-- < 0)
+			return -EBUSY;
+		udelay(100);
+	}
+
+	/* Disable status queue. */
+	qdma_writel(fsl_qdma, 0, block + FSL_QDMA_BSQMR);
+
+	/*
+	 * Clear the command queue interrupt detect register for all queues.
+	 */
+	qdma_writel(fsl_qdma, 0xffffffff, block + FSL_QDMA_BCQIDR(0));
+
+	return 0;
+}
+
+static int fsl_qdma_queue_transfer_complete(struct fsl_qdma_engine *fsl_qdma)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue;
+	struct fsl_qdma_queue *fsl_status = fsl_qdma->status;
+	struct fsl_qdma_queue *temp_queue;
+	struct fsl_qdma_comp *fsl_comp;
+	struct fsl_qdma_format *status_addr;
+	struct fsl_qdma_format *csgf_src;
+	void __iomem *block = fsl_qdma->block_base;
+	u32 reg, i;
+	bool duplicate, duplicate_handle;
+
+	while (1) {
+		duplicate = 0;
+		duplicate_handle = 0;
+		reg = qdma_readl(fsl_qdma, block + FSL_QDMA_BSQSR);
+		if (reg & FSL_QDMA_BSQSR_QE)
+			return 0;
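+		/*
+		 * The status queue may report the same completion more than
+		 * once; remember the last seen (queue, address) pair so that
+		 * repeats are only acknowledged, not completed again.
+		 */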
+		status_addr = fsl_status->virt_head;
+		if (qdma_ccdf_get_queue(status_addr) == pre_queue &&
+			qdma_ccdf_addr_get64(status_addr) == pre_addr)
+			duplicate = 1;
+		i = qdma_ccdf_get_queue(status_addr);
+		pre_queue = qdma_ccdf_get_queue(status_addr);
+		pre_addr = qdma_ccdf_addr_get64(status_addr);
+		temp_queue = fsl_queue + i;
+		spin_lock(&temp_queue->queue_lock);
+		if (list_empty(&temp_queue->comp_used)) {
+			if (duplicate)
+				duplicate_handle = 1;
+			else {
+				spin_unlock(&temp_queue->queue_lock);
+				return -1;
+			}
+		} else {
+			fsl_comp = list_first_entry(&temp_queue->comp_used,
+							struct fsl_qdma_comp,
+							list);
+			csgf_src = (struct fsl_qdma_format *)fsl_comp->virt_addr
+							   + 2;
+			if (fsl_comp->bus_addr + 16 != pre_addr) {
+				if (duplicate)
+					duplicate_handle = 1;
+				else {
+					spin_unlock(&temp_queue->queue_lock);
+					return -1;
+				}
+			}
+		}
+
+		if (duplicate_handle) {
+			reg = qdma_readl(fsl_qdma, block + FSL_QDMA_BSQMR);
+			reg |= FSL_QDMA_BSQMR_DI;
+			qdma_desc_addr_set64(status_addr, 0x0);
+			fsl_status->virt_head++;
+			if (fsl_status->virt_head == fsl_status->cq
+						   + fsl_status->n_cq)
+				fsl_status->virt_head = fsl_status->cq;
+			qdma_writel(fsl_qdma, reg, block + FSL_QDMA_BSQMR);
+			spin_unlock(&temp_queue->queue_lock);
+			continue;
+		}
+		list_del(&fsl_comp->list);
+
+		reg = qdma_readl(fsl_qdma, block + FSL_QDMA_BSQMR);
+		reg |= FSL_QDMA_BSQMR_DI;
+		qdma_desc_addr_set64(status_addr, 0x0);
+		fsl_status->virt_head++;
+		if (fsl_status->virt_head == fsl_status->cq + fsl_status->n_cq)
+			fsl_status->virt_head = fsl_status->cq;
+		qdma_writel(fsl_qdma, reg, block + FSL_QDMA_BSQMR);
+		spin_unlock(&temp_queue->queue_lock);
+
+		spin_lock(&fsl_comp->qchan->vchan.lock);
+		vchan_cookie_complete(&fsl_comp->vdesc);
+		fsl_comp->qchan->status = DMA_COMPLETE;
+		spin_unlock(&fsl_comp->qchan->vchan.lock);
+	}
+	return 0;
+}
+
+static irqreturn_t fsl_qdma_error_handler(int irq, void *dev_id)
+{
+	struct fsl_qdma_engine *fsl_qdma = dev_id;
+	unsigned int intr;
+	void __iomem *status = fsl_qdma->status_base;
+
+	intr = qdma_readl(fsl_qdma, status + FSL_QDMA_DEDR);
+
+	if (intr)
+		dev_err(fsl_qdma->dma_dev.dev, "DMA transaction error!\n");
+
+	qdma_writel(fsl_qdma, 0xffffffff, status + FSL_QDMA_DEDR);
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t fsl_qdma_queue_handler(int irq, void *dev_id)
+{
+	struct fsl_qdma_engine *fsl_qdma = dev_id;
+	unsigned int intr, reg;
+	void __iomem *block = fsl_qdma->block_base;
+	void __iomem *ctrl = fsl_qdma->ctrl_base;
+
+	intr = qdma_readl(fsl_qdma, block + FSL_QDMA_BCQIDR(0));
+
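+	/* A set SQT bit means the status queue has completions to reap. */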
+	if ((intr & FSL_QDMA_CQIDR_SQT) != 0)
+		intr = fsl_qdma_queue_transfer_complete(fsl_qdma);
+
+	if (intr != 0) {
+		reg = qdma_readl(fsl_qdma, ctrl + FSL_QDMA_DMR);
+		reg |= FSL_QDMA_DMR_DQD;
+		qdma_writel(fsl_qdma, reg, ctrl + FSL_QDMA_DMR);
+		qdma_writel(fsl_qdma, 0, block + FSL_QDMA_BCQIER(0));
+		dev_err(fsl_qdma->dma_dev.dev, "QDMA: status err!\n");
+	}
+
+	qdma_writel(fsl_qdma, 0xffffffff, block + FSL_QDMA_BCQIDR(0));
+
+	return IRQ_HANDLED;
+}
+
+static int
+fsl_qdma_irq_init(struct platform_device *pdev,
+		  struct fsl_qdma_engine *fsl_qdma)
+{
+	int ret;
+
+	fsl_qdma->error_irq = platform_get_irq_byname(pdev,
+							"qdma-error");
+	if (fsl_qdma->error_irq < 0) {
+		dev_err(&pdev->dev, "Can't get qdma controller irq.\n");
+		return fsl_qdma->error_irq;
+	}
+
+	fsl_qdma->queue_irq = platform_get_irq_byname(pdev, "qdma-queue");
+	if (fsl_qdma->queue_irq < 0) {
+		dev_err(&pdev->dev, "Can't get qdma queue irq.\n");
+		return fsl_qdma->queue_irq;
+	}
+
+	ret = devm_request_irq(&pdev->dev, fsl_qdma->error_irq,
+			fsl_qdma_error_handler, 0, "qDMA error", fsl_qdma);
+	if (ret) {
+		dev_err(&pdev->dev, "Can't register qDMA controller IRQ.\n");
+		return  ret;
+	}
+	ret = devm_request_irq(&pdev->dev, fsl_qdma->queue_irq,
+			fsl_qdma_queue_handler, 0, "qDMA queue", fsl_qdma);
+	if (ret) {
+		dev_err(&pdev->dev, "Can't register qDMA queue IRQ.\n");
+		return  ret;
+	}
+
+	return 0;
+}
+
+static void fsl_qdma_irq_exit(
+		struct platform_device *pdev, struct fsl_qdma_engine *fsl_qdma)
+{
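+	/* A line shared between queue and error must only be freed once. */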
+	if (fsl_qdma->queue_irq == fsl_qdma->error_irq) {
+		devm_free_irq(&pdev->dev, fsl_qdma->queue_irq, fsl_qdma);
+	} else {
+		devm_free_irq(&pdev->dev, fsl_qdma->queue_irq, fsl_qdma);
+		devm_free_irq(&pdev->dev, fsl_qdma->error_irq, fsl_qdma);
+	}
+}
+
+static int fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue;
+	struct fsl_qdma_queue *temp;
+	void __iomem *ctrl = fsl_qdma->ctrl_base;
+	void __iomem *status = fsl_qdma->status_base;
+	void __iomem *block = fsl_qdma->block_base;
+	int i, ret;
+	u32 reg;
+
+	/* Try to halt the qDMA engine first. */
+	ret = fsl_qdma_halt(fsl_qdma);
+	if (ret) {
+		dev_err(fsl_qdma->dma_dev.dev, "DMA halt failed!\n");
+		return ret;
+	}
+
+	/*
+	 * Clear the command queue interrupt detect register for all queues.
+	 */
+	qdma_writel(fsl_qdma, 0xffffffff, block + FSL_QDMA_BCQIDR(0));
+
+	for (i = 0; i < fsl_qdma->n_queues; i++) {
+		temp = fsl_queue + i;
+		/*
+		 * Initialize Command Queue registers to point to the first
+		 * command descriptor in memory.
+		 * Dequeue Pointer Address Registers
+		 * Enqueue Pointer Address Registers
+		 */
+		qdma_writel(fsl_qdma, temp->bus_addr,
+				block + FSL_QDMA_BCQDPA_SADDR(i));
+		qdma_writel(fsl_qdma, temp->bus_addr,
+				block + FSL_QDMA_BCQEPA_SADDR(i));
+
+		/* Initialize the queue mode. */
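+		/* CD_THLD and CQ_SIZE are derived from log2 of the ring depth. */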
+		reg = FSL_QDMA_BCQMR_EN;
+		reg |= FSL_QDMA_BCQMR_CD_THLD(ilog2(temp->n_cq)-4);
+		reg |= FSL_QDMA_BCQMR_CQ_SIZE(ilog2(temp->n_cq)-6);
+		qdma_writel(fsl_qdma, reg, block + FSL_QDMA_BCQMR(i));
+	}
+
+	/*
+	 * Workaround for erratum ERR010812: enable XOFF by setting
+	 * SQCCMR ENTER_WM to 0x20 so that enqueue rejections are avoided.
+	 */
+	qdma_writel(fsl_qdma, FSL_QDMA_SQCCMR_ENTER_WM,
+			      block + FSL_QDMA_SQCCMR);
+	/*
+	 * Initialize status queue registers to point to the first
+	 * command descriptor in memory.
+	 * Dequeue Pointer Address Registers
+	 * Enqueue Pointer Address Registers
+	 */
+	qdma_writel(fsl_qdma, fsl_qdma->status->bus_addr,
+					block + FSL_QDMA_SQEPAR);
+	qdma_writel(fsl_qdma, fsl_qdma->status->bus_addr,
+					block + FSL_QDMA_SQDPAR);
+	/* Initialize status queue interrupt. */
+	qdma_writel(fsl_qdma, FSL_QDMA_BCQIER_CQTIE,
+			      block + FSL_QDMA_BCQIER(0));
+	qdma_writel(fsl_qdma, FSL_QDMA_BSQICR_ICEN | FSL_QDMA_BSQICR_ICST(5)
+						   | 0x8000,
+			      block + FSL_QDMA_BSQICR);
+	qdma_writel(fsl_qdma, FSL_QDMA_CQIER_MEIE | FSL_QDMA_CQIER_TEIE,
+			      block + FSL_QDMA_CQIER);
+	/* Initialize controller interrupt register. */
+	qdma_writel(fsl_qdma, 0xffffffff, status + FSL_QDMA_DEDR);
+	qdma_writel(fsl_qdma, 0xffffffff, status + FSL_QDMA_DEIER);
+
+	/* Initialize the status queue mode. */
+	reg = FSL_QDMA_BSQMR_EN;
+	reg |= FSL_QDMA_BSQMR_CQ_SIZE(ilog2(fsl_qdma->status->n_cq)-6);
+	qdma_writel(fsl_qdma, reg, block + FSL_QDMA_BSQMR);
+
+	reg = qdma_readl(fsl_qdma, ctrl + FSL_QDMA_DMR);
+	reg &= ~FSL_QDMA_DMR_DQD;
+	qdma_writel(fsl_qdma, reg, ctrl + FSL_QDMA_DMR);
+
+	return 0;
+}
+
+static struct dma_async_tx_descriptor *
+fsl_qdma_prep_memcpy(struct dma_chan *chan, dma_addr_t dst,
+		dma_addr_t src, size_t len, unsigned long flags)
+{
+	struct fsl_qdma_chan *fsl_chan = to_fsl_qdma_chan(chan);
+	struct fsl_qdma_comp *fsl_comp;
+
+	fsl_comp = fsl_qdma_request_enqueue_desc(fsl_chan, 0, 0);
+	if (!fsl_comp)
+		return NULL;
+	fsl_qdma_comp_fill_memcpy(fsl_comp, dst, src, len);
+
+	return vchan_tx_prep(&fsl_chan->vchan, &fsl_comp->vdesc, flags);
+}
+
+static void fsl_qdma_enqueue_desc(struct fsl_qdma_chan *fsl_chan)
+{
+	void __iomem *block = fsl_chan->qdma->block_base;
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+	struct fsl_qdma_comp *fsl_comp;
+	struct virt_dma_desc *vdesc;
+	u32 reg;
+
+	reg = qdma_readl(fsl_chan->qdma, block + FSL_QDMA_BCQSR(fsl_queue->id));
+	if (reg & (FSL_QDMA_BCQSR_QF | FSL_QDMA_BCQSR_XOFF))
+		return;
+	vdesc = vchan_next_desc(&fsl_chan->vchan);
+	if (!vdesc)
+		return;
+	list_del(&vdesc->node);
+	fsl_comp = to_fsl_qdma_comp(vdesc);
+
+	memcpy(fsl_queue->virt_head++, fsl_comp->virt_addr, 16);
+	if (fsl_queue->virt_head == fsl_queue->cq + fsl_queue->n_cq)
+		fsl_queue->virt_head = fsl_queue->cq;
+
+	list_add_tail(&fsl_comp->list, &fsl_queue->comp_used);
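+	/* Order the descriptor copy before the enqueue trigger below. */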
+	barrier();
+	reg = qdma_readl(fsl_chan->qdma, block + FSL_QDMA_BCQMR(fsl_queue->id));
+	reg |= FSL_QDMA_BCQMR_EI;
+	qdma_writel(fsl_chan->qdma, reg, block + FSL_QDMA_BCQMR(fsl_queue->id));
+	fsl_chan->status = DMA_IN_PROGRESS;
+}
+
+static enum dma_status fsl_qdma_tx_status(struct dma_chan *chan,
+		dma_cookie_t cookie, struct dma_tx_state *txstate)
+{
+	return dma_cookie_status(chan, cookie, txstate);
+}
+
+static void fsl_qdma_free_desc(struct virt_dma_desc *vdesc)
+{
+	struct fsl_qdma_comp *fsl_comp;
+	struct fsl_qdma_queue *fsl_queue;
+	struct fsl_qdma_sg *sg_block;
+	unsigned long flags;
+	unsigned int i;
+
+	fsl_comp = to_fsl_qdma_comp(vdesc);
+	fsl_queue = fsl_comp->qchan->queue;
+
+	if (fsl_comp->sg_block) {
+		for (i = 0; i < fsl_comp->sg_block_src +
+				fsl_comp->sg_block_dst; i++) {
+			sg_block = fsl_comp->sg_block + i;
+			dma_pool_free(fsl_queue->sg_pool,
+				      sg_block->virt_addr,
+				      sg_block->bus_addr);
+		}
+		kfree(fsl_comp->sg_block);
+	}
+
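+	/* Recycle the completion descriptor onto the per-queue free list. */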
+	spin_lock_irqsave(&fsl_queue->queue_lock, flags);
+	list_add_tail(&fsl_comp->list, &fsl_queue->comp_free);
+	spin_unlock_irqrestore(&fsl_queue->queue_lock, flags);
+}
+
+static void fsl_qdma_issue_pending(struct dma_chan *chan)
+{
+	struct fsl_qdma_chan *fsl_chan = to_fsl_qdma_chan(chan);
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+	unsigned long flags;
+
+	spin_lock_irqsave(&fsl_queue->queue_lock, flags);
+	spin_lock(&fsl_chan->vchan.lock);
+	if (vchan_issue_pending(&fsl_chan->vchan))
+		fsl_qdma_enqueue_desc(fsl_chan);
+	spin_unlock(&fsl_chan->vchan.lock);
+	spin_unlock_irqrestore(&fsl_queue->queue_lock, flags);
+}
+
+static int fsl_qdma_probe(struct platform_device *pdev)
+{
+	struct device_node *np = pdev->dev.of_node;
+	struct fsl_qdma_engine *fsl_qdma;
+	struct fsl_qdma_chan *fsl_chan;
+	struct resource *res;
+	unsigned int len, chans, queues;
+	int ret, i;
+
+	ret = of_property_read_u32(np, "dma-channels", &chans);
+	if (ret) {
+		dev_err(&pdev->dev, "Can't get dma-channels.\n");
+		return ret;
+	}
+
+	len = sizeof(*fsl_qdma) + sizeof(*fsl_chan) * chans;
+	fsl_qdma = devm_kzalloc(&pdev->dev, len, GFP_KERNEL);
+	if (!fsl_qdma)
+		return -ENOMEM;
+
+	ret = of_property_read_u32(np, "queues", &queues);
+	if (ret) {
+		dev_err(&pdev->dev, "Can't get queues.\n");
+		return ret;
+	}
+
+	fsl_qdma->queue = fsl_qdma_alloc_queue_resources(pdev, queues);
+	if (!fsl_qdma->queue)
+		return -ENOMEM;
+
+	fsl_qdma->status = fsl_qdma_prep_status_queue(pdev);
+	if (!fsl_qdma->status)
+		return -ENOMEM;
+
+	fsl_qdma->n_chans = chans;
+	fsl_qdma->n_queues = queues;
+	mutex_init(&fsl_qdma->fsl_qdma_mutex);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	fsl_qdma->ctrl_base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(fsl_qdma->ctrl_base))
+		return PTR_ERR(fsl_qdma->ctrl_base);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+	fsl_qdma->status_base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(fsl_qdma->status_base))
+		return PTR_ERR(fsl_qdma->status_base);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 2);
+	fsl_qdma->block_base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(fsl_qdma->block_base))
+		return PTR_ERR(fsl_qdma->block_base);
+
+	ret = fsl_qdma_irq_init(pdev, fsl_qdma);
+	if (ret)
+		return ret;
+
+	fsl_qdma->feature = of_property_read_bool(np, "big-endian");
+	INIT_LIST_HEAD(&fsl_qdma->dma_dev.channels);
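+	/* Bind each virtual channel round-robin to a hardware queue. */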
+	for (i = 0; i < fsl_qdma->n_chans; i++) {
+		fsl_chan = &fsl_qdma->chans[i];
+		fsl_chan->qdma = fsl_qdma;
+		fsl_chan->queue = fsl_qdma->queue + i % fsl_qdma->n_queues;
+		fsl_chan->vchan.desc_free = fsl_qdma_free_desc;
+		INIT_LIST_HEAD(&fsl_chan->qcomp);
+		vchan_init(&fsl_chan->vchan, &fsl_qdma->dma_dev);
+	}
+	for (i = 0; i < fsl_qdma->n_queues; i++) {
+		ret = fsl_qdma_pre_request_enqueue_desc(fsl_qdma->queue + i);
+		if (ret)
+			return ret;
+	}
+
+	dma_cap_set(DMA_MEMCPY, fsl_qdma->dma_dev.cap_mask);
+
+	fsl_qdma->dma_dev.dev = &pdev->dev;
+	fsl_qdma->dma_dev.device_free_chan_resources
+		= fsl_qdma_free_chan_resources;
+	fsl_qdma->dma_dev.device_tx_status = fsl_qdma_tx_status;
+	fsl_qdma->dma_dev.device_prep_dma_memcpy = fsl_qdma_prep_memcpy;
+	fsl_qdma->dma_dev.device_issue_pending = fsl_qdma_issue_pending;
+
+	ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(40));
+	if (ret) {
+		dev_err(&pdev->dev, "Can't set dma mask.\n");
+		return ret;
+	}
+
+	platform_set_drvdata(pdev, fsl_qdma);
+
+	ret = dma_async_device_register(&fsl_qdma->dma_dev);
+	if (ret) {
+		dev_err(&pdev->dev, "Can't register NXP Layerscape qDMA engine.\n");
+		return ret;
+	}
+
+	ret = fsl_qdma_reg_init(fsl_qdma);
+	if (ret) {
+		dev_err(&pdev->dev, "Can't initialize the qDMA engine.\n");
+		dma_async_device_unregister(&fsl_qdma->dma_dev);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void fsl_qdma_cleanup_vchan(struct dma_device *dmadev)
+{
+	struct fsl_qdma_chan *chan, *_chan;
+
+	list_for_each_entry_safe(chan, _chan,
+				&dmadev->channels, vchan.chan.device_node) {
+		list_del(&chan->vchan.chan.device_node);
+		tasklet_kill(&chan->vchan.task);
+	}
+}
+
+static int fsl_qdma_remove(struct platform_device *pdev)
+{
+	struct device_node *np = pdev->dev.of_node;
+	struct fsl_qdma_engine *fsl_qdma = platform_get_drvdata(pdev);
+	struct fsl_qdma_queue *queue_temp;
+	struct fsl_qdma_queue *status = fsl_qdma->status;
+	struct fsl_qdma_comp *comp_temp, *_comp_temp;
+	int i;
+
+	fsl_qdma_irq_exit(pdev, fsl_qdma);
+	fsl_qdma_cleanup_vchan(&fsl_qdma->dma_dev);
+	of_dma_controller_free(np);
+	dma_async_device_unregister(&fsl_qdma->dma_dev);
+
+	/* Free descriptor areas */
+	for (i = 0; i < fsl_qdma->n_queues; i++) {
+		queue_temp = fsl_qdma->queue + i;
+		list_for_each_entry_safe(comp_temp, _comp_temp,
+					&queue_temp->comp_used,	list) {
+			dma_pool_free(queue_temp->comp_pool,
+					comp_temp->virt_addr,
+					comp_temp->bus_addr);
+			list_del(&comp_temp->list);
+			kfree(comp_temp);
+		}
+		list_for_each_entry_safe(comp_temp, _comp_temp,
+					&queue_temp->comp_free, list) {
+			dma_pool_free(queue_temp->comp_pool,
+					comp_temp->virt_addr,
+					comp_temp->bus_addr);
+			list_del(&comp_temp->list);
+			kfree(comp_temp);
+		}
+		dma_free_coherent(&pdev->dev, sizeof(struct fsl_qdma_format) *
+					queue_temp->n_cq, queue_temp->cq,
+					queue_temp->bus_addr);
+		dma_pool_destroy(queue_temp->comp_pool);
+	}
+
+	dma_free_coherent(&pdev->dev, sizeof(struct fsl_qdma_format) *
+				status->n_cq, status->cq, status->bus_addr);
+	return 0;
+}
+
+static const struct of_device_id fsl_qdma_dt_ids[] = {
+	{ .compatible = "fsl,ls1021a-qdma", },
+	{ /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, fsl_qdma_dt_ids);
+
+static struct platform_driver fsl_qdma_driver = {
+	.driver		= {
+		.name	= "fsl-qdma",
+		.of_match_table = fsl_qdma_dt_ids,
+	},
+	.probe          = fsl_qdma_probe,
+	.remove		= fsl_qdma_remove,
+};
+
+module_platform_driver(fsl_qdma_driver);
+
+MODULE_ALIAS("platform:fsl-qdma");
+MODULE_DESCRIPTION("NXP Layerscape qDMA engine driver");
+MODULE_LICENSE("GPL v2");

^ permalink raw reply related	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2018-01-11  9:17 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-01-11  6:25 [V2] dmaengine: fsl-qdma: add NXP Layerscape qDMA engine driver support Vinod Koul
  -- strict thread matches above, loose matches on Subject: below --
2018-01-11  9:17 Wen He
2018-01-09  3:30 Wen He
2018-01-08 10:42 Vinod Koul
2018-01-04  7:36 Wen He
2018-01-03  3:52 Vinod Koul
2017-12-27  2:27 Wen He
2017-12-27  1:34 kbuild test robot
2017-12-26  5:15 Wen He
2017-12-25 17:39 kbuild test robot
2017-12-25  7:39 Wen He
