From: Alexey Brodkin <Alexey.Brodkin@synopsys.com>
To: "hch@lst.de" <hch@lst.de>
Cc: "deanbo422@gmail.com" <deanbo422@gmail.com>,
	"linux-sh@vger.kernel.org" <linux-sh@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"nios2-dev@lists.rocketboards.org"
	<nios2-dev@lists.rocketboards.org>,
	"linux-xtensa@linux-xtensa.org" <linux-xtensa@linux-xtensa.org>,
	"linux-m68k@lists.linux-m68k.org" <linux-m68k@vger.kernel.org>,
	"linux-alpha@vger.kernel.org" <linux-alpha@vger.kernel.org>,
	"linux-hexagon@vger.kernel.org" <linux-hexagon@vger.kernel.org>,
	"linux-snps-arc@lists.infradead.org"
	<linux-snps-arc@lists.infradead.org>,
	"iommu@lists.linux-foundation.org"
	<iommu@lists.linux-foundation.org>,
	"green.hu@gmail.com" <green.hu@gmail.com>,
	"openrisc@lists.librecores.org" <openrisc@lists.librecores.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>,
	"monstr@monstr.eu" <monstr@monstr.eu>,
	"linux-parisc@vger.kernel.org" <linux-parisc@vger.kernel.org>,
	"linux-c6x-dev@linux-c6x.org" <linux-c6x-dev@linux-c6x.org>,
	"linux-arch@vger.kernel.org" <linux-arch@vger.kernel.org>,
	"sparclinux@vger.kernel.org" <sparclinux@vger.kernel.org>
Subject: Re: [PATCH 02/20] dma-mapping: provide a generic dma-noncoherent implementation
Date: Fri, 18 May 2018 13:03:46 +0000	[thread overview]
Message-ID: <bad125dff49f6e49c895e818c9d1abb346a46e8e.camel@synopsys.com> (raw)
In-Reply-To: <20180511075945.16548-3-hch@lst.de>

Hi Christoph,

On Fri, 2018-05-11 at 09:59 +0200, Christoph Hellwig wrote:

[snip]

There seems to be one subtle issue with the map/unmap code.
While investigating problems on ARC I added instrumentation as below:
---------------------------------------->8------------------------------------
--- a/arch/arc/mm/dma.c
+++ b/arch/arc/mm/dma.c
@@ -152,14 +152,37 @@ static void _dma_cache_sync(struct device *dev, phys_addr_t paddr, size_t size,
        }
 }
 
+static const char *dir_to_str(enum dma_data_direction dir)
+{
+       switch (dir) {
+       case DMA_BIDIRECTIONAL: return "DMA_BIDIRECTIONAL";
+       case DMA_TO_DEVICE: return "DMA_TO_DEVICE";
+       case DMA_FROM_DEVICE: return "DMA_FROM_DEVICE";
+       case DMA_NONE: return "DMA_NONE";
+       default: return "WRONG_VALUE!";
+       }
+}
+
 void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
                size_t size, enum dma_data_direction dir)
 {
+       if (dir != DMA_TO_DEVICE){
+               dump_stack();
+               printk(" *** %s@%d: DMA direction is %s instead of %s\n",
+                      __func__, __LINE__, dir_to_str(dir), dir_to_str(DMA_TO_DEVICE));
+       }
+
        return _dma_cache_sync(dev, paddr, size, dir);
 }
 
 void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
                size_t size, enum dma_data_direction dir)
 {
+       if (dir != DMA_FROM_DEVICE) {
+               dump_stack();
+               printk(" *** %s@%d: DMA direction is %s instead of %s\n",
+                      __func__, __LINE__, dir_to_str(dir), dir_to_str(DMA_FROM_DEVICE));
+       }
+
        return _dma_cache_sync(dev, paddr, size, dir);
 }
---------------------------------------->8------------------------------------

And with that I noticed somewhat unexpected output, see below:
---------------------------------------->8------------------------------------
Stack Trace:
  arc_unwind_core.constprop.1+0xd4/0xf8
  dump_stack+0x68/0x80
  arch_sync_dma_for_device+0x34/0xc4
  dma_noncoherent_map_sg+0x80/0x94
  __dw_mci_start_request+0x1ee/0x868
  dw_mci_request+0x17e/0x1c8
  mmc_wait_for_req+0x106/0x1ac
  mmc_app_sd_status+0x108/0x130
  mmc_sd_setup_card+0xc6/0x2e8
  mmc_attach_sd+0x1b6/0x394
  mmc_rescan+0x2f4/0x3bc
  process_one_work+0x194/0x348
  worker_thread+0xf2/0x478
  kthread+0x120/0x13c
  ret_from_fork+0x18/0x1c
 *** arch_sync_dma_for_device@172: DMA direction is DMA_FROM_DEVICE instead of DMA_TO_DEVICE
...
Stack Trace:
  arc_unwind_core.constprop.1+0xd4/0xf8
  dump_stack+0x68/0x80
  arch_sync_dma_for_device+0x34/0xc4
  dma_noncoherent_map_page+0x86/0x8c
  usb_hcd_map_urb_for_dma+0x49e/0x53c
  usb_hcd_submit_urb+0x43c/0x8c4
  usb_control_msg+0xbe/0x16c
  hub_port_init+0x5e0/0xb0c
  hub_event+0x4e6/0x1164
  process_one_work+0x194/0x348
  worker_thread+0xf2/0x478
  kthread+0x120/0x13c
  ret_from_fork+0x18/0x1c
 mmcblk0: p1 p2
 *** arch_sync_dma_for_device@172: DMA direction is DMA_FROM_DEVICE instead of DMA_TO_DEVICE

...
and quite a few more similar ones
...
---------------------------------------->8------------------------------------

In the case of MMC/DW_MCI (AKA the DesignWare MobileStorage controller), the execution flow is as follows:
1) __dw_mci_start_request()
2) dw_mci_pre_dma_transfer()
3) dma_map_sg(..., mmc_get_dma_dir(data))

Note that mmc_get_dma_dir() is just "data->flags & MMC_DATA_WRITE ? DMA_TO_DEVICE : DMA_FROM_DEVICE".
I.e. if we're preparing to send data, dma_noncoherent_map_sg() gets DMA_TO_DEVICE, which is perfectly
fine to pass on to dma_noncoherent_sync_sg_for_device(). But in the read case we get DMA_FROM_DEVICE,
and that is exactly what dma_noncoherent_map_sg() ends up passing to dma_noncoherent_sync_sg_for_device().

I'd say this is not entirely correct because IMHO arch_sync_dma_for_cpu() is supposed to be used only
for DMA_FROM_DEVICE and arch_sync_dma_for_device() only for DMA_TO_DEVICE.
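
To spell out the pairing I have in mind (just an illustration for a typical non-coherent arch,
not code taken from the patch):
---------------------------------------->8------------------------------------
/* CPU filled the buffer and the device is about to read it:
 * write back (clean) the dirty cache lines before the DMA starts. */
arch_sync_dma_for_device(dev, paddr, size, DMA_TO_DEVICE);

/* The device wrote the buffer and the CPU is about to read it:
 * invalidate the now-stale cache lines once the DMA has finished. */
arch_sync_dma_for_cpu(dev, paddr, size, DMA_FROM_DEVICE);
---------------------------------------->8------------------------------------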


> +static dma_addr_t dma_noncoherent_map_page(struct device *dev, struct page *page,
> +		unsigned long offset, size_t size, enum dma_data_direction dir,
> +		unsigned long attrs)
> +{
> +	dma_addr_t addr;
> +
> +	addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
> +	if (!dma_mapping_error(dev, addr) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
> +		arch_sync_dma_for_device(dev, page_to_phys(page), size, dir);
> +	return addr;
> +}
> +
> +static int dma_noncoherent_map_sg(struct device *dev, struct scatterlist *sgl,
> +		int nents, enum dma_data_direction dir, unsigned long attrs)
> +{
> +	nents = dma_direct_map_sg(dev, sgl, nents, dir, attrs);
> +	if (nents > 0 && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
> +		dma_noncoherent_sync_sg_for_device(dev, sgl, nents, dir);
> +	return nents;
> +}

The same applies to the unmap functions.
My guess is we need to respect the direction in the map/unmap helpers and use
either dma_noncoherent_sync_single_for_cpu(..., DMA_FROM_DEVICE) or
dma_noncoherent_sync_single_for_device(..., DMA_TO_DEVICE) explicitly; see the sketch after the quoted hunk below.


> +static void dma_noncoherent_unmap_page(struct device *dev, dma_addr_t addr,
> +		size_t size, enum dma_data_direction dir, unsigned long attrs)
> +{
> +	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
> +		dma_noncoherent_sync_single_for_cpu(dev, addr, size, dir);
> +}
> +
> +static void dma_noncoherent_unmap_sg(struct device *dev, struct scatterlist *sgl,
> +		int nents, enum dma_data_direction dir, unsigned long attrs)
> +{
> +	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
> +		dma_noncoherent_sync_sg_for_cpu(dev, sgl, nents, dir);
> +}
> +#endif
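
Just to illustrate the idea, here is a minimal and untested sketch of the page map/unmap helpers
with the direction made explicit. The function names and overall structure are taken from the
quoted patch; the hard-coded directions (and the "+ offset", see the fix further below) are my
suggestion, not something the patch does:
---------------------------------------->8------------------------------------
static dma_addr_t dma_noncoherent_map_page(struct device *dev, struct page *page,
		unsigned long offset, size_t size, enum dma_data_direction dir,
		unsigned long attrs)
{
	dma_addr_t addr;

	addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
	/* mapping hands the buffer to the device: always clean towards it */
	if (!dma_mapping_error(dev, addr) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
		arch_sync_dma_for_device(dev, page_to_phys(page) + offset,
					 size, DMA_TO_DEVICE);
	return addr;
}

static void dma_noncoherent_unmap_page(struct device *dev, dma_addr_t addr,
		size_t size, enum dma_data_direction dir, unsigned long attrs)
{
	/* unmapping hands the buffer back to the CPU: always invalidate */
	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
		dma_noncoherent_sync_single_for_cpu(dev, addr, size,
						    DMA_FROM_DEVICE);
}
---------------------------------------->8------------------------------------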

But the real fix for my problem is:
---------------------------------------->8------------------------------------
--- a/lib/dma-noncoherent.c
+++ b/lib/dma-noncoherent.c
@@ -35,7 +35,7 @@ static dma_addr_t dma_noncoherent_map_page(struct device *dev, struct page *page
 
        addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
        if (!dma_mapping_error(dev, addr) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-               arch_sync_dma_for_device(dev, page_to_phys(page), size, dir);
+               arch_sync_dma_for_device(dev, page_to_phys(page) + offset, size, dir);
        return addr;
 }
---------------------------------------->8------------------------------------

You seem to have lost the offset into the page, so if we happen to have a buffer that is not aligned
to a page boundary then we were obviously corrupting data outside our buffer :)
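
For example (made-up numbers): say the buffer is 512 bytes long and starts 0x200 bytes into the
page; then the call without the offset syncs a completely different 512 bytes:
---------------------------------------->8------------------------------------
/* made-up example: page_to_phys(page) == 0x80001000, offset == 0x200, size == 512 */

arch_sync_dma_for_device(dev, page_to_phys(page), size, dir);
/* old: touches 0x80001000..0x800011ff, i.e. not our buffer at all */

arch_sync_dma_for_device(dev, page_to_phys(page) + offset, size, dir);
/* new: touches 0x80001200..0x800013ff, i.e. exactly our buffer */
---------------------------------------->8------------------------------------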

-Alexey

  parent reply	other threads:[~2018-05-18 13:03 UTC|newest]

Thread overview: 410+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2018-05-11  7:59 common non-cache coherent direct dma mapping ops Christoph Hellwig
2018-05-11  7:59 ` [OpenRISC] " Christoph Hellwig
2018-05-11  7:59 ` Christoph Hellwig
2018-05-11  7:59 ` Christoph Hellwig
2018-05-11  7:59 ` Christoph Hellwig
2018-05-11  7:59 ` Christoph Hellwig
2018-05-11  7:59 ` [PATCH 01/20] dma-mapping: simplify Kconfig dependencies Christoph Hellwig
2018-05-11  7:59   ` [OpenRISC] " Christoph Hellwig
2018-05-11  7:59   ` Christoph Hellwig
2018-05-11  7:59   ` Christoph Hellwig
2018-05-11  7:59   ` Christoph Hellwig
2018-05-11  7:59   ` Christoph Hellwig
2018-05-11  7:59   ` Christoph Hellwig
     [not found] ` <20180511075945.16548-1-hch-jcswGhMUV9g@public.gmane.org>
2018-05-11  7:59   ` [PATCH 02/20] dma-mapping: provide a generic dma-noncoherent implementation Christoph Hellwig
2018-05-11  7:59     ` [OpenRISC] " Christoph Hellwig
2018-05-11  7:59     ` Christoph Hellwig
2018-05-11  7:59     ` Christoph Hellwig
2018-05-11  7:59     ` Christoph Hellwig
2018-05-11  7:59     ` Christoph Hellwig
2018-05-11  7:59     ` Christoph Hellwig
     [not found]     ` <20180511075945.16548-3-hch-jcswGhMUV9g@public.gmane.org>
2018-05-18 13:03       ` Alexey Brodkin [this message]
2018-05-18 13:03         ` Alexey Brodkin
2018-05-18 13:03         ` Alexey Brodkin
2018-05-18 13:03         ` Alexey Brodkin
2018-05-18 13:03         ` Alexey Brodkin
2018-05-18 13:03         ` Alexey Brodkin
     [not found]         ` <bad125dff49f6e49c895e818c9d1abb346a46e8e.camel-HKixBCOQz3hWk0Htik3J/w@public.gmane.org>
2018-05-18 13:27           ` hch
2018-05-18 13:27             ` [OpenRISC] " hch
2018-05-18 13:27             ` hch at lst.de
2018-05-18 13:27             ` hch
2018-05-18 13:27             ` hch
2018-05-18 13:27             ` hch-jcswGhMUV9g
     [not found]             ` <20180518132731.GA31125-jcswGhMUV9g@public.gmane.org>
2018-05-18 14:13               ` Alexey Brodkin
2018-05-18 14:13                 ` Alexey Brodkin
2018-05-18 14:13                 ` Alexey Brodkin
2018-05-18 14:13                 ` Alexey Brodkin
2018-05-18 14:13                 ` Alexey Brodkin
2018-05-18 14:13                 ` Alexey Brodkin
2018-05-18 17:28               ` Vineet Gupta
2018-05-18 17:28                 ` Vineet Gupta
2018-05-18 17:28                 ` Vineet Gupta
2018-05-18 17:28                 ` Vineet Gupta
2018-05-18 17:28                 ` Vineet Gupta
2018-05-18 17:20           ` dma_sync_*_for_cpu and direction=TO_DEVICE (was Re: [PATCH 02/20] dma-mapping: provide a generic dma Vineet Gupta
2018-05-18 17:20             ` dma_sync_*_for_cpu and direction=TO_DEVICE (was Re: [PATCH 02/20] dma-mapping: provide a generic dma-noncoherent implementation) Vineet Gupta
2018-05-18 17:20             ` Vineet Gupta
2018-05-18 17:20             ` Vineet Gupta
2018-05-18 17:20             ` Vineet Gupta
2018-05-18 17:50             ` dma_sync_*_for_cpu and direction=TO_DEVICE (was Re: [PATCH 02/20] dma-mapping: provide a generic Russell King - ARM Linux
2018-05-18 17:50               ` [OpenRISC] dma_sync_*_for_cpu and direction=TO_DEVICE (was Re: [PATCH 02/20] dma-mapping: provide a generic dma-noncoherent implementation) Russell King - ARM Linux
2018-05-18 17:50               ` Russell King - ARM Linux
2018-05-18 17:50               ` Russell King - ARM Linux
2018-05-18 17:50               ` Russell King - ARM Linux
2018-05-18 17:50               ` Russell King - ARM Linux
     [not found]               ` <20180518175004.GF17671-l+eeeJia6m9URfEZ8mYm6t73F7V6hmMc@public.gmane.org>
2018-05-18 19:57                 ` dma_sync_*_for_cpu and direction=TO_DEVICE (was Re: [PATCH 02/20] dma-mapping: provide a generic Alexey Brodkin
2018-05-18 19:57                   ` dma_sync_*_for_cpu and direction=TO_DEVICE (was Re: [PATCH 02/20] dma-mapping: provide a generic dma-noncoherent implementation) Alexey Brodkin
2018-05-18 19:57                   ` Alexey Brodkin
2018-05-18 19:57                   ` Alexey Brodkin
2018-05-18 19:57                   ` Alexey Brodkin
     [not found]                   ` <182840dedb4890a88c672b1c5d556920bf89a8fb.camel-HKixBCOQz3hWk0Htik3J/w@public.gmane.org>
2018-05-18 21:33                     ` dma_sync_*_for_cpu and direction=TO_DEVICE (was Re: [PATCH 02/20] dma-mapping: provide a generic Russell King - ARM Linux
2018-05-18 21:33                       ` [OpenRISC] dma_sync_*_for_cpu and direction=TO_DEVICE (was Re: [PATCH 02/20] dma-mapping: provide a generic dma-noncoherent implementation) Russell King - ARM Linux
2018-05-18 21:33                       ` Russell King - ARM Linux
2018-05-18 21:33                       ` Russell King - ARM Linux
2018-05-18 21:33                       ` Russell King - ARM Linux
2018-05-18 21:33                       ` Russell King - ARM Linux
2018-05-18 20:35                 ` dma_sync_*_for_cpu and direction=TO_DEVICE (was Re: [PATCH 02/20] dma-mapping: provide a generic Vineet Gupta
2018-05-18 20:35                   ` dma_sync_*_for_cpu and direction=TO_DEVICE (was Re: [PATCH 02/20] dma-mapping: provide a generic dma-noncoherent implementation) Vineet Gupta
2018-05-18 20:35                   ` Vineet Gupta
2018-05-18 20:35                   ` Vineet Gupta
2018-05-18 20:35                   ` Vineet Gupta
2018-05-18 20:35                   ` Vineet Gupta
2018-05-18 20:35                   ` Vineet Gupta
2018-05-18 20:35                   ` Vineet Gupta
2018-05-18 20:35                   ` Vineet Gupta
2018-05-18 21:55                   ` dma_sync_*_for_cpu and direction=TO_DEVICE (was Re: [PATCH 02/20] dma-mapping: provide a generic Russell King - ARM Linux
2018-05-18 21:55                     ` [OpenRISC] dma_sync_*_for_cpu and direction=TO_DEVICE (was Re: [PATCH 02/20] dma-mapping: provide a generic dma-noncoherent implementation) Russell King - ARM Linux
2018-05-18 21:55                     ` Russell King - ARM Linux
2018-05-18 21:55                     ` Russell King - ARM Linux
2018-05-18 21:55                     ` Russell King - ARM Linux
2018-05-18 21:55                     ` Russell King - ARM Linux
2018-05-18 21:55                     ` Russell King - ARM Linux
2018-05-18 20:05         ` [PATCH 02/20] dma-mapping: provide a generic dma-noncoherent implementation Helge Deller
2018-05-18 20:05           ` [OpenRISC] " Helge Deller
2018-05-18 20:05           ` Helge Deller
2018-05-18 20:05           ` Helge Deller
2018-05-18 20:05           ` Helge Deller
2018-05-18 20:05           ` Helge Deller
2018-05-18 20:05           ` Helge Deller
     [not found]           ` <0c5d27e9-2799-eb38-8b09-47a04c48b5c7-Mmb7MZpHnFY@public.gmane.org>
2018-05-19  6:38             ` hch
2018-05-19  6:38               ` [OpenRISC] " hch
2018-05-19  6:38               ` hch at lst.de
2018-05-19  6:38               ` hch
2018-05-19  6:38               ` hch-jcswGhMUV9g
2018-05-19  6:38               ` hch
2018-05-19  6:38               ` hch-jcswGhMUV9g
2018-05-11  7:59   ` [PATCH 03/20] arc: use generic dma_noncoherent_ops Christoph Hellwig
2018-05-11  7:59     ` [OpenRISC] " Christoph Hellwig
2018-05-11  7:59     ` Christoph Hellwig
2018-05-11  7:59     ` Christoph Hellwig
2018-05-11  7:59     ` Christoph Hellwig
2018-05-11  7:59     ` Christoph Hellwig
     [not found]     ` <20180511075945.16548-4-hch-jcswGhMUV9g@public.gmane.org>
2018-05-11 12:44       ` Alexey Brodkin
2018-05-11 12:44         ` Alexey Brodkin
2018-05-11 12:44         ` Alexey Brodkin
2018-05-11 12:44         ` Alexey Brodkin
2018-05-11 12:44         ` Alexey Brodkin
2018-05-11  7:59   ` [PATCH 04/20] arm-nommu: " Christoph Hellwig
2018-05-11  7:59     ` [OpenRISC] " Christoph Hellwig
2018-05-11  7:59     ` Christoph Hellwig
2018-05-11  7:59     ` Christoph Hellwig
2018-05-11  7:59     ` Christoph Hellwig
2018-05-11  7:59     ` Christoph Hellwig
2018-05-11  7:59     ` Christoph Hellwig
     [not found]     ` <20180511075945.16548-5-hch-jcswGhMUV9g@public.gmane.org>
2018-05-11  9:11       ` Russell King - ARM Linux
     [not found]         ` <20180511091114.GA16141-l+eeeJia6m9URfEZ8mYm6t73F7V6hmMc@public.gmane.org>
2018-05-22 11:53           ` Christoph Hellwig
2018-05-11 13:56       ` John Garry
2018-05-11  7:59   ` [PATCH 05/20] c6x: " Christoph Hellwig
     [not found]     ` <20180511075945.16548-6-hch-jcswGhMUV9g@public.gmane.org>
2018-05-15  0:25       ` [Linux-c6x-dev] " Mark Salter
2018-05-11  7:59   ` [PATCH 06/20] hexagon: " Christoph Hellwig
2018-05-11  7:59   ` [PATCH 07/20] m68k: " Christoph Hellwig
2018-05-11  7:59   ` [PATCH 08/20] microblaze: " Christoph Hellwig
2018-05-11  7:59   ` [PATCH 09/20] microblaze: remove the consistent_sync and consistent_sync_page Christoph Hellwig
2018-05-11  7:59   ` [PATCH 10/20] nds32: use generic dma_noncoherent_ops Christoph Hellwig
2018-05-11  7:59   ` [PATCH 11/20] nios2: " Christoph Hellwig
2018-05-11  7:59   ` [PATCH 12/20] openrisc: " Christoph Hellwig
2018-05-11  7:59   ` [PATCH 13/20] sh: simplify get_arch_dma_ops Christoph Hellwig
2018-05-11  7:59   ` [PATCH 14/20] sh: introduce a sh_cacheop_vaddr helper Christoph Hellwig
2018-05-11  7:59   ` [PATCH 15/20] sh: use dma_direct_ops for the CONFIG_DMA_COHERENT case Christoph Hellwig
2018-05-11  7:59   ` [PATCH 16/20] mm: split arch/sh/mm/consistent.c Christoph Hellwig
2018-05-11  7:59   ` [PATCH 17/20] sh: use generic dma_noncoherent_ops Christoph Hellwig
2018-05-11  7:59   ` [PATCH 18/20] xtensa: " Christoph Hellwig
2018-05-11  7:59   ` [PATCH 19/20] sparc: " Christoph Hellwig
2018-05-11  7:59   ` [PATCH 20/20] parisc: " Christoph Hellwig
2018-05-13 13:26 ` common non-cache coherent direct dma mapping ops Helge Deller
2018-05-22 12:04 ` common non-cache coherent direct dma mapping ops v2 Christoph Hellwig
2018-05-22 12:04   ` [PATCH 01/25] hexagon: remove the sync_single_for_cpu DMA operation Christoph Hellwig
2018-05-22 12:04   ` [PATCH 02/25] hexagon: implement the sync_sg_for_device " Christoph Hellwig
     [not found]   ` <20180522120430.28709-1-hch-jcswGhMUV9g@public.gmane.org>
2018-05-22 12:04     ` [PATCH 03/25] hexagon: use generic dma_noncoherent_ops Christoph Hellwig
2018-05-22 12:04     ` [PATCH 04/25] m68k: " Christoph Hellwig
2018-05-22 12:04     ` [PATCH 07/25] nds32: remove the broken kmap code in nds32_dma_map_sg Christoph Hellwig
2018-05-22 12:04     ` [PATCH 08/25] nds32: consolidate DMA cache maintainance routines Christoph Hellwig
2018-05-22 12:04     ` [PATCH 10/25] nds32: use generic dma_noncoherent_ops Christoph Hellwig
2018-05-22 12:04     ` [PATCH 11/25] nios2: " Christoph Hellwig
2018-05-22 12:04     ` [PATCH 12/25] openrisc: remove the sync_single_for_cpu DMA operation Christoph Hellwig
2018-05-22 12:04   ` [PATCH 05/25] microblaze: use generic dma_noncoherent_ops Christoph Hellwig
2018-05-22 12:04   ` [PATCH 06/25] microblaze: remove the consistent_sync and consistent_sync_page Christoph Hellwig
2018-05-22 12:04   ` [PATCH 09/25] nds32: implement the unmap_sg DMA operation Christoph Hellwig
2018-05-22 12:04   ` [PATCH 13/25] openrisc: remove the no-op unmap_page and unmap_sg DMA operations Christoph Hellwig
2018-05-22 12:04   ` [PATCH 14/25] openrisc: fix cache maintainance the the sync_single_for_device DMA operation Christoph Hellwig
2018-05-22 12:04   ` [PATCH 15/25] openrisc: use generic dma_noncoherent_ops Christoph Hellwig
2018-05-22 12:04   ` [PATCH 16/25] sh: simplify get_arch_dma_ops Christoph Hellwig
2018-05-22 12:04   ` [PATCH 17/25] sh: introduce a sh_cacheop_vaddr helper Christoph Hellwig
2018-05-22 12:04   ` [PATCH 18/25] sh: use dma_direct_ops for the CONFIG_DMA_COHERENT case Christoph Hellwig
2018-05-22 12:04   ` [PATCH 19/25] sh: split arch/sh/mm/consistent.c Christoph Hellwig
2018-05-22 12:04   ` [PATCH 20/25] sh: use generic dma_noncoherent_ops Christoph Hellwig
2018-05-22 12:04   ` [PATCH 21/25] xtensa: " Christoph Hellwig
2018-05-22 12:04   ` [PATCH 22/25] sparc: " Christoph Hellwig
2018-05-22 12:04   ` [PATCH 23/25] parisc: merge pcx_dma_ops and pcxl_dma_ops Christoph Hellwig
2018-05-22 12:04   ` [PATCH 24/25] parisc: always use flush_kernel_dcache_range for DMA cache maintainance Christoph Hellwig
2018-05-22 12:04   ` [PATCH 25/25] parisc: use generic dma_noncoherent_ops Christoph Hellwig

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
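  For example (a minimal sketch, assuming the raw message was saved
  locally as reply.mbox, a hypothetical filename), mutt can open the
  mbox directly; press "g" inside mutt to group-reply (reply-to-all):

    mutt -R -f reply.mbox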

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=bad125dff49f6e49c895e818c9d1abb346a46e8e.camel@synopsys.com \
    --to=alexey.brodkin@synopsys.com \
    --cc=green.hu-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org \
    --cc=hch-jcswGhMUV9g@public.gmane.org \
    --cc=iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org \
    --cc=linux-alpha-u79uwXL29TY76Z2rM5mHXA@public.gmane.org \
    --cc=linux-arch-u79uwXL29TY76Z2rM5mHXA@public.gmane.org \
    --cc=linux-c6x-dev-jPsnJVOj+W6hPH1hqNUYSQ@public.gmane.org \
    --cc=linux-hexagon-u79uwXL29TY76Z2rM5mHXA@public.gmane.org \
    --cc=linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org \
    --cc=linux-m68k-cunTk1MwBs8S/qaLPR03pWD2FQJk+8+b@public.gmane.org \
    --cc=linux-parisc-u79uwXL29TY76Z2rM5mHXA@public.gmane.org \
    --cc=linux-sh-u79uwXL29TY76Z2rM5mHXA@public.gmane.org \
    --cc=linux-snps-arc-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org \
    --cc=linux-xtensa-PjhNF2WwrV/0Sa2dR60CXw@public.gmane.org \
    --cc=monstr-pSz03upnqPeHXe+LvDLADg@public.gmane.org \
    --cc=openrisc-cunTk1MwBs9a3B2Vnqf2dGD2FQJk+8+b@public.gmane.org \
    --cc=sparclinux-u79uwXL29TY76Z2rM5mHXA@public.gmane.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.