From mboxrd@z Thu Jan 1 00:00:00 1970 From: =?UTF-8?Q?Christian_K=c3=b6nig?= Subject: Re: [RFC PATCH] mm, oom: distinguish blockable mode for mmu notifiers Date: Fri, 22 Jun 2018 17:13:02 +0200 Message-ID: <0aa9f695-5702-6704-9462-7779cbfdb3fd@amd.com> References: <20180622150242.16558-1-mhocko@kernel.org> Mime-Version: 1.0 Content-Type: text/plain; charset="utf-8"; Format="flowed" Content-Transfer-Encoding: base64 Return-path: In-Reply-To: <20180622150242.16558-1-mhocko@kernel.org> Content-Language: en-US List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" To: Michal Hocko , LKML Cc: Michal Hocko , kvm@vger.kernel.org, =?UTF-8?B?UmFkaW0gS3LEjW3DocWZ?= , David Airlie , Sudeep Dutt , dri-devel@lists.freedesktop.org, linux-mm@kvack.org, Andrea Arcangeli , "David (ChunMing) Zhou" , Dimitri Sivanich , linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org, Jason Gunthorpe , Doug Ledford , David Rientjes , xen-devel@lists.xenproject.org, intel-gfx@lists.freedesktop.org, =?UTF-8?B?SsOpcsO0bWUgR2xpc3Nl?= , Rodrigo Vivi , Boris Ostrovsky , Juergen Gross , Mike Marciniszyn , Dennis Dalessandro , Ashutosh Dixit List-Id: linux-rdma@vger.kernel.org SGkgTWljaGFsLAoKW0FkZGluZyBGZWxpeCBhcyB3ZWxsXQoKV2VsbCBmaXJzdCBvZiBhbGwgeW91 IGhhdmUgYSBtaXNjb25jZXB0aW9uIHdoeSBhdCBsZWFzdCB0aGUgQU1EIGdyYXBoaWNzIApkcml2 ZXIgbmVlZCB0byBiZSBhYmxlIHRvIHNsZWVwIGluIGFuIE1NVSBub3RpZmllcjogV2UgbmVlZCB0 byBzbGVlcCAKYmVjYXVzZSB3ZSBuZWVkIHRvIHdhaXQgZm9yIGhhcmR3YXJlIG9wZXJhdGlvbnMg dG8gZmluaXNoIGFuZCAqTk9UKiAKYmVjYXVzZSB3ZSBuZWVkIHRvIHdhaXQgZm9yIGxvY2tzLgoK SSdtIG5vdCBzdXJlIGlmIHlvdXIgZmxhZyBub3cgbWVhbnMgdGhhdCB5b3UgZ2VuZXJhbGx5IGNh bid0IHNsZWVwIGluIApNTVUgbm90aWZpZXJzIGFueSBtb3JlLCBidXQgaWYgdGhhdCdzIHRoZSBj YXNlIGF0IGxlYXN0IEFNRCBoYXJkd2FyZSAKd2lsbCBicmVhayBiYWRseS4gSW4gb3VyIGNhc2Ug dGhlIGFwcHJvYWNoIG9mIHdhaXRpbmcgZm9yIGEgc2hvcnQgdGltZSAKZm9yIHRoZSBwcm9jZXNz IHRvIGJlIHJlYXBlZCBhbmQgdGhlbiBzZWxlY3QgYW5vdGhlciB2aWN0aW0gYWN0dWFsbHkgCnNv dW5kcyBsaWtlIHRoZSByaWdodCB0aGluZyB0byBkby4KCldoYXQgd2UgYWxzbyBhbHJlYWR5IHRy eSB0byBkbyBpcyB0byBhYm9ydCBoYXJkd2FyZSBvcGVyYXRpb25zIHdpdGggdGhlIAphZGRyZXNz IHNwYWNlIHdoZW4gd2UgZGV0ZWN0IHRoYXQgdGhlIHByb2Nlc3MgaXMgZHlpbmcsIGJ1dCB0aGF0 IGNhbiAKY2VydGFpbmx5IGJlIGltcHJvdmVkLgoKUmVnYXJkcywKQ2hyaXN0aWFuLgoKQW0gMjIu MDYuMjAxOCB1bSAxNzowMiBzY2hyaWViIE1pY2hhbCBIb2NrbzoKPiBGcm9tOiBNaWNoYWwgSG9j a28gPG1ob2Nrb0BzdXNlLmNvbT4KPgo+IFRoZXJlIGFyZSBzZXZlcmFsIGJsb2NrYWJsZSBtbXUg bm90aWZpZXJzIHdoaWNoIG1pZ2h0IHNsZWVwIGluCj4gbW11X25vdGlmaWVyX2ludmFsaWRhdGVf cmFuZ2Vfc3RhcnQgYW5kIHRoYXQgaXMgYSBwcm9ibGVtIGZvciB0aGUKPiBvb21fcmVhcGVyIGJl Y2F1c2UgaXQgbmVlZHMgdG8gZ3VhcmFudGVlIGEgZm9yd2FyZCBwcm9ncmVzcyBzbyBpdCBjYW5u b3QKPiBkZXBlbmQgb24gYW55IHNsZWVwYWJsZSBsb2Nrcy4gQ3VycmVudGx5IHdlIHNpbXBseSBi YWNrIG9mZiBhbmQgbWFyayBhbgo+IG9vbSB2aWN0aW0gd2l0aCBibG9ja2FibGUgbW11IG5vdGlm aWVycyBhcyBkb25lIGFmdGVyIGEgc2hvcnQgc2xlZXAuCj4gVGhhdCBjYW4gcmVzdWx0IGluIHNl bGVjdGluZyBhIG5ldyBvb20gdmljdGltIHByZW1hdHVyZWx5IGJlY2F1c2UgdGhlCj4gcHJldmlv dXMgb25lIHN0aWxsIGhhc24ndCB0b3JuIGl0cyBtZW1vcnkgZG93biB5ZXQuCj4KPiBXZSBjYW4g ZG8gbXVjaCBiZXR0ZXIgdGhvdWdoLiBFdmVuIGlmIG1tdSBub3RpZmllcnMgdXNlIHNsZWVwYWJs ZSBsb2Nrcwo+IHRoZXJlIGlzIG5vIHJlYXNvbiB0byBhdXRvbWF0aWNhbGx5IGFzc3VtZSB0aG9z ZSBsb2NrcyBhcmUgaGVsZC4KPiBNb3Jlb3ZlciBtb3N0IG5vdGlmaWVycyBvbmx5IGNhcmUgYWJv dXQgYSBwb3J0aW9uIG9mIHRoZSBhZGRyZXNzCj4gc3BhY2UuIFRoaXMgcGF0Y2ggaGFuZGxlcyB0 aGUgZmlyc3QgcGFydCBvZiB0aGUgcHJvYmxlbS4KPiBfX21tdV9ub3RpZmllcl9pbnZhbGlkYXRl X3JhbmdlX3N0YXJ0IGdldHMgYSBibG9ja2FibGUgZmxhZyBhbmQKPiBjYWxsYmFja3MgYXJlIG5v dCBhbGxvd2VkIHRvIHNsZWVwIGlmIHRoZSBmbGFnIGlzIHNldCB0byBmYWxzZS4gVGhpcyBpcwo+ IGFjaGlldmVkIGJ5IHVzaW5nIHRyeWxvY2sgaW5zdGVhZCBvZiB0aGUgc2xlZXBhYmxlIGxvY2sg Zm9yIG1vc3QKPiBjYWxsYmFja3MuIEkgdGhpbmsgd2UgY2FuIGltcHJvdmUgdGhhdCBldmVuIGZ1 cnRoZXIgYmVjYXVzZSB0aGVyZSBpcwo+IGEgY29tbW9uIHBhdHRlcm4gdG8gZG8gYSByYW5nZSBs b29rdXAgZmlyc3QgYW5kIHRoZW4gZG8gc29tZXRoaW5nIGFib3V0Cj4gdGhhdC4gVGhlIGZpcnN0 IHBhcnQgY2FuIGJlIGRvbmUgd2l0aG91dCBhIHNsZWVwaW5nIGxvY2sgSSBwcmVzdW1lLgo+Cj4g QW55d2F5LCB3aGF0IGRvZXMgdGhlIG9vbV9yZWFwZXIgZG8gd2l0aCBhbGwgdGhhdD8gV2UgZG8g bm90IGhhdmUgdG8KPiBmYWlsIHJpZ2h0IGF3YXkuIFdlIHNpbXBseSByZXRyeSBpZiB0aGVyZSBp cyBhdCBsZWFzdCBvbmUgbm90aWZpZXIgd2hpY2gKPiBjb3VsZG4ndCBtYWtlIGFueSBwcm9ncmVz cy4gQSByZXRyeSBsb29wIGlzIGFscmVhZHkgaW1wbGVtZW50ZWQgdG8gd2FpdAo+IGZvciB0aGUg bW1hcF9zZW0gYW5kIHRoaXMgaXMgYmFzaWNhbGx5IHRoZSBzYW1lIHRoaW5nLgo+Cj4gQ2M6ICJE YXZpZCAoQ2h1bk1pbmcpIFpob3UiIDxEYXZpZDEuWmhvdUBhbWQuY29tPgo+IENjOiBQYW9sbyBC b256aW5pIDxwYm9uemluaUByZWRoYXQuY29tPgo+IENjOiAiUmFkaW0gS3LEjW3DocWZIiA8cmty Y21hckByZWRoYXQuY29tPgo+IENjOiBBbGV4IERldWNoZXIgPGFsZXhhbmRlci5kZXVjaGVyQGFt ZC5jb20+Cj4gQ2M6ICJDaHJpc3RpYW4gS8O2bmlnIiA8Y2hyaXN0aWFuLmtvZW5pZ0BhbWQuY29t Pgo+IENjOiBEYXZpZCBBaXJsaWUgPGFpcmxpZWRAbGludXguaWU+Cj4gQ2M6IEphbmkgTmlrdWxh IDxqYW5pLm5pa3VsYUBsaW51eC5pbnRlbC5jb20+Cj4gQ2M6IEpvb25hcyBMYWh0aW5lbiA8am9v bmFzLmxhaHRpbmVuQGxpbnV4LmludGVsLmNvbT4KPiBDYzogUm9kcmlnbyBWaXZpIDxyb2RyaWdv LnZpdmlAaW50ZWwuY29tPgo+IENjOiBEb3VnIExlZGZvcmQgPGRsZWRmb3JkQHJlZGhhdC5jb20+ Cj4gQ2M6IEphc29uIEd1bnRob3JwZSA8amdnQHppZXBlLmNhPgo+IENjOiBNaWtlIE1hcmNpbmlz enluIDxtaWtlLm1hcmNpbmlzenluQGludGVsLmNvbT4KPiBDYzogRGVubmlzIERhbGVzc2FuZHJv IDxkZW5uaXMuZGFsZXNzYW5kcm9AaW50ZWwuY29tPgo+IENjOiBTdWRlZXAgRHV0dCA8c3VkZWVw LmR1dHRAaW50ZWwuY29tPgo+IENjOiBBc2h1dG9zaCBEaXhpdCA8YXNodXRvc2guZGl4aXRAaW50 ZWwuY29tPgo+IENjOiBEaW1pdHJpIFNpdmFuaWNoIDxzaXZhbmljaEBzZ2kuY29tPgo+IENjOiBC b3JpcyBPc3Ryb3Zza3kgPGJvcmlzLm9zdHJvdnNreUBvcmFjbGUuY29tPgo+IENjOiBKdWVyZ2Vu IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+Cj4gQ2M6ICJKw6lyw7RtZSBHbGlzc2UiIDxqZ2xpc3Nl QHJlZGhhdC5jb20+Cj4gQ2M6IEFuZHJlYSBBcmNhbmdlbGkgPGFhcmNhbmdlQHJlZGhhdC5jb20+ Cj4gQ2M6IGt2bUB2Z2VyLmtlcm5lbC5vcmcgKG9wZW4gbGlzdDpLRVJORUwgVklSVFVBTCBNQUNI SU5FIEZPUiBYODYgKEtWTS94ODYpKQo+IENjOiBsaW51eC1rZXJuZWxAdmdlci5rZXJuZWwub3Jn IChvcGVuIGxpc3Q6WDg2IEFSQ0hJVEVDVFVSRSAoMzItQklUIEFORCA2NC1CSVQpKQo+IENjOiBh bWQtZ2Z4QGxpc3RzLmZyZWVkZXNrdG9wLm9yZyAob3BlbiBsaXN0OlJBREVPTiBhbmQgQU1ER1BV IERSTSBEUklWRVJTKQo+IENjOiBkcmktZGV2ZWxAbGlzdHMuZnJlZWRlc2t0b3Aub3JnIChvcGVu IGxpc3Q6RFJNIERSSVZFUlMpCj4gQ2M6IGludGVsLWdmeEBsaXN0cy5mcmVlZGVza3RvcC5vcmcg KG9wZW4gbGlzdDpJTlRFTCBEUk0gRFJJVkVSUyAoZXhjbHVkaW5nIFBvdWxzYm8sIE1vb3Jlc3Rv dy4uLikKPiBDYzogbGludXgtcmRtYUB2Z2VyLmtlcm5lbC5vcmcgKG9wZW4gbGlzdDpJTkZJTklC QU5EIFNVQlNZU1RFTSkKPiBDYzogeGVuLWRldmVsQGxpc3RzLnhlbnByb2plY3Qub3JnIChtb2Rl cmF0ZWQgbGlzdDpYRU4gSFlQRVJWSVNPUiBJTlRFUkZBQ0UpCj4gQ2M6IGxpbnV4LW1tQGt2YWNr Lm9yZyAob3BlbiBsaXN0OkhNTSAtIEhldGVyb2dlbmVvdXMgTWVtb3J5IE1hbmFnZW1lbnQpCj4g UmVwb3J0ZWQtYnk6IERhdmlkIFJpZW50amVzIDxyaWVudGplc0Bnb29nbGUuY29tPgo+IFNpZ25l ZC1vZmYtYnk6IE1pY2hhbCBIb2NrbyA8bWhvY2tvQHN1c2UuY29tPgo+IC0tLQo+Cj4gSGksCj4g dGhpcyBpcyBhbiBSRkMgYW5kIG5vdCB0ZXN0ZWQgYXQgYWxsLiBJIGFtIG5vdCB2ZXJ5IGZhbWls aWFyIHdpdGggdGhlCj4gbW11IG5vdGlmaWVycyBzZW1hbnRpY3MgdmVyeSBtdWNoIHNvIHRoaXMg aXMgYSBjcnVkZSBhdHRlbXB0IHRvIGFjaGlldmUKPiB3aGF0IEkgbmVlZCBiYXNpY2FsbHkuIEl0 IG1pZ2h0IGJlIGNvbXBsZXRlbHkgd3JvbmcgYnV0IEkgd291bGQgbGlrZQo+IHRvIGRpc2N1c3Mg d2hhdCB3b3VsZCBiZSBhIGJldHRlciB3YXkgaWYgdGhhdCBpcyB0aGUgY2FzZS4KPgo+IGdldF9t YWludGFpbmVycyBnYXZlIG1lIHF1aXRlIGxhcmdlIGxpc3Qgb2YgcGVvcGxlIHRvIENDIHNvIEkg aGFkIHRvIHRyaW0KPiBpdCBkb3duLiBJZiB5b3UgdGhpbmsgSSBoYXZlIGZvcmdvdCBzb21lYm9k eSwgcGxlYXNlIGxldCBtZSBrbm93Cj4KPiBBbnkgZmVlZGJhY2sgaXMgaGlnaGx5IGFwcHJlY2lh dGVkLgo+Cj4gICBhcmNoL3g4Ni9rdm0veDg2LmMgICAgICAgICAgICAgICAgICAgICAgfCAgNyAr KysrLS0KPiAgIGRyaXZlcnMvZ3B1L2RybS9hbWQvYW1kZ3B1L2FtZGdwdV9tbi5jICB8IDMzICsr KysrKysrKysrKysrKysrKystLS0tLS0KPiAgIGRyaXZlcnMvZ3B1L2RybS9pOTE1L2k5MTVfZ2Vt X3VzZXJwdHIuYyB8IDEwICsrKysrLS0tCj4gICBkcml2ZXJzL2dwdS9kcm0vcmFkZW9uL3JhZGVv bl9tbi5jICAgICAgfCAxNSArKysrKysrKy0tLQo+ICAgZHJpdmVycy9pbmZpbmliYW5kL2NvcmUv dW1lbV9vZHAuYyAgICAgIHwgMTUgKysrKysrKystLS0KPiAgIGRyaXZlcnMvaW5maW5pYmFuZC9o dy9oZmkxL21tdV9yYi5jICAgICB8ICA3ICsrKystLQo+ICAgZHJpdmVycy9taXNjL21pYy9zY2lm L3NjaWZfZG1hLmMgICAgICAgIHwgIDcgKysrKy0tCj4gICBkcml2ZXJzL21pc2Mvc2dpLWdydS9n cnV0bGJwdXJnZS5jICAgICAgfCAgNyArKysrLS0KPiAgIGRyaXZlcnMveGVuL2dudGRldi5jICAg ICAgICAgICAgICAgICAgICB8IDE0ICsrKysrKysrLS0tCj4gICBpbmNsdWRlL2xpbnV4L2t2bV9o b3N0LmggICAgICAgICAgICAgICAgfCAgMiArLQo+ICAgaW5jbHVkZS9saW51eC9tbXVfbm90aWZp ZXIuaCAgICAgICAgICAgIHwgMTUgKysrKysrKysrLS0KPiAgIG1tL2htbS5jICAgICAgICAgICAg ICAgICAgICAgICAgICAgICAgICB8ICA3ICsrKystLQo+ICAgbW0vbW11X25vdGlmaWVyLmMgICAg ICAgICAgICAgICAgICAgICAgIHwgMTUgKysrKysrKystLS0KPiAgIG1tL29vbV9raWxsLmMgICAg ICAgICAgICAgICAgICAgICAgICAgICB8IDI5ICsrKysrKysrKysrLS0tLS0tLS0tLS0KPiAgIHZp cnQva3ZtL2t2bV9tYWluLmMgICAgICAgICAgICAgICAgICAgICB8IDEyICsrKysrKy0tLQo+ICAg MTUgZmlsZXMgY2hhbmdlZCwgMTM3IGluc2VydGlvbnMoKyksIDU4IGRlbGV0aW9ucygtKQo+Cj4g ZGlmZiAtLWdpdCBhL2FyY2gveDg2L2t2bS94ODYuYyBiL2FyY2gveDg2L2t2bS94ODYuYwo+IGlu ZGV4IDZiY2VjYzMyNWU3ZS4uYWMwOGY1ZDcxMWJlIDEwMDY0NAo+IC0tLSBhL2FyY2gveDg2L2t2 bS94ODYuYwo+ICsrKyBiL2FyY2gveDg2L2t2bS94ODYuYwo+IEBAIC03MjAzLDggKzcyMDMsOSBA QCBzdGF0aWMgdm9pZCB2Y3B1X2xvYWRfZW9pX2V4aXRtYXAoc3RydWN0IGt2bV92Y3B1ICp2Y3B1 KQo+ICAgCWt2bV94ODZfb3BzLT5sb2FkX2VvaV9leGl0bWFwKHZjcHUsIGVvaV9leGl0X2JpdG1h cCk7Cj4gICB9Cj4gICAKPiAtdm9pZCBrdm1fYXJjaF9tbXVfbm90aWZpZXJfaW52YWxpZGF0ZV9y YW5nZShzdHJ1Y3Qga3ZtICprdm0sCj4gLQkJdW5zaWduZWQgbG9uZyBzdGFydCwgdW5zaWduZWQg bG9uZyBlbmQpCj4gK2ludCBrdm1fYXJjaF9tbXVfbm90aWZpZXJfaW52YWxpZGF0ZV9yYW5nZShz dHJ1Y3Qga3ZtICprdm0sCj4gKwkJdW5zaWduZWQgbG9uZyBzdGFydCwgdW5zaWduZWQgbG9uZyBl bmQsCj4gKwkJYm9vbCBibG9ja2FibGUpCj4gICB7Cj4gICAJdW5zaWduZWQgbG9uZyBhcGljX2Fk ZHJlc3M7Cj4gICAKPiBAQCAtNzIxNSw2ICs3MjE2LDggQEAgdm9pZCBrdm1fYXJjaF9tbXVfbm90 aWZpZXJfaW52YWxpZGF0ZV9yYW5nZShzdHJ1Y3Qga3ZtICprdm0sCj4gICAJYXBpY19hZGRyZXNz ID0gZ2ZuX3RvX2h2YShrdm0sIEFQSUNfREVGQVVMVF9QSFlTX0JBU0UgPj4gUEFHRV9TSElGVCk7 Cj4gICAJaWYgKHN0YXJ0IDw9IGFwaWNfYWRkcmVzcyAmJiBhcGljX2FkZHJlc3MgPCBlbmQpCj4g ICAJCWt2bV9tYWtlX2FsbF9jcHVzX3JlcXVlc3Qoa3ZtLCBLVk1fUkVRX0FQSUNfUEFHRV9SRUxP QUQpOwo+ICsKPiArCXJldHVybiAwOwo+ICAgfQo+ICAgCj4gICB2b2lkIGt2bV92Y3B1X3JlbG9h ZF9hcGljX2FjY2Vzc19wYWdlKHN0cnVjdCBrdm1fdmNwdSAqdmNwdSkKPiBkaWZmIC0tZ2l0IGEv ZHJpdmVycy9ncHUvZHJtL2FtZC9hbWRncHUvYW1kZ3B1X21uLmMgYi9kcml2ZXJzL2dwdS9kcm0v YW1kL2FtZGdwdS9hbWRncHVfbW4uYwo+IGluZGV4IDgzZTM0NGZiYjUwYS4uZDEzOGE1MjZmZWZm IDEwMDY0NAo+IC0tLSBhL2RyaXZlcnMvZ3B1L2RybS9hbWQvYW1kZ3B1L2FtZGdwdV9tbi5jCj4g KysrIGIvZHJpdmVycy9ncHUvZHJtL2FtZC9hbWRncHUvYW1kZ3B1X21uLmMKPiBAQCAtMTM2LDEy ICsxMzYsMTggQEAgdm9pZCBhbWRncHVfbW5fdW5sb2NrKHN0cnVjdCBhbWRncHVfbW4gKm1uKQo+ ICAgICoKPiAgICAqIFRha2UgdGhlIHJtbiByZWFkIHNpZGUgbG9jay4KPiAgICAqLwo+IC1zdGF0 aWMgdm9pZCBhbWRncHVfbW5fcmVhZF9sb2NrKHN0cnVjdCBhbWRncHVfbW4gKnJtbikKPiArc3Rh dGljIGludCBhbWRncHVfbW5fcmVhZF9sb2NrKHN0cnVjdCBhbWRncHVfbW4gKnJtbiwgYm9vbCBi bG9ja2FibGUpCj4gICB7Cj4gLQltdXRleF9sb2NrKCZybW4tPnJlYWRfbG9jayk7Cj4gKwlpZiAo YmxvY2thYmxlKQo+ICsJCW11dGV4X2xvY2soJnJtbi0+cmVhZF9sb2NrKTsKPiArCWVsc2UgaWYg KCFtdXRleF90cnlsb2NrKCZybW4tPnJlYWRfbG9jaykpCj4gKwkJcmV0dXJuIC1FQUdBSU47Cj4g Kwo+ICAgCWlmIChhdG9taWNfaW5jX3JldHVybigmcm1uLT5yZWN1cnNpb24pID09IDEpCj4gICAJ CWRvd25fcmVhZF9ub25fb3duZXIoJnJtbi0+bG9jayk7Cj4gICAJbXV0ZXhfdW5sb2NrKCZybW4t PnJlYWRfbG9jayk7Cj4gKwo+ICsJcmV0dXJuIDA7Cj4gICB9Cj4gICAKPiAgIC8qKgo+IEBAIC0x OTcsMTAgKzIwMywxMSBAQCBzdGF0aWMgdm9pZCBhbWRncHVfbW5faW52YWxpZGF0ZV9ub2RlKHN0 cnVjdCBhbWRncHVfbW5fbm9kZSAqbm9kZSwKPiAgICAqIFdlIGJsb2NrIGZvciBhbGwgQk9zIGJl dHdlZW4gc3RhcnQgYW5kIGVuZCB0byBiZSBpZGxlIGFuZAo+ICAgICogdW5tYXAgdGhlbSBieSBt b3ZlIHRoZW0gaW50byBzeXN0ZW0gZG9tYWluIGFnYWluLgo+ICAgICovCj4gLXN0YXRpYyB2b2lk IGFtZGdwdV9tbl9pbnZhbGlkYXRlX3JhbmdlX3N0YXJ0X2dmeChzdHJ1Y3QgbW11X25vdGlmaWVy ICptbiwKPiArc3RhdGljIGludCBhbWRncHVfbW5faW52YWxpZGF0ZV9yYW5nZV9zdGFydF9nZngo c3RydWN0IG1tdV9ub3RpZmllciAqbW4sCj4gICAJCQkJCQkgc3RydWN0IG1tX3N0cnVjdCAqbW0s Cj4gICAJCQkJCQkgdW5zaWduZWQgbG9uZyBzdGFydCwKPiAtCQkJCQkJIHVuc2lnbmVkIGxvbmcg ZW5kKQo+ICsJCQkJCQkgdW5zaWduZWQgbG9uZyBlbmQsCj4gKwkJCQkJCSBib29sIGJsb2NrYWJs ZSkKPiAgIHsKPiAgIAlzdHJ1Y3QgYW1kZ3B1X21uICpybW4gPSBjb250YWluZXJfb2YobW4sIHN0 cnVjdCBhbWRncHVfbW4sIG1uKTsKPiAgIAlzdHJ1Y3QgaW50ZXJ2YWxfdHJlZV9ub2RlICppdDsK PiBAQCAtMjA4LDcgKzIxNSwxMSBAQCBzdGF0aWMgdm9pZCBhbWRncHVfbW5faW52YWxpZGF0ZV9y YW5nZV9zdGFydF9nZngoc3RydWN0IG1tdV9ub3RpZmllciAqbW4sCj4gICAJLyogbm90aWZpY2F0 aW9uIGlzIGV4Y2x1c2l2ZSwgYnV0IGludGVydmFsIGlzIGluY2x1c2l2ZSAqLwo+ICAgCWVuZCAt PSAxOwo+ICAgCj4gLQlhbWRncHVfbW5fcmVhZF9sb2NrKHJtbik7Cj4gKwkvKiBUT0RPIHdlIHNo b3VsZCBiZSBhYmxlIHRvIHNwbGl0IGxvY2tpbmcgZm9yIGludGVydmFsIHRyZWUgYW5kCj4gKwkg KiBhbWRncHVfbW5faW52YWxpZGF0ZV9ub2RlCj4gKwkgKi8KPiArCWlmIChhbWRncHVfbW5fcmVh ZF9sb2NrKHJtbiwgYmxvY2thYmxlKSkKPiArCQlyZXR1cm4gLUVBR0FJTjsKPiAgIAo+ICAgCWl0 ID0gaW50ZXJ2YWxfdHJlZV9pdGVyX2ZpcnN0KCZybW4tPm9iamVjdHMsIHN0YXJ0LCBlbmQpOwo+ ICAgCXdoaWxlIChpdCkgewo+IEBAIC0yMTksNiArMjMwLDggQEAgc3RhdGljIHZvaWQgYW1kZ3B1 X21uX2ludmFsaWRhdGVfcmFuZ2Vfc3RhcnRfZ2Z4KHN0cnVjdCBtbXVfbm90aWZpZXIgKm1uLAo+ ICAgCj4gICAJCWFtZGdwdV9tbl9pbnZhbGlkYXRlX25vZGUobm9kZSwgc3RhcnQsIGVuZCk7Cj4g ICAJfQo+ICsKPiArCXJldHVybiAwOwo+ICAgfQo+ICAgCj4gICAvKioKPiBAQCAtMjMzLDEwICsy NDYsMTEgQEAgc3RhdGljIHZvaWQgYW1kZ3B1X21uX2ludmFsaWRhdGVfcmFuZ2Vfc3RhcnRfZ2Z4 KHN0cnVjdCBtbXVfbm90aWZpZXIgKm1uLAo+ICAgICogbmVjZXNzaXRhdGVzIGV2aWN0aW5nIGFs bCB1c2VyLW1vZGUgcXVldWVzIG9mIHRoZSBwcm9jZXNzLiBUaGUgQk9zCj4gICAgKiBhcmUgcmVz dG9ydGVkIGluIGFtZGdwdV9tbl9pbnZhbGlkYXRlX3JhbmdlX2VuZF9oc2EuCj4gICAgKi8KPiAt c3RhdGljIHZvaWQgYW1kZ3B1X21uX2ludmFsaWRhdGVfcmFuZ2Vfc3RhcnRfaHNhKHN0cnVjdCBt bXVfbm90aWZpZXIgKm1uLAo+ICtzdGF0aWMgaW50IGFtZGdwdV9tbl9pbnZhbGlkYXRlX3Jhbmdl X3N0YXJ0X2hzYShzdHJ1Y3QgbW11X25vdGlmaWVyICptbiwKPiAgIAkJCQkJCSBzdHJ1Y3QgbW1f c3RydWN0ICptbSwKPiAgIAkJCQkJCSB1bnNpZ25lZCBsb25nIHN0YXJ0LAo+IC0JCQkJCQkgdW5z aWduZWQgbG9uZyBlbmQpCj4gKwkJCQkJCSB1bnNpZ25lZCBsb25nIGVuZCwKPiArCQkJCQkJIGJv b2wgYmxvY2thYmxlKQo+ICAgewo+ICAgCXN0cnVjdCBhbWRncHVfbW4gKnJtbiA9IGNvbnRhaW5l cl9vZihtbiwgc3RydWN0IGFtZGdwdV9tbiwgbW4pOwo+ICAgCXN0cnVjdCBpbnRlcnZhbF90cmVl X25vZGUgKml0Owo+IEBAIC0yNDQsNyArMjU4LDggQEAgc3RhdGljIHZvaWQgYW1kZ3B1X21uX2lu dmFsaWRhdGVfcmFuZ2Vfc3RhcnRfaHNhKHN0cnVjdCBtbXVfbm90aWZpZXIgKm1uLAo+ICAgCS8q IG5vdGlmaWNhdGlvbiBpcyBleGNsdXNpdmUsIGJ1dCBpbnRlcnZhbCBpcyBpbmNsdXNpdmUgKi8K PiAgIAllbmQgLT0gMTsKPiAgIAo+IC0JYW1kZ3B1X21uX3JlYWRfbG9jayhybW4pOwo+ICsJaWYg KGFtZGdwdV9tbl9yZWFkX2xvY2socm1uLCBibG9ja2FibGUpKQo+ICsJCXJldHVybiAtRUFHQUlO Owo+ICAgCj4gICAJaXQgPSBpbnRlcnZhbF90cmVlX2l0ZXJfZmlyc3QoJnJtbi0+b2JqZWN0cywg c3RhcnQsIGVuZCk7Cj4gICAJd2hpbGUgKGl0KSB7Cj4gQEAgLTI2Miw2ICsyNzcsOCBAQCBzdGF0 aWMgdm9pZCBhbWRncHVfbW5faW52YWxpZGF0ZV9yYW5nZV9zdGFydF9oc2Eoc3RydWN0IG1tdV9u b3RpZmllciAqbW4sCj4gICAJCQkJYW1kZ3B1X2FtZGtmZF9ldmljdF91c2VycHRyKG1lbSwgbW0p Owo+ICAgCQl9Cj4gICAJfQo+ICsKPiArCXJldHVybiAwOwo+ICAgfQo+ICAgCj4gICAvKioKPiBk aWZmIC0tZ2l0IGEvZHJpdmVycy9ncHUvZHJtL2k5MTUvaTkxNV9nZW1fdXNlcnB0ci5jIGIvZHJp dmVycy9ncHUvZHJtL2k5MTUvaTkxNV9nZW1fdXNlcnB0ci5jCj4gaW5kZXggODU0YmQ1MWI5NDc4 Li41Mjg1ZGY5MzMxZmEgMTAwNjQ0Cj4gLS0tIGEvZHJpdmVycy9ncHUvZHJtL2k5MTUvaTkxNV9n ZW1fdXNlcnB0ci5jCj4gKysrIGIvZHJpdmVycy9ncHUvZHJtL2k5MTUvaTkxNV9nZW1fdXNlcnB0 ci5jCj4gQEAgLTExMiwxMCArMTEyLDExIEBAIHN0YXRpYyB2b2lkIGRlbF9vYmplY3Qoc3RydWN0 IGk5MTVfbW11X29iamVjdCAqbW8pCj4gICAJbW8tPmF0dGFjaGVkID0gZmFsc2U7Cj4gICB9Cj4g ICAKPiAtc3RhdGljIHZvaWQgaTkxNV9nZW1fdXNlcnB0cl9tbl9pbnZhbGlkYXRlX3JhbmdlX3N0 YXJ0KHN0cnVjdCBtbXVfbm90aWZpZXIgKl9tbiwKPiArc3RhdGljIGludCBpOTE1X2dlbV91c2Vy cHRyX21uX2ludmFsaWRhdGVfcmFuZ2Vfc3RhcnQoc3RydWN0IG1tdV9ub3RpZmllciAqX21uLAo+ ICAgCQkJCQkJICAgICAgIHN0cnVjdCBtbV9zdHJ1Y3QgKm1tLAo+ICAgCQkJCQkJICAgICAgIHVu c2lnbmVkIGxvbmcgc3RhcnQsCj4gLQkJCQkJCSAgICAgICB1bnNpZ25lZCBsb25nIGVuZCkKPiAr CQkJCQkJICAgICAgIHVuc2lnbmVkIGxvbmcgZW5kLAo+ICsJCQkJCQkgICAgICAgYm9vbCBibG9j a2FibGUpCj4gICB7Cj4gICAJc3RydWN0IGk5MTVfbW11X25vdGlmaWVyICptbiA9Cj4gICAJCWNv bnRhaW5lcl9vZihfbW4sIHN0cnVjdCBpOTE1X21tdV9ub3RpZmllciwgbW4pOwo+IEBAIC0xMjQs NyArMTI1LDcgQEAgc3RhdGljIHZvaWQgaTkxNV9nZW1fdXNlcnB0cl9tbl9pbnZhbGlkYXRlX3Jh bmdlX3N0YXJ0KHN0cnVjdCBtbXVfbm90aWZpZXIgKl9tbiwKPiAgIAlMSVNUX0hFQUQoY2FuY2Vs bGVkKTsKPiAgIAo+ICAgCWlmIChSQl9FTVBUWV9ST09UKCZtbi0+b2JqZWN0cy5yYl9yb290KSkK PiAtCQlyZXR1cm47Cj4gKwkJcmV0dXJuIDA7Cj4gICAKPiAgIAkvKiBpbnRlcnZhbCByYW5nZXMg YXJlIGluY2x1c2l2ZSwgYnV0IGludmFsaWRhdGUgcmFuZ2UgaXMgZXhjbHVzaXZlICovCj4gICAJ ZW5kLS07Cj4gQEAgLTE1Miw3ICsxNTMsOCBAQCBzdGF0aWMgdm9pZCBpOTE1X2dlbV91c2VycHRy X21uX2ludmFsaWRhdGVfcmFuZ2Vfc3RhcnQoc3RydWN0IG1tdV9ub3RpZmllciAqX21uLAo+ICAg CQlkZWxfb2JqZWN0KG1vKTsKPiAgIAlzcGluX3VubG9jaygmbW4tPmxvY2spOwo+ICAgCj4gLQlp ZiAoIWxpc3RfZW1wdHkoJmNhbmNlbGxlZCkpCj4gKwkvKiBUT0RPOiBjYW4gd2Ugc2tpcCB3YWl0 aW5nIGhlcmU/ICovCj4gKwlpZiAoIWxpc3RfZW1wdHkoJmNhbmNlbGxlZCkgJiYgYmxvY2thYmxl KQo+ICAgCQlmbHVzaF93b3JrcXVldWUobW4tPndxKTsKPiAgIH0KPiAgIAo+IGRpZmYgLS1naXQg YS9kcml2ZXJzL2dwdS9kcm0vcmFkZW9uL3JhZGVvbl9tbi5jIGIvZHJpdmVycy9ncHUvZHJtL3Jh ZGVvbi9yYWRlb25fbW4uYwo+IGluZGV4IGFiZDI0OTc1YzliMS4uYjQ3ZTgyOGI3MjVkIDEwMDY0 NAo+IC0tLSBhL2RyaXZlcnMvZ3B1L2RybS9yYWRlb24vcmFkZW9uX21uLmMKPiArKysgYi9kcml2 ZXJzL2dwdS9kcm0vcmFkZW9uL3JhZGVvbl9tbi5jCj4gQEAgLTExOCwxMCArMTE4LDExIEBAIHN0 YXRpYyB2b2lkIHJhZGVvbl9tbl9yZWxlYXNlKHN0cnVjdCBtbXVfbm90aWZpZXIgKm1uLAo+ICAg ICogV2UgYmxvY2sgZm9yIGFsbCBCT3MgYmV0d2VlbiBzdGFydCBhbmQgZW5kIHRvIGJlIGlkbGUg YW5kCj4gICAgKiB1bm1hcCB0aGVtIGJ5IG1vdmUgdGhlbSBpbnRvIHN5c3RlbSBkb21haW4gYWdh aW4uCj4gICAgKi8KPiAtc3RhdGljIHZvaWQgcmFkZW9uX21uX2ludmFsaWRhdGVfcmFuZ2Vfc3Rh cnQoc3RydWN0IG1tdV9ub3RpZmllciAqbW4sCj4gK3N0YXRpYyBpbnQgcmFkZW9uX21uX2ludmFs aWRhdGVfcmFuZ2Vfc3RhcnQoc3RydWN0IG1tdV9ub3RpZmllciAqbW4sCj4gICAJCQkJCSAgICAg c3RydWN0IG1tX3N0cnVjdCAqbW0sCj4gICAJCQkJCSAgICAgdW5zaWduZWQgbG9uZyBzdGFydCwK PiAtCQkJCQkgICAgIHVuc2lnbmVkIGxvbmcgZW5kKQo+ICsJCQkJCSAgICAgdW5zaWduZWQgbG9u ZyBlbmQsCj4gKwkJCQkJICAgICBib29sIGJsb2NrYWJsZSkKPiAgIHsKPiAgIAlzdHJ1Y3QgcmFk ZW9uX21uICpybW4gPSBjb250YWluZXJfb2YobW4sIHN0cnVjdCByYWRlb25fbW4sIG1uKTsKPiAg IAlzdHJ1Y3QgdHRtX29wZXJhdGlvbl9jdHggY3R4ID0geyBmYWxzZSwgZmFsc2UgfTsKPiBAQCAt MTMwLDcgKzEzMSwxMyBAQCBzdGF0aWMgdm9pZCByYWRlb25fbW5faW52YWxpZGF0ZV9yYW5nZV9z dGFydChzdHJ1Y3QgbW11X25vdGlmaWVyICptbiwKPiAgIAkvKiBub3RpZmljYXRpb24gaXMgZXhj bHVzaXZlLCBidXQgaW50ZXJ2YWwgaXMgaW5jbHVzaXZlICovCj4gICAJZW5kIC09IDE7Cj4gICAK PiAtCW11dGV4X2xvY2soJnJtbi0+bG9jayk7Cj4gKwkvKiBUT0RPIHdlIHNob3VsZCBiZSBhYmxl IHRvIHNwbGl0IGxvY2tpbmcgZm9yIGludGVydmFsIHRyZWUgYW5kCj4gKwkgKiB0aGUgdGVhciBk b3duLgo+ICsJICovCj4gKwlpZiAoYmxvY2thYmxlKQo+ICsJCW11dGV4X2xvY2soJnJtbi0+bG9j ayk7Cj4gKwllbHNlIGlmICghbXV0ZXhfdHJ5bG9jaygmcm1uLT5sb2NrKSkKPiArCQlyZXR1cm4g LUVBR0FJTjsKPiAgIAo+ICAgCWl0ID0gaW50ZXJ2YWxfdHJlZV9pdGVyX2ZpcnN0KCZybW4tPm9i amVjdHMsIHN0YXJ0LCBlbmQpOwo+ICAgCXdoaWxlIChpdCkgewo+IEBAIC0xNjcsNiArMTc0LDgg QEAgc3RhdGljIHZvaWQgcmFkZW9uX21uX2ludmFsaWRhdGVfcmFuZ2Vfc3RhcnQoc3RydWN0IG1t dV9ub3RpZmllciAqbW4sCj4gICAJfQo+ICAgCQo+ICAgCW11dGV4X3VubG9jaygmcm1uLT5sb2Nr KTsKPiArCj4gKwlyZXR1cm4gMDsKPiAgIH0KPiAgIAo+ICAgc3RhdGljIGNvbnN0IHN0cnVjdCBt bXVfbm90aWZpZXJfb3BzIHJhZGVvbl9tbl9vcHMgPSB7Cj4gZGlmZiAtLWdpdCBhL2RyaXZlcnMv aW5maW5pYmFuZC9jb3JlL3VtZW1fb2RwLmMgYi9kcml2ZXJzL2luZmluaWJhbmQvY29yZS91bWVt X29kcC5jCj4gaW5kZXggMTgyNDM2YjkyYmE5Li5mNjVmNmEyOWRhYWUgMTAwNjQ0Cj4gLS0tIGEv ZHJpdmVycy9pbmZpbmliYW5kL2NvcmUvdW1lbV9vZHAuYwo+ICsrKyBiL2RyaXZlcnMvaW5maW5p YmFuZC9jb3JlL3VtZW1fb2RwLmMKPiBAQCAtMjA3LDIyICsyMDcsMjkgQEAgc3RhdGljIGludCBp bnZhbGlkYXRlX3JhbmdlX3N0YXJ0X3RyYW1wb2xpbmUoc3RydWN0IGliX3VtZW0gKml0ZW0sIHU2 NCBzdGFydCwKPiAgIAlyZXR1cm4gMDsKPiAgIH0KPiAgIAo+IC1zdGF0aWMgdm9pZCBpYl91bWVt X25vdGlmaWVyX2ludmFsaWRhdGVfcmFuZ2Vfc3RhcnQoc3RydWN0IG1tdV9ub3RpZmllciAqbW4s Cj4gK3N0YXRpYyBpbnQgaWJfdW1lbV9ub3RpZmllcl9pbnZhbGlkYXRlX3JhbmdlX3N0YXJ0KHN0 cnVjdCBtbXVfbm90aWZpZXIgKm1uLAo+ICAgCQkJCQkJICAgIHN0cnVjdCBtbV9zdHJ1Y3QgKm1t LAo+ICAgCQkJCQkJICAgIHVuc2lnbmVkIGxvbmcgc3RhcnQsCj4gLQkJCQkJCSAgICB1bnNpZ25l ZCBsb25nIGVuZCkKPiArCQkJCQkJICAgIHVuc2lnbmVkIGxvbmcgZW5kLAo+ICsJCQkJCQkgICAg Ym9vbCBibG9ja2FibGUpCj4gICB7Cj4gICAJc3RydWN0IGliX3Vjb250ZXh0ICpjb250ZXh0ID0g Y29udGFpbmVyX29mKG1uLCBzdHJ1Y3QgaWJfdWNvbnRleHQsIG1uKTsKPiAgIAo+ICAgCWlmICgh Y29udGV4dC0+aW52YWxpZGF0ZV9yYW5nZSkKPiAtCQlyZXR1cm47Cj4gKwkJcmV0dXJuIDA7Cj4g Kwo+ICsJaWYgKGJsb2NrYWJsZSkKPiArCQlkb3duX3JlYWQoJmNvbnRleHQtPnVtZW1fcndzZW0p Owo+ICsJZWxzZSBpZiAoIWRvd25fcmVhZF90cnlsb2NrKCZjb250ZXh0LT51bWVtX3J3c2VtKSkK PiArCQlyZXR1cm4gLUVBR0FJTjsKPiAgIAo+ICAgCWliX3Vjb250ZXh0X25vdGlmaWVyX3N0YXJ0 X2FjY291bnQoY29udGV4dCk7Cj4gLQlkb3duX3JlYWQoJmNvbnRleHQtPnVtZW1fcndzZW0pOwo+ ICAgCXJidF9pYl91bWVtX2Zvcl9lYWNoX2luX3JhbmdlKCZjb250ZXh0LT51bWVtX3RyZWUsIHN0 YXJ0LAo+ICAgCQkJCSAgICAgIGVuZCwKPiAgIAkJCQkgICAgICBpbnZhbGlkYXRlX3JhbmdlX3N0 YXJ0X3RyYW1wb2xpbmUsIE5VTEwpOwo+ICAgCXVwX3JlYWQoJmNvbnRleHQtPnVtZW1fcndzZW0p Owo+ICsKPiArCXJldHVybiAwOwo+ICAgfQo+ICAgCj4gICBzdGF0aWMgaW50IGludmFsaWRhdGVf cmFuZ2VfZW5kX3RyYW1wb2xpbmUoc3RydWN0IGliX3VtZW0gKml0ZW0sIHU2NCBzdGFydCwKPiBk aWZmIC0tZ2l0IGEvZHJpdmVycy9pbmZpbmliYW5kL2h3L2hmaTEvbW11X3JiLmMgYi9kcml2ZXJz L2luZmluaWJhbmQvaHcvaGZpMS9tbXVfcmIuYwo+IGluZGV4IDcwYWNlZWZlMTRkNS4uODc4MDU2 MGQxNjIzIDEwMDY0NAo+IC0tLSBhL2RyaXZlcnMvaW5maW5pYmFuZC9ody9oZmkxL21tdV9yYi5j Cj4gKysrIGIvZHJpdmVycy9pbmZpbmliYW5kL2h3L2hmaTEvbW11X3JiLmMKPiBAQCAtMjg0LDEw ICsyODQsMTEgQEAgdm9pZCBoZmkxX21tdV9yYl9yZW1vdmUoc3RydWN0IG1tdV9yYl9oYW5kbGVy ICpoYW5kbGVyLAo+ICAgCWhhbmRsZXItPm9wcy0+cmVtb3ZlKGhhbmRsZXItPm9wc19hcmcsIG5v ZGUpOwo+ICAgfQo+ICAgCj4gLXN0YXRpYyB2b2lkIG1tdV9ub3RpZmllcl9yYW5nZV9zdGFydChz dHJ1Y3QgbW11X25vdGlmaWVyICptbiwKPiArc3RhdGljIGludCBtbXVfbm90aWZpZXJfcmFuZ2Vf c3RhcnQoc3RydWN0IG1tdV9ub3RpZmllciAqbW4sCj4gICAJCQkJICAgICBzdHJ1Y3QgbW1fc3Ry dWN0ICptbSwKPiAgIAkJCQkgICAgIHVuc2lnbmVkIGxvbmcgc3RhcnQsCj4gLQkJCQkgICAgIHVu c2lnbmVkIGxvbmcgZW5kKQo+ICsJCQkJICAgICB1bnNpZ25lZCBsb25nIGVuZCwKPiArCQkJCSAg ICAgYm9vbCBibG9ja2FibGUpCj4gICB7Cj4gICAJc3RydWN0IG1tdV9yYl9oYW5kbGVyICpoYW5k bGVyID0KPiAgIAkJY29udGFpbmVyX29mKG1uLCBzdHJ1Y3QgbW11X3JiX2hhbmRsZXIsIG1uKTsK PiBAQCAtMzEzLDYgKzMxNCw4IEBAIHN0YXRpYyB2b2lkIG1tdV9ub3RpZmllcl9yYW5nZV9zdGFy dChzdHJ1Y3QgbW11X25vdGlmaWVyICptbiwKPiAgIAo+ICAgCWlmIChhZGRlZCkKPiAgIAkJcXVl dWVfd29yayhoYW5kbGVyLT53cSwgJmhhbmRsZXItPmRlbF93b3JrKTsKPiArCj4gKwlyZXR1cm4g MDsKPiAgIH0KPiAgIAo+ICAgLyoKPiBkaWZmIC0tZ2l0IGEvZHJpdmVycy9taXNjL21pYy9zY2lm L3NjaWZfZG1hLmMgYi9kcml2ZXJzL21pc2MvbWljL3NjaWYvc2NpZl9kbWEuYwo+IGluZGV4IDYz ZDYyNDZkNmRmZi4uZDk0MDU2OGJlZDg3IDEwMDY0NAo+IC0tLSBhL2RyaXZlcnMvbWlzYy9taWMv c2NpZi9zY2lmX2RtYS5jCj4gKysrIGIvZHJpdmVycy9taXNjL21pYy9zY2lmL3NjaWZfZG1hLmMK PiBAQCAtMjAwLDE1ICsyMDAsMTggQEAgc3RhdGljIHZvaWQgc2NpZl9tbXVfbm90aWZpZXJfcmVs ZWFzZShzdHJ1Y3QgbW11X25vdGlmaWVyICptbiwKPiAgIAlzY2hlZHVsZV93b3JrKCZzY2lmX2lu Zm8ubWlzY193b3JrKTsKPiAgIH0KPiAgIAo+IC1zdGF0aWMgdm9pZCBzY2lmX21tdV9ub3RpZmll cl9pbnZhbGlkYXRlX3JhbmdlX3N0YXJ0KHN0cnVjdCBtbXVfbm90aWZpZXIgKm1uLAo+ICtzdGF0 aWMgaW50IHNjaWZfbW11X25vdGlmaWVyX2ludmFsaWRhdGVfcmFuZ2Vfc3RhcnQoc3RydWN0IG1t dV9ub3RpZmllciAqbW4sCj4gICAJCQkJCQkgICAgIHN0cnVjdCBtbV9zdHJ1Y3QgKm1tLAo+ICAg CQkJCQkJICAgICB1bnNpZ25lZCBsb25nIHN0YXJ0LAo+IC0JCQkJCQkgICAgIHVuc2lnbmVkIGxv bmcgZW5kKQo+ICsJCQkJCQkgICAgIHVuc2lnbmVkIGxvbmcgZW5kLAo+ICsJCQkJCQkgICAgIGJv b2wgYmxvY2thYmxlKQo+ICAgewo+ICAgCXN0cnVjdCBzY2lmX21tdV9ub3RpZgkqbW1uOwo+ICAg Cj4gICAJbW1uID0gY29udGFpbmVyX29mKG1uLCBzdHJ1Y3Qgc2NpZl9tbXVfbm90aWYsIGVwX21t dV9ub3RpZmllcik7Cj4gICAJc2NpZl9ybWFfZGVzdHJveV90Y3cobW1uLCBzdGFydCwgZW5kIC0g c3RhcnQpOwo+ICsKPiArCXJldHVybiAwCj4gICB9Cj4gICAKPiAgIHN0YXRpYyB2b2lkIHNjaWZf bW11X25vdGlmaWVyX2ludmFsaWRhdGVfcmFuZ2VfZW5kKHN0cnVjdCBtbXVfbm90aWZpZXIgKm1u LAo+IGRpZmYgLS1naXQgYS9kcml2ZXJzL21pc2Mvc2dpLWdydS9ncnV0bGJwdXJnZS5jIGIvZHJp dmVycy9taXNjL3NnaS1ncnUvZ3J1dGxicHVyZ2UuYwo+IGluZGV4IGEzNDU0ZWI1NmZiZi4uYmUy OGYwNWJmYWZhIDEwMDY0NAo+IC0tLSBhL2RyaXZlcnMvbWlzYy9zZ2ktZ3J1L2dydXRsYnB1cmdl LmMKPiArKysgYi9kcml2ZXJzL21pc2Mvc2dpLWdydS9ncnV0bGJwdXJnZS5jCj4gQEAgLTIxOSw5 ICsyMTksMTAgQEAgdm9pZCBncnVfZmx1c2hfYWxsX3RsYihzdHJ1Y3QgZ3J1X3N0YXRlICpncnUp Cj4gICAvKgo+ICAgICogTU1VT1BTIG5vdGlmaWVyIGNhbGxvdXQgZnVuY3Rpb25zCj4gICAgKi8K PiAtc3RhdGljIHZvaWQgZ3J1X2ludmFsaWRhdGVfcmFuZ2Vfc3RhcnQoc3RydWN0IG1tdV9ub3Rp ZmllciAqbW4sCj4gK3N0YXRpYyBpbnQgZ3J1X2ludmFsaWRhdGVfcmFuZ2Vfc3RhcnQoc3RydWN0 IG1tdV9ub3RpZmllciAqbW4sCj4gICAJCQkJICAgICAgIHN0cnVjdCBtbV9zdHJ1Y3QgKm1tLAo+ IC0JCQkJICAgICAgIHVuc2lnbmVkIGxvbmcgc3RhcnQsIHVuc2lnbmVkIGxvbmcgZW5kKQo+ICsJ CQkJICAgICAgIHVuc2lnbmVkIGxvbmcgc3RhcnQsIHVuc2lnbmVkIGxvbmcgZW5kLAo+ICsJCQkJ ICAgICAgIGJvb2wgYmxvY2thYmxlKQo+ICAgewo+ICAgCXN0cnVjdCBncnVfbW1fc3RydWN0ICpn bXMgPSBjb250YWluZXJfb2YobW4sIHN0cnVjdCBncnVfbW1fc3RydWN0LAo+ICAgCQkJCQkJIG1z X25vdGlmaWVyKTsKPiBAQCAtMjMxLDYgKzIzMiw4IEBAIHN0YXRpYyB2b2lkIGdydV9pbnZhbGlk YXRlX3JhbmdlX3N0YXJ0KHN0cnVjdCBtbXVfbm90aWZpZXIgKm1uLAo+ICAgCWdydV9kYmcoZ3J1 ZGV2LCAiZ21zICVwLCBzdGFydCAweCVseCwgZW5kIDB4JWx4LCBhY3QgJWRcbiIsIGdtcywKPiAg IAkJc3RhcnQsIGVuZCwgYXRvbWljX3JlYWQoJmdtcy0+bXNfcmFuZ2VfYWN0aXZlKSk7Cj4gICAJ Z3J1X2ZsdXNoX3RsYl9yYW5nZShnbXMsIHN0YXJ0LCBlbmQgLSBzdGFydCk7Cj4gKwo+ICsJcmV0 dXJuIDA7Cj4gICB9Cj4gICAKPiAgIHN0YXRpYyB2b2lkIGdydV9pbnZhbGlkYXRlX3JhbmdlX2Vu ZChzdHJ1Y3QgbW11X25vdGlmaWVyICptbiwKPiBkaWZmIC0tZ2l0IGEvZHJpdmVycy94ZW4vZ250 ZGV2LmMgYi9kcml2ZXJzL3hlbi9nbnRkZXYuYwo+IGluZGV4IGJkNTY2NTNiOWJiYy4uNTA3MjRk MDlmZTVjIDEwMDY0NAo+IC0tLSBhL2RyaXZlcnMveGVuL2dudGRldi5jCj4gKysrIGIvZHJpdmVy cy94ZW4vZ250ZGV2LmMKPiBAQCAtNDY1LDE0ICs0NjUsMjAgQEAgc3RhdGljIHZvaWQgdW5tYXBf aWZfaW5fcmFuZ2Uoc3RydWN0IGdyYW50X21hcCAqbWFwLAo+ICAgCVdBUk5fT04oZXJyKTsKPiAg IH0KPiAgIAo+IC1zdGF0aWMgdm9pZCBtbl9pbnZsX3JhbmdlX3N0YXJ0KHN0cnVjdCBtbXVfbm90 aWZpZXIgKm1uLAo+ICtzdGF0aWMgaW50IG1uX2ludmxfcmFuZ2Vfc3RhcnQoc3RydWN0IG1tdV9u b3RpZmllciAqbW4sCj4gICAJCQkJc3RydWN0IG1tX3N0cnVjdCAqbW0sCj4gLQkJCQl1bnNpZ25l ZCBsb25nIHN0YXJ0LCB1bnNpZ25lZCBsb25nIGVuZCkKPiArCQkJCXVuc2lnbmVkIGxvbmcgc3Rh cnQsIHVuc2lnbmVkIGxvbmcgZW5kLAo+ICsJCQkJYm9vbCBibG9ja2FibGUpCj4gICB7Cj4gICAJ c3RydWN0IGdudGRldl9wcml2ICpwcml2ID0gY29udGFpbmVyX29mKG1uLCBzdHJ1Y3QgZ250ZGV2 X3ByaXYsIG1uKTsKPiAgIAlzdHJ1Y3QgZ3JhbnRfbWFwICptYXA7Cj4gICAKPiAtCW11dGV4X2xv Y2soJnByaXYtPmxvY2spOwo+ICsJLyogVE9ETyBkbyB3ZSByZWFsbHkgbmVlZCBhIG11dGV4IGhl cmU/ICovCj4gKwlpZiAoYmxvY2thYmxlKQo+ICsJCW11dGV4X2xvY2soJnByaXYtPmxvY2spOwo+ ICsJZWxzZSBpZiAoIW11dGV4X3RyeWxvY2soJnByaXYtPmxvY2spKQo+ICsJCXJldHVybiAtRUFH QUlOOwo+ICsKPiAgIAlsaXN0X2Zvcl9lYWNoX2VudHJ5KG1hcCwgJnByaXYtPm1hcHMsIG5leHQp IHsKPiAgIAkJdW5tYXBfaWZfaW5fcmFuZ2UobWFwLCBzdGFydCwgZW5kKTsKPiAgIAl9Cj4gQEAg LTQ4MCw2ICs0ODYsOCBAQCBzdGF0aWMgdm9pZCBtbl9pbnZsX3JhbmdlX3N0YXJ0KHN0cnVjdCBt bXVfbm90aWZpZXIgKm1uLAo+ICAgCQl1bm1hcF9pZl9pbl9yYW5nZShtYXAsIHN0YXJ0LCBlbmQp Owo+ICAgCX0KPiAgIAltdXRleF91bmxvY2soJnByaXYtPmxvY2spOwo+ICsKPiArCXJldHVybiB0 cnVlOwo+ICAgfQo+ICAgCj4gICBzdGF0aWMgdm9pZCBtbl9yZWxlYXNlKHN0cnVjdCBtbXVfbm90 aWZpZXIgKm1uLAo+IGRpZmYgLS1naXQgYS9pbmNsdWRlL2xpbnV4L2t2bV9ob3N0LmggYi9pbmNs dWRlL2xpbnV4L2t2bV9ob3N0LmgKPiBpbmRleCA0ZWU3YmM1NDhhODMuLmU0MTgxMDYzZTc1NSAx MDA2NDQKPiAtLS0gYS9pbmNsdWRlL2xpbnV4L2t2bV9ob3N0LmgKPiArKysgYi9pbmNsdWRlL2xp bnV4L2t2bV9ob3N0LmgKPiBAQCAtMTI3NSw3ICsxMjc1LDcgQEAgc3RhdGljIGlubGluZSBsb25n IGt2bV9hcmNoX3ZjcHVfYXN5bmNfaW9jdGwoc3RydWN0IGZpbGUgKmZpbHAsCj4gICB9Cj4gICAj ZW5kaWYgLyogQ09ORklHX0hBVkVfS1ZNX1ZDUFVfQVNZTkNfSU9DVEwgKi8KPiAgIAo+IC12b2lk IGt2bV9hcmNoX21tdV9ub3RpZmllcl9pbnZhbGlkYXRlX3JhbmdlKHN0cnVjdCBrdm0gKmt2bSwK PiAraW50IGt2bV9hcmNoX21tdV9ub3RpZmllcl9pbnZhbGlkYXRlX3JhbmdlKHN0cnVjdCBrdm0g Kmt2bSwKPiAgIAkJdW5zaWduZWQgbG9uZyBzdGFydCwgdW5zaWduZWQgbG9uZyBlbmQpOwo+ICAg Cj4gICAjaWZkZWYgQ09ORklHX0hBVkVfS1ZNX1ZDUFVfUlVOX1BJRF9DSEFOR0UKPiBkaWZmIC0t Z2l0IGEvaW5jbHVkZS9saW51eC9tbXVfbm90aWZpZXIuaCBiL2luY2x1ZGUvbGludXgvbW11X25v dGlmaWVyLmgKPiBpbmRleCAzOTJlNmFmODI3MDEuLjM2OTg2NzUwMWJlZCAxMDA2NDQKPiAtLS0g YS9pbmNsdWRlL2xpbnV4L21tdV9ub3RpZmllci5oCj4gKysrIGIvaW5jbHVkZS9saW51eC9tbXVf bm90aWZpZXIuaAo+IEBAIC0yMzAsNyArMjMwLDggQEAgZXh0ZXJuIGludCBfX21tdV9ub3RpZmll cl90ZXN0X3lvdW5nKHN0cnVjdCBtbV9zdHJ1Y3QgKm1tLAo+ICAgZXh0ZXJuIHZvaWQgX19tbXVf bm90aWZpZXJfY2hhbmdlX3B0ZShzdHJ1Y3QgbW1fc3RydWN0ICptbSwKPiAgIAkJCQkgICAgICB1 bnNpZ25lZCBsb25nIGFkZHJlc3MsIHB0ZV90IHB0ZSk7Cj4gICBleHRlcm4gdm9pZCBfX21tdV9u b3RpZmllcl9pbnZhbGlkYXRlX3JhbmdlX3N0YXJ0KHN0cnVjdCBtbV9zdHJ1Y3QgKm1tLAo+IC0J CQkJICB1bnNpZ25lZCBsb25nIHN0YXJ0LCB1bnNpZ25lZCBsb25nIGVuZCk7Cj4gKwkJCQkgIHVu c2lnbmVkIGxvbmcgc3RhcnQsIHVuc2lnbmVkIGxvbmcgZW5kLAo+ICsJCQkJICBib29sIGJsb2Nr YWJsZSk7Cj4gICBleHRlcm4gdm9pZCBfX21tdV9ub3RpZmllcl9pbnZhbGlkYXRlX3JhbmdlX2Vu ZChzdHJ1Y3QgbW1fc3RydWN0ICptbSwKPiAgIAkJCQkgIHVuc2lnbmVkIGxvbmcgc3RhcnQsIHVu c2lnbmVkIGxvbmcgZW5kLAo+ICAgCQkJCSAgYm9vbCBvbmx5X2VuZCk7Cj4gQEAgLTI4MSw3ICsy ODIsMTcgQEAgc3RhdGljIGlubGluZSB2b2lkIG1tdV9ub3RpZmllcl9pbnZhbGlkYXRlX3Jhbmdl X3N0YXJ0KHN0cnVjdCBtbV9zdHJ1Y3QgKm1tLAo+ICAgCQkJCSAgdW5zaWduZWQgbG9uZyBzdGFy dCwgdW5zaWduZWQgbG9uZyBlbmQpCj4gICB7Cj4gICAJaWYgKG1tX2hhc19ub3RpZmllcnMobW0p KQo+IC0JCV9fbW11X25vdGlmaWVyX2ludmFsaWRhdGVfcmFuZ2Vfc3RhcnQobW0sIHN0YXJ0LCBl bmQpOwo+ICsJCV9fbW11X25vdGlmaWVyX2ludmFsaWRhdGVfcmFuZ2Vfc3RhcnQobW0sIHN0YXJ0 LCBlbmQsIHRydWUpOwo+ICt9Cj4gKwo+ICtzdGF0aWMgaW5saW5lIGludCBtbXVfbm90aWZpZXJf aW52YWxpZGF0ZV9yYW5nZV9zdGFydF9ub25ibG9jayhzdHJ1Y3QgbW1fc3RydWN0ICptbSwKPiAr CQkJCSAgdW5zaWduZWQgbG9uZyBzdGFydCwgdW5zaWduZWQgbG9uZyBlbmQpCj4gK3sKPiArCWlu dCByZXQgPSAwOwo+ICsJaWYgKG1tX2hhc19ub3RpZmllcnMobW0pKQo+ICsJCXJldCA9IF9fbW11 X25vdGlmaWVyX2ludmFsaWRhdGVfcmFuZ2Vfc3RhcnQobW0sIHN0YXJ0LCBlbmQsIGZhbHNlKTsK PiArCj4gKwlyZXR1cm4gcmV0Owo+ICAgfQo+ICAgCj4gICBzdGF0aWMgaW5saW5lIHZvaWQgbW11 X25vdGlmaWVyX2ludmFsaWRhdGVfcmFuZ2VfZW5kKHN0cnVjdCBtbV9zdHJ1Y3QgKm1tLAo+IGRp ZmYgLS1naXQgYS9tbS9obW0uYyBiL21tL2htbS5jCj4gaW5kZXggZGU3YjZiZjc3MjAxLi44MWZk NTdiZDI2MzQgMTAwNjQ0Cj4gLS0tIGEvbW0vaG1tLmMKPiArKysgYi9tbS9obW0uYwo+IEBAIC0x NzcsMTYgKzE3NywxOSBAQCBzdGF0aWMgdm9pZCBobW1fcmVsZWFzZShzdHJ1Y3QgbW11X25vdGlm aWVyICptbiwgc3RydWN0IG1tX3N0cnVjdCAqbW0pCj4gICAJdXBfd3JpdGUoJmhtbS0+bWlycm9y c19zZW0pOwo+ICAgfQo+ICAgCj4gLXN0YXRpYyB2b2lkIGhtbV9pbnZhbGlkYXRlX3JhbmdlX3N0 YXJ0KHN0cnVjdCBtbXVfbm90aWZpZXIgKm1uLAo+ICtzdGF0aWMgaW50IGhtbV9pbnZhbGlkYXRl X3JhbmdlX3N0YXJ0KHN0cnVjdCBtbXVfbm90aWZpZXIgKm1uLAo+ICAgCQkJCSAgICAgICBzdHJ1 Y3QgbW1fc3RydWN0ICptbSwKPiAgIAkJCQkgICAgICAgdW5zaWduZWQgbG9uZyBzdGFydCwKPiAt CQkJCSAgICAgICB1bnNpZ25lZCBsb25nIGVuZCkKPiArCQkJCSAgICAgICB1bnNpZ25lZCBsb25n IGVuZCwKPiArCQkJCSAgICAgICBib29sIGJsb2NrYWJsZSkKPiAgIHsKPiAgIAlzdHJ1Y3QgaG1t ICpobW0gPSBtbS0+aG1tOwo+ICAgCj4gICAJVk1fQlVHX09OKCFobW0pOwo+ICAgCj4gICAJYXRv bWljX2luYygmaG1tLT5zZXF1ZW5jZSk7Cj4gKwo+ICsJcmV0dXJuIDA7Cj4gICB9Cj4gICAKPiAg IHN0YXRpYyB2b2lkIGhtbV9pbnZhbGlkYXRlX3JhbmdlX2VuZChzdHJ1Y3QgbW11X25vdGlmaWVy ICptbiwKPiBkaWZmIC0tZ2l0IGEvbW0vbW11X25vdGlmaWVyLmMgYi9tbS9tbXVfbm90aWZpZXIu Ywo+IGluZGV4IGVmZjZiODhhOTkzZi4uMzBjYzQzMTIxZGE5IDEwMDY0NAo+IC0tLSBhL21tL21t dV9ub3RpZmllci5jCj4gKysrIGIvbW0vbW11X25vdGlmaWVyLmMKPiBAQCAtMTc0LDE4ICsxNzQs MjUgQEAgdm9pZCBfX21tdV9ub3RpZmllcl9jaGFuZ2VfcHRlKHN0cnVjdCBtbV9zdHJ1Y3QgKm1t LCB1bnNpZ25lZCBsb25nIGFkZHJlc3MsCj4gICAJc3JjdV9yZWFkX3VubG9jaygmc3JjdSwgaWQp Owo+ICAgfQo+ICAgCj4gLXZvaWQgX19tbXVfbm90aWZpZXJfaW52YWxpZGF0ZV9yYW5nZV9zdGFy dChzdHJ1Y3QgbW1fc3RydWN0ICptbSwKPiAtCQkJCSAgdW5zaWduZWQgbG9uZyBzdGFydCwgdW5z aWduZWQgbG9uZyBlbmQpCj4gK2ludCBfX21tdV9ub3RpZmllcl9pbnZhbGlkYXRlX3JhbmdlX3N0 YXJ0KHN0cnVjdCBtbV9zdHJ1Y3QgKm1tLAo+ICsJCQkJICB1bnNpZ25lZCBsb25nIHN0YXJ0LCB1 bnNpZ25lZCBsb25nIGVuZCwKPiArCQkJCSAgYm9vbCBibG9ja2FibGUpCj4gICB7Cj4gICAJc3Ry dWN0IG1tdV9ub3RpZmllciAqbW47Cj4gKwlpbnQgcmV0ID0gMDsKPiAgIAlpbnQgaWQ7Cj4gICAK PiAgIAlpZCA9IHNyY3VfcmVhZF9sb2NrKCZzcmN1KTsKPiAgIAlobGlzdF9mb3JfZWFjaF9lbnRy eV9yY3UobW4sICZtbS0+bW11X25vdGlmaWVyX21tLT5saXN0LCBobGlzdCkgewo+IC0JCWlmICht bi0+b3BzLT5pbnZhbGlkYXRlX3JhbmdlX3N0YXJ0KQo+IC0JCQltbi0+b3BzLT5pbnZhbGlkYXRl X3JhbmdlX3N0YXJ0KG1uLCBtbSwgc3RhcnQsIGVuZCk7Cj4gKwkJaWYgKG1uLT5vcHMtPmludmFs aWRhdGVfcmFuZ2Vfc3RhcnQpIHsKPiArCQkJaW50IF9yZXQgPSBtbi0+b3BzLT5pbnZhbGlkYXRl X3JhbmdlX3N0YXJ0KG1uLCBtbSwgc3RhcnQsIGVuZCwgYmxvY2thYmxlKTsKPiArCQkJaWYgKF9y ZXQpCj4gKwkJCQlyZXQgPSBfcmV0Owo+ICsJCX0KPiAgIAl9Cj4gICAJc3JjdV9yZWFkX3VubG9j aygmc3JjdSwgaWQpOwo+ICsKPiArCXJldHVybiByZXQ7Cj4gICB9Cj4gICBFWFBPUlRfU1lNQk9M X0dQTChfX21tdV9ub3RpZmllcl9pbnZhbGlkYXRlX3JhbmdlX3N0YXJ0KTsKPiAgIAo+IGRpZmYg LS1naXQgYS9tbS9vb21fa2lsbC5jIGIvbW0vb29tX2tpbGwuYwo+IGluZGV4IDg0MDgxZTc3YmM1 MS4uN2UwYzZlNzhhZTVjIDEwMDY0NAo+IC0tLSBhL21tL29vbV9raWxsLmMKPiArKysgYi9tbS9v b21fa2lsbC5jCj4gQEAgLTQ3OSw5ICs0NzksMTAgQEAgc3RhdGljIERFQ0xBUkVfV0FJVF9RVUVV RV9IRUFEKG9vbV9yZWFwZXJfd2FpdCk7Cj4gICBzdGF0aWMgc3RydWN0IHRhc2tfc3RydWN0ICpv b21fcmVhcGVyX2xpc3Q7Cj4gICBzdGF0aWMgREVGSU5FX1NQSU5MT0NLKG9vbV9yZWFwZXJfbG9j ayk7Cj4gICAKPiAtdm9pZCBfX29vbV9yZWFwX3Rhc2tfbW0oc3RydWN0IG1tX3N0cnVjdCAqbW0p Cj4gK2Jvb2wgX19vb21fcmVhcF90YXNrX21tKHN0cnVjdCBtbV9zdHJ1Y3QgKm1tKQo+ICAgewo+ ICAgCXN0cnVjdCB2bV9hcmVhX3N0cnVjdCAqdm1hOwo+ICsJYm9vbCByZXQgPSB0cnVlOwo+ICAg Cj4gICAJLyoKPiAgIAkgKiBUZWxsIGFsbCB1c2VycyBvZiBnZXRfdXNlci9jb3B5X2Zyb21fdXNl ciBldGMuLi4gdGhhdCB0aGUgY29udGVudAo+IEBAIC01MTEsMTIgKzUxMiwxNyBAQCB2b2lkIF9f b29tX3JlYXBfdGFza19tbShzdHJ1Y3QgbW1fc3RydWN0ICptbSkKPiAgIAkJCXN0cnVjdCBtbXVf Z2F0aGVyIHRsYjsKPiAgIAo+ICAgCQkJdGxiX2dhdGhlcl9tbXUoJnRsYiwgbW0sIHN0YXJ0LCBl bmQpOwo+IC0JCQltbXVfbm90aWZpZXJfaW52YWxpZGF0ZV9yYW5nZV9zdGFydChtbSwgc3RhcnQs IGVuZCk7Cj4gKwkJCWlmIChtbXVfbm90aWZpZXJfaW52YWxpZGF0ZV9yYW5nZV9zdGFydF9ub25i bG9jayhtbSwgc3RhcnQsIGVuZCkpIHsKPiArCQkJCXJldCA9IGZhbHNlOwo+ICsJCQkJY29udGlu dWU7Cj4gKwkJCX0KPiAgIAkJCXVubWFwX3BhZ2VfcmFuZ2UoJnRsYiwgdm1hLCBzdGFydCwgZW5k LCBOVUxMKTsKPiAgIAkJCW1tdV9ub3RpZmllcl9pbnZhbGlkYXRlX3JhbmdlX2VuZChtbSwgc3Rh cnQsIGVuZCk7Cj4gICAJCQl0bGJfZmluaXNoX21tdSgmdGxiLCBzdGFydCwgZW5kKTsKPiAgIAkJ fQo+ICAgCX0KPiArCj4gKwlyZXR1cm4gcmV0Owo+ICAgfQo+ICAgCj4gICBzdGF0aWMgYm9vbCBv b21fcmVhcF90YXNrX21tKHN0cnVjdCB0YXNrX3N0cnVjdCAqdHNrLCBzdHJ1Y3QgbW1fc3RydWN0 ICptbSkKPiBAQCAtNTQ1LDE4ICs1NTEsNiBAQCBzdGF0aWMgYm9vbCBvb21fcmVhcF90YXNrX21t KHN0cnVjdCB0YXNrX3N0cnVjdCAqdHNrLCBzdHJ1Y3QgbW1fc3RydWN0ICptbSkKPiAgIAkJZ290 byB1bmxvY2tfb29tOwo+ICAgCX0KPiAgIAo+IC0JLyoKPiAtCSAqIElmIHRoZSBtbSBoYXMgaW52 YWxpZGF0ZV97c3RhcnQsZW5kfSgpIG5vdGlmaWVycyB0aGF0IGNvdWxkIGJsb2NrLAo+IC0JICog c2xlZXAgdG8gZ2l2ZSB0aGUgb29tIHZpY3RpbSBzb21lIG1vcmUgdGltZS4KPiAtCSAqIFRPRE86 IHdlIHJlYWxseSB3YW50IHRvIGdldCByaWQgb2YgdGhpcyB1Z2x5IGhhY2sgYW5kIG1ha2Ugc3Vy ZSB0aGF0Cj4gLQkgKiBub3RpZmllcnMgY2Fubm90IGJsb2NrIGZvciB1bmJvdW5kZWQgYW1vdW50 IG9mIHRpbWUKPiAtCSAqLwo+IC0JaWYgKG1tX2hhc19ibG9ja2FibGVfaW52YWxpZGF0ZV9ub3Rp ZmllcnMobW0pKSB7Cj4gLQkJdXBfcmVhZCgmbW0tPm1tYXBfc2VtKTsKPiAtCQlzY2hlZHVsZV90 aW1lb3V0X2lkbGUoSFopOwo+IC0JCWdvdG8gdW5sb2NrX29vbTsKPiAtCX0KPiAtCj4gICAJLyoK PiAgIAkgKiBNTUZfT09NX1NLSVAgaXMgc2V0IGJ5IGV4aXRfbW1hcCB3aGVuIHRoZSBPT00gcmVh cGVyIGNhbid0Cj4gICAJICogd29yayBvbiB0aGUgbW0gYW55bW9yZS4gVGhlIGNoZWNrIGZvciBN TUZfT09NX1NLSVAgbXVzdCBydW4KPiBAQCAtNTcxLDcgKzU2NSwxMiBAQCBzdGF0aWMgYm9vbCBv b21fcmVhcF90YXNrX21tKHN0cnVjdCB0YXNrX3N0cnVjdCAqdHNrLCBzdHJ1Y3QgbW1fc3RydWN0 ICptbSkKPiAgIAo+ICAgCXRyYWNlX3N0YXJ0X3Rhc2tfcmVhcGluZyh0c2stPnBpZCk7Cj4gICAK PiAtCV9fb29tX3JlYXBfdGFza19tbShtbSk7Cj4gKwkvKiBmYWlsZWQgdG8gcmVhcCBwYXJ0IG9m IHRoZSBhZGRyZXNzIHNwYWNlLiBUcnkgYWdhaW4gbGF0ZXIgKi8KPiArCWlmICghX19vb21fcmVh cF90YXNrX21tKG1tKSkgewo+ICsJCXVwX3JlYWQoJm1tLT5tbWFwX3NlbSk7Cj4gKwkJcmV0ID0g ZmFsc2U7Cj4gKwkJZ290byBvdXRfdW5sb2NrOwo+ICsJfQo+ICAgCj4gICAJcHJfaW5mbygib29t X3JlYXBlcjogcmVhcGVkIHByb2Nlc3MgJWQgKCVzKSwgbm93IGFub24tcnNzOiVsdWtCLCBmaWxl LXJzczolbHVrQiwgc2htZW0tcnNzOiVsdWtCXG4iLAo+ICAgCQkJdGFza19waWRfbnIodHNrKSwg dHNrLT5jb21tLAo+IGRpZmYgLS1naXQgYS92aXJ0L2t2bS9rdm1fbWFpbi5jIGIvdmlydC9rdm0v a3ZtX21haW4uYwo+IGluZGV4IGFkYTIxZjQ3ZjIyYi4uNmY3ZTcwOWQyOTQ0IDEwMDY0NAo+IC0t LSBhL3ZpcnQva3ZtL2t2bV9tYWluLmMKPiArKysgYi92aXJ0L2t2bS9rdm1fbWFpbi5jCj4gQEAg LTEzNSw3ICsxMzUsNyBAQCBzdGF0aWMgdm9pZCBrdm1fdWV2ZW50X25vdGlmeV9jaGFuZ2UodW5z aWduZWQgaW50IHR5cGUsIHN0cnVjdCBrdm0gKmt2bSk7Cj4gICBzdGF0aWMgdW5zaWduZWQgbG9u ZyBsb25nIGt2bV9jcmVhdGV2bV9jb3VudDsKPiAgIHN0YXRpYyB1bnNpZ25lZCBsb25nIGxvbmcg a3ZtX2FjdGl2ZV92bXM7Cj4gICAKPiAtX193ZWFrIHZvaWQga3ZtX2FyY2hfbW11X25vdGlmaWVy X2ludmFsaWRhdGVfcmFuZ2Uoc3RydWN0IGt2bSAqa3ZtLAo+ICtfX3dlYWsgaW50IGt2bV9hcmNo X21tdV9ub3RpZmllcl9pbnZhbGlkYXRlX3JhbmdlKHN0cnVjdCBrdm0gKmt2bSwKPiAgIAkJdW5z aWduZWQgbG9uZyBzdGFydCwgdW5zaWduZWQgbG9uZyBlbmQpCj4gICB7Cj4gICB9Cj4gQEAgLTM1 NCwxMyArMzU0LDE1IEBAIHN0YXRpYyB2b2lkIGt2bV9tbXVfbm90aWZpZXJfY2hhbmdlX3B0ZShz dHJ1Y3QgbW11X25vdGlmaWVyICptbiwKPiAgIAlzcmN1X3JlYWRfdW5sb2NrKCZrdm0tPnNyY3Us IGlkeCk7Cj4gICB9Cj4gICAKPiAtc3RhdGljIHZvaWQga3ZtX21tdV9ub3RpZmllcl9pbnZhbGlk YXRlX3JhbmdlX3N0YXJ0KHN0cnVjdCBtbXVfbm90aWZpZXIgKm1uLAo+ICtzdGF0aWMgaW50IGt2 bV9tbXVfbm90aWZpZXJfaW52YWxpZGF0ZV9yYW5nZV9zdGFydChzdHJ1Y3QgbW11X25vdGlmaWVy ICptbiwKPiAgIAkJCQkJCSAgICBzdHJ1Y3QgbW1fc3RydWN0ICptbSwKPiAgIAkJCQkJCSAgICB1 bnNpZ25lZCBsb25nIHN0YXJ0LAo+IC0JCQkJCQkgICAgdW5zaWduZWQgbG9uZyBlbmQpCj4gKwkJ CQkJCSAgICB1bnNpZ25lZCBsb25nIGVuZCwKPiArCQkJCQkJICAgIGJvb2wgYmxvY2thYmxlKQo+ ICAgewo+ICAgCXN0cnVjdCBrdm0gKmt2bSA9IG1tdV9ub3RpZmllcl90b19rdm0obW4pOwo+ICAg CWludCBuZWVkX3RsYl9mbHVzaCA9IDAsIGlkeDsKPiArCWludCByZXQ7Cj4gICAKPiAgIAlpZHgg PSBzcmN1X3JlYWRfbG9jaygma3ZtLT5zcmN1KTsKPiAgIAlzcGluX2xvY2soJmt2bS0+bW11X2xv Y2spOwo+IEBAIC0zNzgsOSArMzgwLDExIEBAIHN0YXRpYyB2b2lkIGt2bV9tbXVfbm90aWZpZXJf aW52YWxpZGF0ZV9yYW5nZV9zdGFydChzdHJ1Y3QgbW11X25vdGlmaWVyICptbiwKPiAgIAo+ICAg CXNwaW5fdW5sb2NrKCZrdm0tPm1tdV9sb2NrKTsKPiAgIAo+IC0Ja3ZtX2FyY2hfbW11X25vdGlm aWVyX2ludmFsaWRhdGVfcmFuZ2Uoa3ZtLCBzdGFydCwgZW5kKTsKPiArCXJldCA9IGt2bV9hcmNo X21tdV9ub3RpZmllcl9pbnZhbGlkYXRlX3JhbmdlKGt2bSwgc3RhcnQsIGVuZCwgYmxvY2thYmxl KTsKPiAgIAo+ICAgCXNyY3VfcmVhZF91bmxvY2soJmt2bS0+c3JjdSwgaWR4KTsKPiArCj4gKwly ZXR1cm4gcmV0Owo+ICAgfQo+ICAgCj4gICBzdGF0aWMgdm9pZCBrdm1fbW11X25vdGlmaWVyX2lu dmFsaWRhdGVfcmFuZ2VfZW5kKHN0cnVjdCBtbXVfbm90aWZpZXIgKm1uLAoKX19fX19fX19fX19f X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18KSW50ZWwtZ2Z4IG1haWxpbmcgbGlz dApJbnRlbC1nZnhAbGlzdHMuZnJlZWRlc2t0b3Aub3JnCmh0dHBzOi8vbGlzdHMuZnJlZWRlc2t0 b3Aub3JnL21haWxtYW4vbGlzdGluZm8vaW50ZWwtZ2Z4Cg== From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-0.8 required=3.0 tests=DKIM_SIGNED,DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS,MAILING_LIST_MULTI,SPF_PASS,T_DKIMWL_WL_MED, URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 98B7DC43144 for ; Fri, 22 Jun 2018 15:13:32 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 2768224471 for ; Fri, 22 Jun 2018 15:13:32 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=amdcloud.onmicrosoft.com header.i=@amdcloud.onmicrosoft.com header.b="HkG8oZCN" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 2768224471 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=amd.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S933580AbeFVPNZ (ORCPT ); Fri, 22 Jun 2018 11:13:25 -0400 Received: from mail-eopbgr720089.outbound.protection.outlook.com ([40.107.72.89]:40728 "EHLO NAM05-CO1-obe.outbound.protection.outlook.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1751233AbeFVPNX (ORCPT ); Fri, 22 Jun 2018 11:13:23 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amdcloud.onmicrosoft.com; s=selector1-amd-com; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=QWMjTfWCkGLrLGYcc61uz3uGFVgg3zZNpQaM1a7ifFE=; b=HkG8oZCNWUpnKOhzIjcwl5ZktXw8lD3+Zx0hYu0ceQCCKb6azWC7ejbsn8DK6NikNryfW2v6vKM6UZ7EIBjnzcGJja9BWl+Aq09jMzgH6Y/fgPU7ReJ6/qyPk2oQly2VCfTcuIkeZQwVGa7lBTBG6b/4gc4n7FEEny3f4gSXinE= Authentication-Results: spf=none (sender IP is ) smtp.mailfrom=Christian.Koenig@amd.com; Received: from [IPv6:2a02:908:1257:4460:1ab8:55c1:a639:6740] (2a02:908:1257:4460:1ab8:55c1:a639:6740) by BN6PR12MB1714.namprd12.prod.outlook.com (2603:10b6:404:106::11) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.884.21; Fri, 22 Jun 2018 15:13:13 +0000 Subject: Re: [RFC PATCH] mm, oom: distinguish blockable mode for mmu notifiers To: Michal Hocko , LKML Cc: Michal Hocko , "David (ChunMing) Zhou" , Paolo Bonzini , =?UTF-8?B?UmFkaW0gS3LEjW3DocWZ?= , Alex Deucher , David Airlie , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Doug Ledford , Jason Gunthorpe , Mike Marciniszyn , Dennis Dalessandro , Sudeep Dutt , Ashutosh Dixit , Dimitri Sivanich , Boris Ostrovsky , Juergen Gross , =?UTF-8?B?SsOpcsO0bWUgR2xpc3Nl?= , Andrea Arcangeli , kvm@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-rdma@vger.kernel.org, xen-devel@lists.xenproject.org, linux-mm@kvack.org, David Rientjes , Felix Kuehling References: <20180622150242.16558-1-mhocko@kernel.org> From: =?UTF-8?Q?Christian_K=c3=b6nig?= Message-ID: <0aa9f695-5702-6704-9462-7779cbfdb3fd@amd.com> Date: Fri, 22 Jun 2018 17:13:02 +0200 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.8.0 MIME-Version: 1.0 In-Reply-To: <20180622150242.16558-1-mhocko@kernel.org> Content-Type: text/plain; charset=utf-8; format=flowed Content-Transfer-Encoding: 8bit Content-Language: en-US X-Originating-IP: [2a02:908:1257:4460:1ab8:55c1:a639:6740] X-ClientProxiedBy: AM5P194CA0005.EURP194.PROD.OUTLOOK.COM (2603:10a6:203:8f::15) To BN6PR12MB1714.namprd12.prod.outlook.com (2603:10b6:404:106::11) X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: f25f5291-9497-4351-ea62-08d5d852aef1 X-MS-Office365-Filtering-HT: Tenant X-Microsoft-Antispam: UriScan:;BCL:0;PCL:0;RULEID:(7020095)(4652020)(5600026)(711020)(48565401081)(2017052603328)(7153060)(7193020);SRVR:BN6PR12MB1714; X-Microsoft-Exchange-Diagnostics: 1;BN6PR12MB1714;3:wmCUPy1h/QaF540gpjcHA19aJHvNtIyvE2RONKUk/3idCSUbmnjyUQwhONWNFCi/oC7/ZmAaNAf027zTXPEWtzeNFMv/8jBYZ7L5tQAXDEDxLcYghwtun9zIiaWPdvdF9TYuw37OLQPnB5KdhvXN7gV7VKjlUfbuLY+xNMGqalUTeNjjOUO9iP4Pqehzb/PvAi16ocU31GnCsCwWz6yEBKU6qDMS8urUOiChzcCOD0JbLDkFJUqWZ8dphk8cuavi;25:OWw7DUDykENsaYdrDtAM3S76knxl5tJN1kDK81bEQ/2jx4AV7oUhIEou+MivvaX5jdnItY2YNPzO7vNt5PCOWkYpdWu2j4g5EhKeOIHxz3UfKnip/qtgbaHc/SXyelKMXSf74NVoVSHgSu+GamaRP7fHHfNVi2vr/+F3UY05G4trw15p9JvGnJLWQ7yZehVITkF/bFG/jsCKXuAuZ8DELePjtfRyKWGSEQhkLVGQncoeL24blcTegHujAqGCWZ/xdv6BdeQl66jyhYIZ7ts06js0Y5SMT1tsSqk7H3p697Ehvheu6C+Oivqv7gwoBLOTCdlP5JhxJt8I09rSEMOdFw==;31:xoZ9f664HrnCbkSFcMBAt3hz4N/RHehMKyA99EHpMC1tpzISjSnD2hU49aZMRCF6kvydzh+avX2s6cNFEE0//oiBSpZ/D2EiImKMWQ/aGmXORvIjoKSGJRfrKvUQ/5v5LjlUYHuwMmt1byoZ2CZjBYXhyD4PbgPwcXJNdCr6mEp6QvmotVF4TRyIsZnFkF+gHTNNRcmkejp8OkKgd26btNNnI90ahGrJJjREmBwR3/U= X-MS-TrafficTypeDiagnostic: BN6PR12MB1714: X-Microsoft-Exchange-Diagnostics: 1;BN6PR12MB1714;20:bAa5KuMtQwg1Q8FAFdXaCkMQZbFrogHS7clx+gQ7ij7qtjycOTSQhJoQ0hV9mDtrwWEQJyBpcbtz3a1TkYvxPHR7LgFpigYvG8e3nTw15dvgKXvvaffHoSkGIKYdhBaebTK11tuLVaVNMy4pHtkw45Ve3HRhVg1rthCLFyKtZMKKhjrZqY3RQFeIEg1yCesWkbLJZXM0LUEVwL4QkrGV70fLQgBTjBm07sPhH+x+Q+FfXHFFHmb1jjMkwLQCI8gLDcjSC0idBqfdHB9MC+OAgwxFI22NggL4l/hIN4+cfDt39sZ7wJdBeIV1W3uT+ROEfVNKhgw8sJsxgAhp5EeTRUY8zZhHgk/lACqPGRr6JXAY0cG6r3HNvIBp10TvYw8qVNysyLSUN9qZS6R9VLc6hrsZBLjmkfZLajgFfOLI+jnFG7GHspBz2TLp8jt9n/bQ2B+b0vx5X11rhiagAAieMRsr/3mcRTy9LaoiXT7MOKTE28jR3OMq4w1RC2Qhormx X-Microsoft-Antispam-PRVS: X-Exchange-Antispam-Report-Test: UriScan:(9452136761055)(767451399110)(58134797142442)(211936372134217)(217544274631240)(153496737603132)(146099531331640)(228905959029699); X-MS-Exchange-SenderADCheck: 1 X-Exchange-Antispam-Report-CFA-Test: BCL:0;PCL:0;RULEID:(8211001083)(6040522)(2401047)(5005006)(8121501046)(93006095)(93001095)(10201501046)(3002001)(3231254)(944501410)(52105095)(6055026)(149027)(150027)(6041310)(201703131423095)(201702281528075)(20161123555045)(201703061421075)(201703061406153)(20161123562045)(20161123560045)(20161123564045)(20161123558120)(6072148)(201708071742011)(7699016);SRVR:BN6PR12MB1714;BCL:0;PCL:0;RULEID:;SRVR:BN6PR12MB1714; X-Microsoft-Exchange-Diagnostics: 1;BN6PR12MB1714;4:XKaFozp9vCYSA79H7mBnKi9pavTy6ztjKIHKVXoII6t/nzzy8ts7UQN01cLQLjF9Ng2oOo0jbHnKVxegTS0RhYg9t4SrdkNnAnNCcj5qvO16aUyY2OaXf7FdRcsI9nIfb9sbF91Fa4qmcclFurVt2j8MxJJ6ntskcaXKMuByVdSv1rkjQwPo/8K5IEZ48KWtQDm4EpuvS42AaLTnlKGv0cTYdwEH5/B66kNQdDc4sz+CEGuVwvLoEzv4SP1dAM3qhg4JVoOpM9w+fCNbkohX7v8KyXxxfJYC0lqhNtIasdrOiBs78w6rEZNqLB7qz51W+PYm9I9/M7Us5Oi3LD2cSqNFSPtdZAOTIUklWympcSAYlCDNeAwEIuiIJYpBAy7rMvoUVW80wP13Hpm7J3of05x+Tbl36jKreROdCF8SOWKqss7EQwTHqHrff7h6142BDnFkJ5wH2OdgZzzzUrcXls2/V/uca9Mwy2ZxMFm4baWNnJEqUAvzDvJMJE26urqVljvu6KB/GcSji5oJXPJdBX0Jt+U6sHlgkRdDwdCVQAA= X-Forefront-PRVS: 071156160B X-Forefront-Antispam-Report: SFV:NSPM;SFS:(10009020)(396003)(376002)(366004)(346002)(39860400002)(39380400002)(199004)(189003)(52314003)(6666003)(5660300001)(6116002)(2870700001)(105586002)(7416002)(1706002)(72206003)(2906002)(476003)(36756003)(486006)(97736004)(478600001)(46003)(5890100001)(31686004)(65826007)(68736007)(64126003)(76176011)(31696002)(81156014)(8936002)(65956001)(4326008)(11346002)(229853002)(81166006)(8676002)(59450400001)(575784001)(6486002)(316002)(446003)(86362001)(65806001)(7736002)(52396003)(106356001)(52116002)(386003)(23676004)(2486003)(52146003)(305945005)(58126008)(54906003)(110136005)(53936002)(47776003)(6246003)(67846002)(16526019)(25786009)(2616005)(50466002);DIR:OUT;SFP:1101;SCL:1;SRVR:BN6PR12MB1714;H:[IPv6:2a02:908:1257:4460:1ab8:55c1:a639:6740];FPR:;SPF:None;LANG:en;PTR:InfoNoRecords;A:1;MX:1; Received-SPF: None (protection.outlook.com: amd.com does not designate permitted sender hosts) X-Microsoft-Exchange-Diagnostics: =?utf-8?B?MTtCTjZQUjEyTUIxNzE0OzIzOjZFYTVDSFR4NUw1elNWdWVmTmxleDlZL3VT?= =?utf-8?B?bHdkbjVRT0NIN0UycWV4QWFScnNZUEZyWXRuY0ZrV0RhTkN2RWg4bVpQaGNr?= =?utf-8?B?MlkrL3N5K0NxVFp0VElPVmpFVURibzYzUXVRbVBabUxRYUR3Z0xoQWtvMzNu?= =?utf-8?B?ZXY4djdFUU4zaWdJZXBFamlTTmZaOGpOTi9vY2JlMEdkTFFoQThyMXgvbEN3?= =?utf-8?B?RXczbkswYmdjTGhjQTExT0RFd1BzaFg5TTl4cFErK3U4bERhQUhBcmd2WDJN?= =?utf-8?B?cGd3emo5ZFlLRVdDajhVbGVvZDRPZGVWczM0ZTRNSm1oQkFZczBPWEI5aHJp?= =?utf-8?B?SFpPanloYkF5c21PYXNBalE3T1N0R1BpWlcxS014OTRxc2lGSzErWTQxSmVu?= =?utf-8?B?cXh3TFB6Q0ZudTgxelA0TjN1NlR3TDZDN0wzcStsY0lzaUZ6T0E5QUhrYXQr?= =?utf-8?B?cnYxOVNld2ZBb0NVc0hpZktLdjJVcnVXMlliYjE2R1huUEsxbldJOGlSV20r?= =?utf-8?B?ajBkQTJrTlJDNVgybFFDb25FQUdXZ3ZpTW5CZWhuUkZtbGRIbGgydlRnWHJX?= =?utf-8?B?ZG1IVE5idHR3N3drZWMwOTUwWDFDT296YUQraXY3N0RDQ2lKcWxzNUliNDBX?= =?utf-8?B?eWJxcHJJK29JRUdUdnhlVnR0U1FTenhyYmFtWHhkRk4xK1NQMjR6eDEvV2wv?= =?utf-8?B?Qzc2NDBzWlZaK2xnajBKY2M2N0FLbW5XamhrSDRsTnRuSWlGMi9GWFdERjFh?= =?utf-8?B?MkY2WlB1VUdsMEFhdGY5YTgxQUo1V1h3UnlMN2tQb1BjVEoyVnVOaHJWVCtY?= =?utf-8?B?VXdQUU0wK2VPSk10aWRoSkgwZldITGtIVW1QalU2RDN4elN5dXZYZHhaQ2pH?= =?utf-8?B?ZUFNVDNUZkMyQUVLalk2UG52akdKNEY2VWFxQjZBNFZuYU1BeGlRWTF5engx?= =?utf-8?B?U2NQeFcrRSsvMmpqNUp2OEsxU25lMzVOYzk3b045NTNSWmF3dmJtNzZKVndo?= =?utf-8?B?dHR2bU1sNWd0bGZIY2pRZDZsUENndGdsaXNJa29reUh2TlJpZG12cnU2MjNy?= =?utf-8?B?bTlUTUxGb1JEa3d1WXB5eWVuOXNxNGVZVU0weENOQUFRR1RldEhYR2grTVpZ?= =?utf-8?B?cmJ5QzJyWHlYYjgzc3ZyQkxXTEpZVjZIVnd1YmtXemdRN2V1ZmFIaktKZjdn?= =?utf-8?B?RUpkcUdXbFhZWjFQSVhSdzE1a0dkOVFLdUN6bkZnOW9KWS8zQXo0Q1Zueldl?= =?utf-8?B?ak95b1BQRnVZbXNGbTVrTElkUXVlWUdRK0RYNEN6MG9pV3k3SkcxYy9zU1Js?= =?utf-8?B?RjRYTHJobVVoTkNrVWV6UUFHL2Npbk1yZzh6ZlFGb2JvOTRuMERGSjdPVUIy?= =?utf-8?B?eG9ITWl3L2NqQ0cwK0t1OWZpUmU3amRMZ1hEdFNBKzdESGEra24xZHd1VXYz?= =?utf-8?B?cHU3akxXNkxpd2RVL0RxaEFrSithbVZZc3cybzkwQmRnYlg5OG5xWTA0Y1Z6?= =?utf-8?B?V2R3NWJCYXhLcDhUa2RiNEh1bEdJVmxHajBkdnB0ZFR6Q1pkOHZnMklLcnhS?= =?utf-8?B?NlB0Q2ZUR2NTb0lJRVh6bGFjSFVWRGVzRzdIVER5Und2VDlYYmFsM3h1eXFV?= =?utf-8?B?dEVEdzlMd2ZFeVhOakRYVGJDM0l2TUc1QWpJRDdSWXh3MXFZVm5rL3RaK1ha?= =?utf-8?B?N3RqeWcxeUwwQjJPMnIxMUdaWk8rVlNldGFiTGgyUHNvWXRQQjNQK2RydEdi?= =?utf-8?B?ZkllTGgxcHNhdWRQNTdxWnpPOWxTY2VIQUxaV1FUZzkvVG4xQnlDWEpKbUZG?= =?utf-8?B?TU91WFp6ZUpRQ0ZZK3JyUloxK1hySDZTUndjMzZjQzJPanBtd2JZMlpQd0V1?= =?utf-8?B?a1lxdmhtWXZKbjhjbVl5YVhJSnc4NGNRdUFuMXFYb3VsSU1mbFhlQzMxUVVF?= =?utf-8?B?UW5JRlBQblB1R2dwUlJTQXdwc3d1bTV3WG5BUnk2Mnd6WDRKRENDTFI2L2Fo?= =?utf-8?B?Z1lpV1AzRmR4aW5FL05PWlpVdmFhZmtSSGY3QT09?= X-Microsoft-Antispam-Message-Info: eVbpLJ2UghHrZU320ymf12UcTMBxjXbRyoSst79U188Lb/TnvyVq5ycW70mBlP/O1drqctT31rzhd6sykSikmXeTUIA8/KZl5l8BsQR6ibj15v/JPvPjS7iuFxWsKYAj9zqykChMJIPGXeD7xHdTeedeM3s1xUbSt3Sc0znufTdhOxcOk3GtYMgcyUIcy/PHiWrIp89y5F2SFatcGfe67zU7eBy5ZibAjqKPC6W7XLTvh7VXYoTQgSI3CnSwj2sRZXD2BiorylMpSrGQ811D0sLrxEL+1VFz+V/4Vj+vc4iPw2WQw24JsaWAY5EV9/nk6fhjGwdrvN3NavemxkOmtQ== X-Microsoft-Exchange-Diagnostics: 1;BN6PR12MB1714;6:279yDKs22PKmpgbXkOK50np+/aJb3jGjeBxH3x264hgnTgIvKpjTy/La5ynOenNIN7QN6LwpCqa//+Ed8MT0kaNopbf84BXNEjrJ4psiv0ihdLXZSJKRmvcoei0MFgF+P6VXqNOEvVkvRnoPryG8FH/gQqBm/ZGlRx3p8+7TBlN4lhV0bte7+AtCcEJ6kybuwdvb9ZfBaWDJDB7KT+82jp3NRGpQHAXi0jTL2lx4nj1GIPyay/rIbDOe2utFAN12bN4mFIfuJ5rVKW73rgOqOkst+yhy6uH6e1T9tabK2bTfziYsLvKBClxQJckxKhoi/orB/gAjXXcPVE1ooryavzNUYhsYCXPqCTuTVd39eZ4+RaIKbZVzehfB7fA3grcVQg9HCQlU+p555wi9Lcr/lZOd/cmTcOuU0sQnLWP2BMoqqLE3bJt9+f5RhY9jWed3EjapqhBVk8eCeS/aIioO0Q==;5:tFIZJLaAyQRbsbbBCuKtVwd6wjYdVITR+PxUivkbLEfh5q8Kr+lqdwdfW+59+ul6r6E732yusweLY7Ja6L6uqSZE06Nb1QWNSJcepnwi78dqoJiXqFBFc/uU/5tu8bOSphm0M2bW1z5+4kw8qkZFrzag7VaYVcmQVTcfOHpQu6A=;24:7TGQWXAcCxc+GA4iFUII7zb1vXMRwgQnNE2yaBVQ7CuasNPtXqkQ054ykGnRWxJtpD+AUkPZ1DFqXh5rYAakOXDdt3q3y0iELWDcrhPEErI= SpamDiagnosticOutput: 1:99 SpamDiagnosticMetadata: NSPM X-Microsoft-Exchange-Diagnostics: 1;BN6PR12MB1714;7:f4lyyt4veL1gTIMzmw6zLMec8Nifi7QelAjUsrdfNdliG7w8wcXaZeUkYLMJKzUjWk+xrhTMq7bZ0KlfHp0lOUbC2toqS0ThgQPPv8iXmy/4sVT/7w0hjdJHXia2z7enVT0G5YmEFrnAJUVNF3956rFUdxUpTAExvkKjmlj49Z9M9sUzwr5faViXXEGCS8ENJ6fbKRsYYQlcL8BubgbM9tB2NgtJyjdrwBQsGOZkj8qVOQBDQbLsvzmmAMPFno5y;20:AlHHUuhOEju3qJBMGmot5IS8BCiUYfiFuS0fH7ybQc7vF+KAPtZJBljwl3mQXrv5lcPpALS9PF0hZh42Np+1a4eANXa4ibOCprUtX9qS+7pxnE3yXQ2MQzf1iy5LRMirVl83IC9Bdt6zM+FCQEE+rpwUkLd+ZwHgJYW1ut+mUhTwWsGHjwXNUqZoTbGwpd+d7kvAGn6yXQHYRZS06BKSnIxleq2kQOjmu8fMDGcd9Y60RCBF4RyksECik5ukPT9L X-OriginatorOrg: amd.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2018 15:13:13.1332 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: f25f5291-9497-4351-ea62-08d5d852aef1 X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN6PR12MB1714 Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Hi Michal, [Adding Felix as well] Well first of all you have a misconception why at least the AMD graphics driver need to be able to sleep in an MMU notifier: We need to sleep because we need to wait for hardware operations to finish and *NOT* because we need to wait for locks. I'm not sure if your flag now means that you generally can't sleep in MMU notifiers any more, but if that's the case at least AMD hardware will break badly. In our case the approach of waiting for a short time for the process to be reaped and then select another victim actually sounds like the right thing to do. What we also already try to do is to abort hardware operations with the address space when we detect that the process is dying, but that can certainly be improved. Regards, Christian. Am 22.06.2018 um 17:02 schrieb Michal Hocko: > From: Michal Hocko > > There are several blockable mmu notifiers which might sleep in > mmu_notifier_invalidate_range_start and that is a problem for the > oom_reaper because it needs to guarantee a forward progress so it cannot > depend on any sleepable locks. Currently we simply back off and mark an > oom victim with blockable mmu notifiers as done after a short sleep. > That can result in selecting a new oom victim prematurely because the > previous one still hasn't torn its memory down yet. > > We can do much better though. Even if mmu notifiers use sleepable locks > there is no reason to automatically assume those locks are held. > Moreover most notifiers only care about a portion of the address > space. This patch handles the first part of the problem. > __mmu_notifier_invalidate_range_start gets a blockable flag and > callbacks are not allowed to sleep if the flag is set to false. This is > achieved by using trylock instead of the sleepable lock for most > callbacks. I think we can improve that even further because there is > a common pattern to do a range lookup first and then do something about > that. The first part can be done without a sleeping lock I presume. > > Anyway, what does the oom_reaper do with all that? We do not have to > fail right away. We simply retry if there is at least one notifier which > couldn't make any progress. A retry loop is already implemented to wait > for the mmap_sem and this is basically the same thing. > > Cc: "David (ChunMing) Zhou" > Cc: Paolo Bonzini > Cc: "Radim Krčmář" > Cc: Alex Deucher > Cc: "Christian König" > Cc: David Airlie > Cc: Jani Nikula > Cc: Joonas Lahtinen > Cc: Rodrigo Vivi > Cc: Doug Ledford > Cc: Jason Gunthorpe > Cc: Mike Marciniszyn > Cc: Dennis Dalessandro > Cc: Sudeep Dutt > Cc: Ashutosh Dixit > Cc: Dimitri Sivanich > Cc: Boris Ostrovsky > Cc: Juergen Gross > Cc: "Jérôme Glisse" > Cc: Andrea Arcangeli > Cc: kvm@vger.kernel.org (open list:KERNEL VIRTUAL MACHINE FOR X86 (KVM/x86)) > Cc: linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)) > Cc: amd-gfx@lists.freedesktop.org (open list:RADEON and AMDGPU DRM DRIVERS) > Cc: dri-devel@lists.freedesktop.org (open list:DRM DRIVERS) > Cc: intel-gfx@lists.freedesktop.org (open list:INTEL DRM DRIVERS (excluding Poulsbo, Moorestow...) > Cc: linux-rdma@vger.kernel.org (open list:INFINIBAND SUBSYSTEM) > Cc: xen-devel@lists.xenproject.org (moderated list:XEN HYPERVISOR INTERFACE) > Cc: linux-mm@kvack.org (open list:HMM - Heterogeneous Memory Management) > Reported-by: David Rientjes > Signed-off-by: Michal Hocko > --- > > Hi, > this is an RFC and not tested at all. I am not very familiar with the > mmu notifiers semantics very much so this is a crude attempt to achieve > what I need basically. It might be completely wrong but I would like > to discuss what would be a better way if that is the case. > > get_maintainers gave me quite large list of people to CC so I had to trim > it down. If you think I have forgot somebody, please let me know > > Any feedback is highly appreciated. > > arch/x86/kvm/x86.c | 7 ++++-- > drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c | 33 +++++++++++++++++++------ > drivers/gpu/drm/i915/i915_gem_userptr.c | 10 +++++--- > drivers/gpu/drm/radeon/radeon_mn.c | 15 ++++++++--- > drivers/infiniband/core/umem_odp.c | 15 ++++++++--- > drivers/infiniband/hw/hfi1/mmu_rb.c | 7 ++++-- > drivers/misc/mic/scif/scif_dma.c | 7 ++++-- > drivers/misc/sgi-gru/grutlbpurge.c | 7 ++++-- > drivers/xen/gntdev.c | 14 ++++++++--- > include/linux/kvm_host.h | 2 +- > include/linux/mmu_notifier.h | 15 +++++++++-- > mm/hmm.c | 7 ++++-- > mm/mmu_notifier.c | 15 ++++++++--- > mm/oom_kill.c | 29 +++++++++++----------- > virt/kvm/kvm_main.c | 12 ++++++--- > 15 files changed, 137 insertions(+), 58 deletions(-) > > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c > index 6bcecc325e7e..ac08f5d711be 100644 > --- a/arch/x86/kvm/x86.c > +++ b/arch/x86/kvm/x86.c > @@ -7203,8 +7203,9 @@ static void vcpu_load_eoi_exitmap(struct kvm_vcpu *vcpu) > kvm_x86_ops->load_eoi_exitmap(vcpu, eoi_exit_bitmap); > } > > -void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm, > - unsigned long start, unsigned long end) > +int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm, > + unsigned long start, unsigned long end, > + bool blockable) > { > unsigned long apic_address; > > @@ -7215,6 +7216,8 @@ void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm, > apic_address = gfn_to_hva(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT); > if (start <= apic_address && apic_address < end) > kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD); > + > + return 0; > } > > void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu) > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c > index 83e344fbb50a..d138a526feff 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c > @@ -136,12 +136,18 @@ void amdgpu_mn_unlock(struct amdgpu_mn *mn) > * > * Take the rmn read side lock. > */ > -static void amdgpu_mn_read_lock(struct amdgpu_mn *rmn) > +static int amdgpu_mn_read_lock(struct amdgpu_mn *rmn, bool blockable) > { > - mutex_lock(&rmn->read_lock); > + if (blockable) > + mutex_lock(&rmn->read_lock); > + else if (!mutex_trylock(&rmn->read_lock)) > + return -EAGAIN; > + > if (atomic_inc_return(&rmn->recursion) == 1) > down_read_non_owner(&rmn->lock); > mutex_unlock(&rmn->read_lock); > + > + return 0; > } > > /** > @@ -197,10 +203,11 @@ static void amdgpu_mn_invalidate_node(struct amdgpu_mn_node *node, > * We block for all BOs between start and end to be idle and > * unmap them by move them into system domain again. > */ > -static void amdgpu_mn_invalidate_range_start_gfx(struct mmu_notifier *mn, > +static int amdgpu_mn_invalidate_range_start_gfx(struct mmu_notifier *mn, > struct mm_struct *mm, > unsigned long start, > - unsigned long end) > + unsigned long end, > + bool blockable) > { > struct amdgpu_mn *rmn = container_of(mn, struct amdgpu_mn, mn); > struct interval_tree_node *it; > @@ -208,7 +215,11 @@ static void amdgpu_mn_invalidate_range_start_gfx(struct mmu_notifier *mn, > /* notification is exclusive, but interval is inclusive */ > end -= 1; > > - amdgpu_mn_read_lock(rmn); > + /* TODO we should be able to split locking for interval tree and > + * amdgpu_mn_invalidate_node > + */ > + if (amdgpu_mn_read_lock(rmn, blockable)) > + return -EAGAIN; > > it = interval_tree_iter_first(&rmn->objects, start, end); > while (it) { > @@ -219,6 +230,8 @@ static void amdgpu_mn_invalidate_range_start_gfx(struct mmu_notifier *mn, > > amdgpu_mn_invalidate_node(node, start, end); > } > + > + return 0; > } > > /** > @@ -233,10 +246,11 @@ static void amdgpu_mn_invalidate_range_start_gfx(struct mmu_notifier *mn, > * necessitates evicting all user-mode queues of the process. The BOs > * are restorted in amdgpu_mn_invalidate_range_end_hsa. > */ > -static void amdgpu_mn_invalidate_range_start_hsa(struct mmu_notifier *mn, > +static int amdgpu_mn_invalidate_range_start_hsa(struct mmu_notifier *mn, > struct mm_struct *mm, > unsigned long start, > - unsigned long end) > + unsigned long end, > + bool blockable) > { > struct amdgpu_mn *rmn = container_of(mn, struct amdgpu_mn, mn); > struct interval_tree_node *it; > @@ -244,7 +258,8 @@ static void amdgpu_mn_invalidate_range_start_hsa(struct mmu_notifier *mn, > /* notification is exclusive, but interval is inclusive */ > end -= 1; > > - amdgpu_mn_read_lock(rmn); > + if (amdgpu_mn_read_lock(rmn, blockable)) > + return -EAGAIN; > > it = interval_tree_iter_first(&rmn->objects, start, end); > while (it) { > @@ -262,6 +277,8 @@ static void amdgpu_mn_invalidate_range_start_hsa(struct mmu_notifier *mn, > amdgpu_amdkfd_evict_userptr(mem, mm); > } > } > + > + return 0; > } > > /** > diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c > index 854bd51b9478..5285df9331fa 100644 > --- a/drivers/gpu/drm/i915/i915_gem_userptr.c > +++ b/drivers/gpu/drm/i915/i915_gem_userptr.c > @@ -112,10 +112,11 @@ static void del_object(struct i915_mmu_object *mo) > mo->attached = false; > } > > -static void i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn, > +static int i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn, > struct mm_struct *mm, > unsigned long start, > - unsigned long end) > + unsigned long end, > + bool blockable) > { > struct i915_mmu_notifier *mn = > container_of(_mn, struct i915_mmu_notifier, mn); > @@ -124,7 +125,7 @@ static void i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn, > LIST_HEAD(cancelled); > > if (RB_EMPTY_ROOT(&mn->objects.rb_root)) > - return; > + return 0; > > /* interval ranges are inclusive, but invalidate range is exclusive */ > end--; > @@ -152,7 +153,8 @@ static void i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn, > del_object(mo); > spin_unlock(&mn->lock); > > - if (!list_empty(&cancelled)) > + /* TODO: can we skip waiting here? */ > + if (!list_empty(&cancelled) && blockable) > flush_workqueue(mn->wq); > } > > diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c > index abd24975c9b1..b47e828b725d 100644 > --- a/drivers/gpu/drm/radeon/radeon_mn.c > +++ b/drivers/gpu/drm/radeon/radeon_mn.c > @@ -118,10 +118,11 @@ static void radeon_mn_release(struct mmu_notifier *mn, > * We block for all BOs between start and end to be idle and > * unmap them by move them into system domain again. > */ > -static void radeon_mn_invalidate_range_start(struct mmu_notifier *mn, > +static int radeon_mn_invalidate_range_start(struct mmu_notifier *mn, > struct mm_struct *mm, > unsigned long start, > - unsigned long end) > + unsigned long end, > + bool blockable) > { > struct radeon_mn *rmn = container_of(mn, struct radeon_mn, mn); > struct ttm_operation_ctx ctx = { false, false }; > @@ -130,7 +131,13 @@ static void radeon_mn_invalidate_range_start(struct mmu_notifier *mn, > /* notification is exclusive, but interval is inclusive */ > end -= 1; > > - mutex_lock(&rmn->lock); > + /* TODO we should be able to split locking for interval tree and > + * the tear down. > + */ > + if (blockable) > + mutex_lock(&rmn->lock); > + else if (!mutex_trylock(&rmn->lock)) > + return -EAGAIN; > > it = interval_tree_iter_first(&rmn->objects, start, end); > while (it) { > @@ -167,6 +174,8 @@ static void radeon_mn_invalidate_range_start(struct mmu_notifier *mn, > } > > mutex_unlock(&rmn->lock); > + > + return 0; > } > > static const struct mmu_notifier_ops radeon_mn_ops = { > diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c > index 182436b92ba9..f65f6a29daae 100644 > --- a/drivers/infiniband/core/umem_odp.c > +++ b/drivers/infiniband/core/umem_odp.c > @@ -207,22 +207,29 @@ static int invalidate_range_start_trampoline(struct ib_umem *item, u64 start, > return 0; > } > > -static void ib_umem_notifier_invalidate_range_start(struct mmu_notifier *mn, > +static int ib_umem_notifier_invalidate_range_start(struct mmu_notifier *mn, > struct mm_struct *mm, > unsigned long start, > - unsigned long end) > + unsigned long end, > + bool blockable) > { > struct ib_ucontext *context = container_of(mn, struct ib_ucontext, mn); > > if (!context->invalidate_range) > - return; > + return 0; > + > + if (blockable) > + down_read(&context->umem_rwsem); > + else if (!down_read_trylock(&context->umem_rwsem)) > + return -EAGAIN; > > ib_ucontext_notifier_start_account(context); > - down_read(&context->umem_rwsem); > rbt_ib_umem_for_each_in_range(&context->umem_tree, start, > end, > invalidate_range_start_trampoline, NULL); > up_read(&context->umem_rwsem); > + > + return 0; > } > > static int invalidate_range_end_trampoline(struct ib_umem *item, u64 start, > diff --git a/drivers/infiniband/hw/hfi1/mmu_rb.c b/drivers/infiniband/hw/hfi1/mmu_rb.c > index 70aceefe14d5..8780560d1623 100644 > --- a/drivers/infiniband/hw/hfi1/mmu_rb.c > +++ b/drivers/infiniband/hw/hfi1/mmu_rb.c > @@ -284,10 +284,11 @@ void hfi1_mmu_rb_remove(struct mmu_rb_handler *handler, > handler->ops->remove(handler->ops_arg, node); > } > > -static void mmu_notifier_range_start(struct mmu_notifier *mn, > +static int mmu_notifier_range_start(struct mmu_notifier *mn, > struct mm_struct *mm, > unsigned long start, > - unsigned long end) > + unsigned long end, > + bool blockable) > { > struct mmu_rb_handler *handler = > container_of(mn, struct mmu_rb_handler, mn); > @@ -313,6 +314,8 @@ static void mmu_notifier_range_start(struct mmu_notifier *mn, > > if (added) > queue_work(handler->wq, &handler->del_work); > + > + return 0; > } > > /* > diff --git a/drivers/misc/mic/scif/scif_dma.c b/drivers/misc/mic/scif/scif_dma.c > index 63d6246d6dff..d940568bed87 100644 > --- a/drivers/misc/mic/scif/scif_dma.c > +++ b/drivers/misc/mic/scif/scif_dma.c > @@ -200,15 +200,18 @@ static void scif_mmu_notifier_release(struct mmu_notifier *mn, > schedule_work(&scif_info.misc_work); > } > > -static void scif_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, > +static int scif_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, > struct mm_struct *mm, > unsigned long start, > - unsigned long end) > + unsigned long end, > + bool blockable) > { > struct scif_mmu_notif *mmn; > > mmn = container_of(mn, struct scif_mmu_notif, ep_mmu_notifier); > scif_rma_destroy_tcw(mmn, start, end - start); > + > + return 0 > } > > static void scif_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn, > diff --git a/drivers/misc/sgi-gru/grutlbpurge.c b/drivers/misc/sgi-gru/grutlbpurge.c > index a3454eb56fbf..be28f05bfafa 100644 > --- a/drivers/misc/sgi-gru/grutlbpurge.c > +++ b/drivers/misc/sgi-gru/grutlbpurge.c > @@ -219,9 +219,10 @@ void gru_flush_all_tlb(struct gru_state *gru) > /* > * MMUOPS notifier callout functions > */ > -static void gru_invalidate_range_start(struct mmu_notifier *mn, > +static int gru_invalidate_range_start(struct mmu_notifier *mn, > struct mm_struct *mm, > - unsigned long start, unsigned long end) > + unsigned long start, unsigned long end, > + bool blockable) > { > struct gru_mm_struct *gms = container_of(mn, struct gru_mm_struct, > ms_notifier); > @@ -231,6 +232,8 @@ static void gru_invalidate_range_start(struct mmu_notifier *mn, > gru_dbg(grudev, "gms %p, start 0x%lx, end 0x%lx, act %d\n", gms, > start, end, atomic_read(&gms->ms_range_active)); > gru_flush_tlb_range(gms, start, end - start); > + > + return 0; > } > > static void gru_invalidate_range_end(struct mmu_notifier *mn, > diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c > index bd56653b9bbc..50724d09fe5c 100644 > --- a/drivers/xen/gntdev.c > +++ b/drivers/xen/gntdev.c > @@ -465,14 +465,20 @@ static void unmap_if_in_range(struct grant_map *map, > WARN_ON(err); > } > > -static void mn_invl_range_start(struct mmu_notifier *mn, > +static int mn_invl_range_start(struct mmu_notifier *mn, > struct mm_struct *mm, > - unsigned long start, unsigned long end) > + unsigned long start, unsigned long end, > + bool blockable) > { > struct gntdev_priv *priv = container_of(mn, struct gntdev_priv, mn); > struct grant_map *map; > > - mutex_lock(&priv->lock); > + /* TODO do we really need a mutex here? */ > + if (blockable) > + mutex_lock(&priv->lock); > + else if (!mutex_trylock(&priv->lock)) > + return -EAGAIN; > + > list_for_each_entry(map, &priv->maps, next) { > unmap_if_in_range(map, start, end); > } > @@ -480,6 +486,8 @@ static void mn_invl_range_start(struct mmu_notifier *mn, > unmap_if_in_range(map, start, end); > } > mutex_unlock(&priv->lock); > + > + return true; > } > > static void mn_release(struct mmu_notifier *mn, > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h > index 4ee7bc548a83..e4181063e755 100644 > --- a/include/linux/kvm_host.h > +++ b/include/linux/kvm_host.h > @@ -1275,7 +1275,7 @@ static inline long kvm_arch_vcpu_async_ioctl(struct file *filp, > } > #endif /* CONFIG_HAVE_KVM_VCPU_ASYNC_IOCTL */ > > -void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm, > +int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm, > unsigned long start, unsigned long end); > > #ifdef CONFIG_HAVE_KVM_VCPU_RUN_PID_CHANGE > diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h > index 392e6af82701..369867501bed 100644 > --- a/include/linux/mmu_notifier.h > +++ b/include/linux/mmu_notifier.h > @@ -230,7 +230,8 @@ extern int __mmu_notifier_test_young(struct mm_struct *mm, > extern void __mmu_notifier_change_pte(struct mm_struct *mm, > unsigned long address, pte_t pte); > extern void __mmu_notifier_invalidate_range_start(struct mm_struct *mm, > - unsigned long start, unsigned long end); > + unsigned long start, unsigned long end, > + bool blockable); > extern void __mmu_notifier_invalidate_range_end(struct mm_struct *mm, > unsigned long start, unsigned long end, > bool only_end); > @@ -281,7 +282,17 @@ static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm, > unsigned long start, unsigned long end) > { > if (mm_has_notifiers(mm)) > - __mmu_notifier_invalidate_range_start(mm, start, end); > + __mmu_notifier_invalidate_range_start(mm, start, end, true); > +} > + > +static inline int mmu_notifier_invalidate_range_start_nonblock(struct mm_struct *mm, > + unsigned long start, unsigned long end) > +{ > + int ret = 0; > + if (mm_has_notifiers(mm)) > + ret = __mmu_notifier_invalidate_range_start(mm, start, end, false); > + > + return ret; > } > > static inline void mmu_notifier_invalidate_range_end(struct mm_struct *mm, > diff --git a/mm/hmm.c b/mm/hmm.c > index de7b6bf77201..81fd57bd2634 100644 > --- a/mm/hmm.c > +++ b/mm/hmm.c > @@ -177,16 +177,19 @@ static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm) > up_write(&hmm->mirrors_sem); > } > > -static void hmm_invalidate_range_start(struct mmu_notifier *mn, > +static int hmm_invalidate_range_start(struct mmu_notifier *mn, > struct mm_struct *mm, > unsigned long start, > - unsigned long end) > + unsigned long end, > + bool blockable) > { > struct hmm *hmm = mm->hmm; > > VM_BUG_ON(!hmm); > > atomic_inc(&hmm->sequence); > + > + return 0; > } > > static void hmm_invalidate_range_end(struct mmu_notifier *mn, > diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c > index eff6b88a993f..30cc43121da9 100644 > --- a/mm/mmu_notifier.c > +++ b/mm/mmu_notifier.c > @@ -174,18 +174,25 @@ void __mmu_notifier_change_pte(struct mm_struct *mm, unsigned long address, > srcu_read_unlock(&srcu, id); > } > > -void __mmu_notifier_invalidate_range_start(struct mm_struct *mm, > - unsigned long start, unsigned long end) > +int __mmu_notifier_invalidate_range_start(struct mm_struct *mm, > + unsigned long start, unsigned long end, > + bool blockable) > { > struct mmu_notifier *mn; > + int ret = 0; > int id; > > id = srcu_read_lock(&srcu); > hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) { > - if (mn->ops->invalidate_range_start) > - mn->ops->invalidate_range_start(mn, mm, start, end); > + if (mn->ops->invalidate_range_start) { > + int _ret = mn->ops->invalidate_range_start(mn, mm, start, end, blockable); > + if (_ret) > + ret = _ret; > + } > } > srcu_read_unlock(&srcu, id); > + > + return ret; > } > EXPORT_SYMBOL_GPL(__mmu_notifier_invalidate_range_start); > > diff --git a/mm/oom_kill.c b/mm/oom_kill.c > index 84081e77bc51..7e0c6e78ae5c 100644 > --- a/mm/oom_kill.c > +++ b/mm/oom_kill.c > @@ -479,9 +479,10 @@ static DECLARE_WAIT_QUEUE_HEAD(oom_reaper_wait); > static struct task_struct *oom_reaper_list; > static DEFINE_SPINLOCK(oom_reaper_lock); > > -void __oom_reap_task_mm(struct mm_struct *mm) > +bool __oom_reap_task_mm(struct mm_struct *mm) > { > struct vm_area_struct *vma; > + bool ret = true; > > /* > * Tell all users of get_user/copy_from_user etc... that the content > @@ -511,12 +512,17 @@ void __oom_reap_task_mm(struct mm_struct *mm) > struct mmu_gather tlb; > > tlb_gather_mmu(&tlb, mm, start, end); > - mmu_notifier_invalidate_range_start(mm, start, end); > + if (mmu_notifier_invalidate_range_start_nonblock(mm, start, end)) { > + ret = false; > + continue; > + } > unmap_page_range(&tlb, vma, start, end, NULL); > mmu_notifier_invalidate_range_end(mm, start, end); > tlb_finish_mmu(&tlb, start, end); > } > } > + > + return ret; > } > > static bool oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm) > @@ -545,18 +551,6 @@ static bool oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm) > goto unlock_oom; > } > > - /* > - * If the mm has invalidate_{start,end}() notifiers that could block, > - * sleep to give the oom victim some more time. > - * TODO: we really want to get rid of this ugly hack and make sure that > - * notifiers cannot block for unbounded amount of time > - */ > - if (mm_has_blockable_invalidate_notifiers(mm)) { > - up_read(&mm->mmap_sem); > - schedule_timeout_idle(HZ); > - goto unlock_oom; > - } > - > /* > * MMF_OOM_SKIP is set by exit_mmap when the OOM reaper can't > * work on the mm anymore. The check for MMF_OOM_SKIP must run > @@ -571,7 +565,12 @@ static bool oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm) > > trace_start_task_reaping(tsk->pid); > > - __oom_reap_task_mm(mm); > + /* failed to reap part of the address space. Try again later */ > + if (!__oom_reap_task_mm(mm)) { > + up_read(&mm->mmap_sem); > + ret = false; > + goto out_unlock; > + } > > pr_info("oom_reaper: reaped process %d (%s), now anon-rss:%lukB, file-rss:%lukB, shmem-rss:%lukB\n", > task_pid_nr(tsk), tsk->comm, > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c > index ada21f47f22b..6f7e709d2944 100644 > --- a/virt/kvm/kvm_main.c > +++ b/virt/kvm/kvm_main.c > @@ -135,7 +135,7 @@ static void kvm_uevent_notify_change(unsigned int type, struct kvm *kvm); > static unsigned long long kvm_createvm_count; > static unsigned long long kvm_active_vms; > > -__weak void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm, > +__weak int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm, > unsigned long start, unsigned long end) > { > } > @@ -354,13 +354,15 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn, > srcu_read_unlock(&kvm->srcu, idx); > } > > -static void kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, > +static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, > struct mm_struct *mm, > unsigned long start, > - unsigned long end) > + unsigned long end, > + bool blockable) > { > struct kvm *kvm = mmu_notifier_to_kvm(mn); > int need_tlb_flush = 0, idx; > + int ret; > > idx = srcu_read_lock(&kvm->srcu); > spin_lock(&kvm->mmu_lock); > @@ -378,9 +380,11 @@ static void kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, > > spin_unlock(&kvm->mmu_lock); > > - kvm_arch_mmu_notifier_invalidate_range(kvm, start, end); > + ret = kvm_arch_mmu_notifier_invalidate_range(kvm, start, end, blockable); > > srcu_read_unlock(&kvm->srcu, idx); > + > + return ret; > } > > static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn, From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pf0-f197.google.com (mail-pf0-f197.google.com [209.85.192.197]) by kanga.kvack.org (Postfix) with ESMTP id 6E48A6B0269 for ; Fri, 22 Jun 2018 11:13:24 -0400 (EDT) Received: by mail-pf0-f197.google.com with SMTP id y8-v6so3340511pfl.17 for ; Fri, 22 Jun 2018 08:13:24 -0700 (PDT) Received: from NAM05-CO1-obe.outbound.protection.outlook.com (mail-eopbgr720083.outbound.protection.outlook.com. [40.107.72.83]) by mx.google.com with ESMTPS id f71-v6si7471460pfc.316.2018.06.22.08.13.22 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Fri, 22 Jun 2018 08:13:22 -0700 (PDT) Subject: Re: [RFC PATCH] mm, oom: distinguish blockable mode for mmu notifiers References: <20180622150242.16558-1-mhocko@kernel.org> From: =?UTF-8?Q?Christian_K=c3=b6nig?= Message-ID: <0aa9f695-5702-6704-9462-7779cbfdb3fd@amd.com> Date: Fri, 22 Jun 2018 17:13:02 +0200 MIME-Version: 1.0 In-Reply-To: <20180622150242.16558-1-mhocko@kernel.org> Content-Type: text/plain; charset=utf-8; format=flowed Content-Transfer-Encoding: 8bit Content-Language: en-US Sender: owner-linux-mm@kvack.org List-ID: To: Michal Hocko , LKML Cc: Michal Hocko , "David (ChunMing) Zhou" , Paolo Bonzini , =?UTF-8?B?UmFkaW0gS3LEjW3DocWZ?= , Alex Deucher , David Airlie , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Doug Ledford , Jason Gunthorpe , Mike Marciniszyn , Dennis Dalessandro , Sudeep Dutt , Ashutosh Dixit , Dimitri Sivanich , Boris Ostrovsky , Juergen Gross , =?UTF-8?B?SsOpcsO0bWUgR2xpc3Nl?= , Andrea Arcangeli , kvm@vger.kernel.org, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-rdma@vger.kernel.org, xen-devel@lists.xenproject.org, linux-mm@kvack.org, David Rientjes , Felix Kuehling Hi Michal, [Adding Felix as well] Well first of all you have a misconception why at least the AMD graphics driver need to be able to sleep in an MMU notifier: We need to sleep because we need to wait for hardware operations to finish and *NOT* because we need to wait for locks. I'm not sure if your flag now means that you generally can't sleep in MMU notifiers any more, but if that's the case at least AMD hardware will break badly. In our case the approach of waiting for a short time for the process to be reaped and then select another victim actually sounds like the right thing to do. What we also already try to do is to abort hardware operations with the address space when we detect that the process is dying, but that can certainly be improved. Regards, Christian. Am 22.06.2018 um 17:02 schrieb Michal Hocko: > From: Michal Hocko > > There are several blockable mmu notifiers which might sleep in > mmu_notifier_invalidate_range_start and that is a problem for the > oom_reaper because it needs to guarantee a forward progress so it cannot > depend on any sleepable locks. Currently we simply back off and mark an > oom victim with blockable mmu notifiers as done after a short sleep. > That can result in selecting a new oom victim prematurely because the > previous one still hasn't torn its memory down yet. > > We can do much better though. Even if mmu notifiers use sleepable locks > there is no reason to automatically assume those locks are held. > Moreover most notifiers only care about a portion of the address > space. This patch handles the first part of the problem. > __mmu_notifier_invalidate_range_start gets a blockable flag and > callbacks are not allowed to sleep if the flag is set to false. This is > achieved by using trylock instead of the sleepable lock for most > callbacks. I think we can improve that even further because there is > a common pattern to do a range lookup first and then do something about > that. The first part can be done without a sleeping lock I presume. > > Anyway, what does the oom_reaper do with all that? We do not have to > fail right away. We simply retry if there is at least one notifier which > couldn't make any progress. A retry loop is already implemented to wait > for the mmap_sem and this is basically the same thing. > > Cc: "David (ChunMing) Zhou" > Cc: Paolo Bonzini > Cc: "Radim KrA?mA!A?" > Cc: Alex Deucher > Cc: "Christian KA?nig" > Cc: David Airlie > Cc: Jani Nikula > Cc: Joonas Lahtinen > Cc: Rodrigo Vivi > Cc: Doug Ledford > Cc: Jason Gunthorpe > Cc: Mike Marciniszyn > Cc: Dennis Dalessandro > Cc: Sudeep Dutt > Cc: Ashutosh Dixit > Cc: Dimitri Sivanich > Cc: Boris Ostrovsky > Cc: Juergen Gross > Cc: "JA(C)rA'me Glisse" > Cc: Andrea Arcangeli > Cc: kvm@vger.kernel.org (open list:KERNEL VIRTUAL MACHINE FOR X86 (KVM/x86)) > Cc: linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)) > Cc: amd-gfx@lists.freedesktop.org (open list:RADEON and AMDGPU DRM DRIVERS) > Cc: dri-devel@lists.freedesktop.org (open list:DRM DRIVERS) > Cc: intel-gfx@lists.freedesktop.org (open list:INTEL DRM DRIVERS (excluding Poulsbo, Moorestow...) > Cc: linux-rdma@vger.kernel.org (open list:INFINIBAND SUBSYSTEM) > Cc: xen-devel@lists.xenproject.org (moderated list:XEN HYPERVISOR INTERFACE) > Cc: linux-mm@kvack.org (open list:HMM - Heterogeneous Memory Management) > Reported-by: David Rientjes > Signed-off-by: Michal Hocko > --- > > Hi, > this is an RFC and not tested at all. I am not very familiar with the > mmu notifiers semantics very much so this is a crude attempt to achieve > what I need basically. It might be completely wrong but I would like > to discuss what would be a better way if that is the case. > > get_maintainers gave me quite large list of people to CC so I had to trim > it down. If you think I have forgot somebody, please let me know > > Any feedback is highly appreciated. > > arch/x86/kvm/x86.c | 7 ++++-- > drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c | 33 +++++++++++++++++++------ > drivers/gpu/drm/i915/i915_gem_userptr.c | 10 +++++--- > drivers/gpu/drm/radeon/radeon_mn.c | 15 ++++++++--- > drivers/infiniband/core/umem_odp.c | 15 ++++++++--- > drivers/infiniband/hw/hfi1/mmu_rb.c | 7 ++++-- > drivers/misc/mic/scif/scif_dma.c | 7 ++++-- > drivers/misc/sgi-gru/grutlbpurge.c | 7 ++++-- > drivers/xen/gntdev.c | 14 ++++++++--- > include/linux/kvm_host.h | 2 +- > include/linux/mmu_notifier.h | 15 +++++++++-- > mm/hmm.c | 7 ++++-- > mm/mmu_notifier.c | 15 ++++++++--- > mm/oom_kill.c | 29 +++++++++++----------- > virt/kvm/kvm_main.c | 12 ++++++--- > 15 files changed, 137 insertions(+), 58 deletions(-) > > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c > index 6bcecc325e7e..ac08f5d711be 100644 > --- a/arch/x86/kvm/x86.c > +++ b/arch/x86/kvm/x86.c > @@ -7203,8 +7203,9 @@ static void vcpu_load_eoi_exitmap(struct kvm_vcpu *vcpu) > kvm_x86_ops->load_eoi_exitmap(vcpu, eoi_exit_bitmap); > } > > -void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm, > - unsigned long start, unsigned long end) > +int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm, > + unsigned long start, unsigned long end, > + bool blockable) > { > unsigned long apic_address; > > @@ -7215,6 +7216,8 @@ void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm, > apic_address = gfn_to_hva(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT); > if (start <= apic_address && apic_address < end) > kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD); > + > + return 0; > } > > void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu) > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c > index 83e344fbb50a..d138a526feff 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c > @@ -136,12 +136,18 @@ void amdgpu_mn_unlock(struct amdgpu_mn *mn) > * > * Take the rmn read side lock. > */ > -static void amdgpu_mn_read_lock(struct amdgpu_mn *rmn) > +static int amdgpu_mn_read_lock(struct amdgpu_mn *rmn, bool blockable) > { > - mutex_lock(&rmn->read_lock); > + if (blockable) > + mutex_lock(&rmn->read_lock); > + else if (!mutex_trylock(&rmn->read_lock)) > + return -EAGAIN; > + > if (atomic_inc_return(&rmn->recursion) == 1) > down_read_non_owner(&rmn->lock); > mutex_unlock(&rmn->read_lock); > + > + return 0; > } > > /** > @@ -197,10 +203,11 @@ static void amdgpu_mn_invalidate_node(struct amdgpu_mn_node *node, > * We block for all BOs between start and end to be idle and > * unmap them by move them into system domain again. > */ > -static void amdgpu_mn_invalidate_range_start_gfx(struct mmu_notifier *mn, > +static int amdgpu_mn_invalidate_range_start_gfx(struct mmu_notifier *mn, > struct mm_struct *mm, > unsigned long start, > - unsigned long end) > + unsigned long end, > + bool blockable) > { > struct amdgpu_mn *rmn = container_of(mn, struct amdgpu_mn, mn); > struct interval_tree_node *it; > @@ -208,7 +215,11 @@ static void amdgpu_mn_invalidate_range_start_gfx(struct mmu_notifier *mn, > /* notification is exclusive, but interval is inclusive */ > end -= 1; > > - amdgpu_mn_read_lock(rmn); > + /* TODO we should be able to split locking for interval tree and > + * amdgpu_mn_invalidate_node > + */ > + if (amdgpu_mn_read_lock(rmn, blockable)) > + return -EAGAIN; > > it = interval_tree_iter_first(&rmn->objects, start, end); > while (it) { > @@ -219,6 +230,8 @@ static void amdgpu_mn_invalidate_range_start_gfx(struct mmu_notifier *mn, > > amdgpu_mn_invalidate_node(node, start, end); > } > + > + return 0; > } > > /** > @@ -233,10 +246,11 @@ static void amdgpu_mn_invalidate_range_start_gfx(struct mmu_notifier *mn, > * necessitates evicting all user-mode queues of the process. The BOs > * are restorted in amdgpu_mn_invalidate_range_end_hsa. > */ > -static void amdgpu_mn_invalidate_range_start_hsa(struct mmu_notifier *mn, > +static int amdgpu_mn_invalidate_range_start_hsa(struct mmu_notifier *mn, > struct mm_struct *mm, > unsigned long start, > - unsigned long end) > + unsigned long end, > + bool blockable) > { > struct amdgpu_mn *rmn = container_of(mn, struct amdgpu_mn, mn); > struct interval_tree_node *it; > @@ -244,7 +258,8 @@ static void amdgpu_mn_invalidate_range_start_hsa(struct mmu_notifier *mn, > /* notification is exclusive, but interval is inclusive */ > end -= 1; > > - amdgpu_mn_read_lock(rmn); > + if (amdgpu_mn_read_lock(rmn, blockable)) > + return -EAGAIN; > > it = interval_tree_iter_first(&rmn->objects, start, end); > while (it) { > @@ -262,6 +277,8 @@ static void amdgpu_mn_invalidate_range_start_hsa(struct mmu_notifier *mn, > amdgpu_amdkfd_evict_userptr(mem, mm); > } > } > + > + return 0; > } > > /** > diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c > index 854bd51b9478..5285df9331fa 100644 > --- a/drivers/gpu/drm/i915/i915_gem_userptr.c > +++ b/drivers/gpu/drm/i915/i915_gem_userptr.c > @@ -112,10 +112,11 @@ static void del_object(struct i915_mmu_object *mo) > mo->attached = false; > } > > -static void i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn, > +static int i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn, > struct mm_struct *mm, > unsigned long start, > - unsigned long end) > + unsigned long end, > + bool blockable) > { > struct i915_mmu_notifier *mn = > container_of(_mn, struct i915_mmu_notifier, mn); > @@ -124,7 +125,7 @@ static void i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn, > LIST_HEAD(cancelled); > > if (RB_EMPTY_ROOT(&mn->objects.rb_root)) > - return; > + return 0; > > /* interval ranges are inclusive, but invalidate range is exclusive */ > end--; > @@ -152,7 +153,8 @@ static void i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn, > del_object(mo); > spin_unlock(&mn->lock); > > - if (!list_empty(&cancelled)) > + /* TODO: can we skip waiting here? */ > + if (!list_empty(&cancelled) && blockable) > flush_workqueue(mn->wq); > } > > diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c > index abd24975c9b1..b47e828b725d 100644 > --- a/drivers/gpu/drm/radeon/radeon_mn.c > +++ b/drivers/gpu/drm/radeon/radeon_mn.c > @@ -118,10 +118,11 @@ static void radeon_mn_release(struct mmu_notifier *mn, > * We block for all BOs between start and end to be idle and > * unmap them by move them into system domain again. > */ > -static void radeon_mn_invalidate_range_start(struct mmu_notifier *mn, > +static int radeon_mn_invalidate_range_start(struct mmu_notifier *mn, > struct mm_struct *mm, > unsigned long start, > - unsigned long end) > + unsigned long end, > + bool blockable) > { > struct radeon_mn *rmn = container_of(mn, struct radeon_mn, mn); > struct ttm_operation_ctx ctx = { false, false }; > @@ -130,7 +131,13 @@ static void radeon_mn_invalidate_range_start(struct mmu_notifier *mn, > /* notification is exclusive, but interval is inclusive */ > end -= 1; > > - mutex_lock(&rmn->lock); > + /* TODO we should be able to split locking for interval tree and > + * the tear down. > + */ > + if (blockable) > + mutex_lock(&rmn->lock); > + else if (!mutex_trylock(&rmn->lock)) > + return -EAGAIN; > > it = interval_tree_iter_first(&rmn->objects, start, end); > while (it) { > @@ -167,6 +174,8 @@ static void radeon_mn_invalidate_range_start(struct mmu_notifier *mn, > } > > mutex_unlock(&rmn->lock); > + > + return 0; > } > > static const struct mmu_notifier_ops radeon_mn_ops = { > diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c > index 182436b92ba9..f65f6a29daae 100644 > --- a/drivers/infiniband/core/umem_odp.c > +++ b/drivers/infiniband/core/umem_odp.c > @@ -207,22 +207,29 @@ static int invalidate_range_start_trampoline(struct ib_umem *item, u64 start, > return 0; > } > > -static void ib_umem_notifier_invalidate_range_start(struct mmu_notifier *mn, > +static int ib_umem_notifier_invalidate_range_start(struct mmu_notifier *mn, > struct mm_struct *mm, > unsigned long start, > - unsigned long end) > + unsigned long end, > + bool blockable) > { > struct ib_ucontext *context = container_of(mn, struct ib_ucontext, mn); > > if (!context->invalidate_range) > - return; > + return 0; > + > + if (blockable) > + down_read(&context->umem_rwsem); > + else if (!down_read_trylock(&context->umem_rwsem)) > + return -EAGAIN; > > ib_ucontext_notifier_start_account(context); > - down_read(&context->umem_rwsem); > rbt_ib_umem_for_each_in_range(&context->umem_tree, start, > end, > invalidate_range_start_trampoline, NULL); > up_read(&context->umem_rwsem); > + > + return 0; > } > > static int invalidate_range_end_trampoline(struct ib_umem *item, u64 start, > diff --git a/drivers/infiniband/hw/hfi1/mmu_rb.c b/drivers/infiniband/hw/hfi1/mmu_rb.c > index 70aceefe14d5..8780560d1623 100644 > --- a/drivers/infiniband/hw/hfi1/mmu_rb.c > +++ b/drivers/infiniband/hw/hfi1/mmu_rb.c > @@ -284,10 +284,11 @@ void hfi1_mmu_rb_remove(struct mmu_rb_handler *handler, > handler->ops->remove(handler->ops_arg, node); > } > > -static void mmu_notifier_range_start(struct mmu_notifier *mn, > +static int mmu_notifier_range_start(struct mmu_notifier *mn, > struct mm_struct *mm, > unsigned long start, > - unsigned long end) > + unsigned long end, > + bool blockable) > { > struct mmu_rb_handler *handler = > container_of(mn, struct mmu_rb_handler, mn); > @@ -313,6 +314,8 @@ static void mmu_notifier_range_start(struct mmu_notifier *mn, > > if (added) > queue_work(handler->wq, &handler->del_work); > + > + return 0; > } > > /* > diff --git a/drivers/misc/mic/scif/scif_dma.c b/drivers/misc/mic/scif/scif_dma.c > index 63d6246d6dff..d940568bed87 100644 > --- a/drivers/misc/mic/scif/scif_dma.c > +++ b/drivers/misc/mic/scif/scif_dma.c > @@ -200,15 +200,18 @@ static void scif_mmu_notifier_release(struct mmu_notifier *mn, > schedule_work(&scif_info.misc_work); > } > > -static void scif_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, > +static int scif_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, > struct mm_struct *mm, > unsigned long start, > - unsigned long end) > + unsigned long end, > + bool blockable) > { > struct scif_mmu_notif *mmn; > > mmn = container_of(mn, struct scif_mmu_notif, ep_mmu_notifier); > scif_rma_destroy_tcw(mmn, start, end - start); > + > + return 0 > } > > static void scif_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn, > diff --git a/drivers/misc/sgi-gru/grutlbpurge.c b/drivers/misc/sgi-gru/grutlbpurge.c > index a3454eb56fbf..be28f05bfafa 100644 > --- a/drivers/misc/sgi-gru/grutlbpurge.c > +++ b/drivers/misc/sgi-gru/grutlbpurge.c > @@ -219,9 +219,10 @@ void gru_flush_all_tlb(struct gru_state *gru) > /* > * MMUOPS notifier callout functions > */ > -static void gru_invalidate_range_start(struct mmu_notifier *mn, > +static int gru_invalidate_range_start(struct mmu_notifier *mn, > struct mm_struct *mm, > - unsigned long start, unsigned long end) > + unsigned long start, unsigned long end, > + bool blockable) > { > struct gru_mm_struct *gms = container_of(mn, struct gru_mm_struct, > ms_notifier); > @@ -231,6 +232,8 @@ static void gru_invalidate_range_start(struct mmu_notifier *mn, > gru_dbg(grudev, "gms %p, start 0x%lx, end 0x%lx, act %d\n", gms, > start, end, atomic_read(&gms->ms_range_active)); > gru_flush_tlb_range(gms, start, end - start); > + > + return 0; > } > > static void gru_invalidate_range_end(struct mmu_notifier *mn, > diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c > index bd56653b9bbc..50724d09fe5c 100644 > --- a/drivers/xen/gntdev.c > +++ b/drivers/xen/gntdev.c > @@ -465,14 +465,20 @@ static void unmap_if_in_range(struct grant_map *map, > WARN_ON(err); > } > > -static void mn_invl_range_start(struct mmu_notifier *mn, > +static int mn_invl_range_start(struct mmu_notifier *mn, > struct mm_struct *mm, > - unsigned long start, unsigned long end) > + unsigned long start, unsigned long end, > + bool blockable) > { > struct gntdev_priv *priv = container_of(mn, struct gntdev_priv, mn); > struct grant_map *map; > > - mutex_lock(&priv->lock); > + /* TODO do we really need a mutex here? */ > + if (blockable) > + mutex_lock(&priv->lock); > + else if (!mutex_trylock(&priv->lock)) > + return -EAGAIN; > + > list_for_each_entry(map, &priv->maps, next) { > unmap_if_in_range(map, start, end); > } > @@ -480,6 +486,8 @@ static void mn_invl_range_start(struct mmu_notifier *mn, > unmap_if_in_range(map, start, end); > } > mutex_unlock(&priv->lock); > + > + return true; > } > > static void mn_release(struct mmu_notifier *mn, > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h > index 4ee7bc548a83..e4181063e755 100644 > --- a/include/linux/kvm_host.h > +++ b/include/linux/kvm_host.h > @@ -1275,7 +1275,7 @@ static inline long kvm_arch_vcpu_async_ioctl(struct file *filp, > } > #endif /* CONFIG_HAVE_KVM_VCPU_ASYNC_IOCTL */ > > -void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm, > +int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm, > unsigned long start, unsigned long end); > > #ifdef CONFIG_HAVE_KVM_VCPU_RUN_PID_CHANGE > diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h > index 392e6af82701..369867501bed 100644 > --- a/include/linux/mmu_notifier.h > +++ b/include/linux/mmu_notifier.h > @@ -230,7 +230,8 @@ extern int __mmu_notifier_test_young(struct mm_struct *mm, > extern void __mmu_notifier_change_pte(struct mm_struct *mm, > unsigned long address, pte_t pte); > extern void __mmu_notifier_invalidate_range_start(struct mm_struct *mm, > - unsigned long start, unsigned long end); > + unsigned long start, unsigned long end, > + bool blockable); > extern void __mmu_notifier_invalidate_range_end(struct mm_struct *mm, > unsigned long start, unsigned long end, > bool only_end); > @@ -281,7 +282,17 @@ static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm, > unsigned long start, unsigned long end) > { > if (mm_has_notifiers(mm)) > - __mmu_notifier_invalidate_range_start(mm, start, end); > + __mmu_notifier_invalidate_range_start(mm, start, end, true); > +} > + > +static inline int mmu_notifier_invalidate_range_start_nonblock(struct mm_struct *mm, > + unsigned long start, unsigned long end) > +{ > + int ret = 0; > + if (mm_has_notifiers(mm)) > + ret = __mmu_notifier_invalidate_range_start(mm, start, end, false); > + > + return ret; > } > > static inline void mmu_notifier_invalidate_range_end(struct mm_struct *mm, > diff --git a/mm/hmm.c b/mm/hmm.c > index de7b6bf77201..81fd57bd2634 100644 > --- a/mm/hmm.c > +++ b/mm/hmm.c > @@ -177,16 +177,19 @@ static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm) > up_write(&hmm->mirrors_sem); > } > > -static void hmm_invalidate_range_start(struct mmu_notifier *mn, > +static int hmm_invalidate_range_start(struct mmu_notifier *mn, > struct mm_struct *mm, > unsigned long start, > - unsigned long end) > + unsigned long end, > + bool blockable) > { > struct hmm *hmm = mm->hmm; > > VM_BUG_ON(!hmm); > > atomic_inc(&hmm->sequence); > + > + return 0; > } > > static void hmm_invalidate_range_end(struct mmu_notifier *mn, > diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c > index eff6b88a993f..30cc43121da9 100644 > --- a/mm/mmu_notifier.c > +++ b/mm/mmu_notifier.c > @@ -174,18 +174,25 @@ void __mmu_notifier_change_pte(struct mm_struct *mm, unsigned long address, > srcu_read_unlock(&srcu, id); > } > > -void __mmu_notifier_invalidate_range_start(struct mm_struct *mm, > - unsigned long start, unsigned long end) > +int __mmu_notifier_invalidate_range_start(struct mm_struct *mm, > + unsigned long start, unsigned long end, > + bool blockable) > { > struct mmu_notifier *mn; > + int ret = 0; > int id; > > id = srcu_read_lock(&srcu); > hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) { > - if (mn->ops->invalidate_range_start) > - mn->ops->invalidate_range_start(mn, mm, start, end); > + if (mn->ops->invalidate_range_start) { > + int _ret = mn->ops->invalidate_range_start(mn, mm, start, end, blockable); > + if (_ret) > + ret = _ret; > + } > } > srcu_read_unlock(&srcu, id); > + > + return ret; > } > EXPORT_SYMBOL_GPL(__mmu_notifier_invalidate_range_start); > > diff --git a/mm/oom_kill.c b/mm/oom_kill.c > index 84081e77bc51..7e0c6e78ae5c 100644 > --- a/mm/oom_kill.c > +++ b/mm/oom_kill.c > @@ -479,9 +479,10 @@ static DECLARE_WAIT_QUEUE_HEAD(oom_reaper_wait); > static struct task_struct *oom_reaper_list; > static DEFINE_SPINLOCK(oom_reaper_lock); > > -void __oom_reap_task_mm(struct mm_struct *mm) > +bool __oom_reap_task_mm(struct mm_struct *mm) > { > struct vm_area_struct *vma; > + bool ret = true; > > /* > * Tell all users of get_user/copy_from_user etc... that the content > @@ -511,12 +512,17 @@ void __oom_reap_task_mm(struct mm_struct *mm) > struct mmu_gather tlb; > > tlb_gather_mmu(&tlb, mm, start, end); > - mmu_notifier_invalidate_range_start(mm, start, end); > + if (mmu_notifier_invalidate_range_start_nonblock(mm, start, end)) { > + ret = false; > + continue; > + } > unmap_page_range(&tlb, vma, start, end, NULL); > mmu_notifier_invalidate_range_end(mm, start, end); > tlb_finish_mmu(&tlb, start, end); > } > } > + > + return ret; > } > > static bool oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm) > @@ -545,18 +551,6 @@ static bool oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm) > goto unlock_oom; > } > > - /* > - * If the mm has invalidate_{start,end}() notifiers that could block, > - * sleep to give the oom victim some more time. > - * TODO: we really want to get rid of this ugly hack and make sure that > - * notifiers cannot block for unbounded amount of time > - */ > - if (mm_has_blockable_invalidate_notifiers(mm)) { > - up_read(&mm->mmap_sem); > - schedule_timeout_idle(HZ); > - goto unlock_oom; > - } > - > /* > * MMF_OOM_SKIP is set by exit_mmap when the OOM reaper can't > * work on the mm anymore. The check for MMF_OOM_SKIP must run > @@ -571,7 +565,12 @@ static bool oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm) > > trace_start_task_reaping(tsk->pid); > > - __oom_reap_task_mm(mm); > + /* failed to reap part of the address space. Try again later */ > + if (!__oom_reap_task_mm(mm)) { > + up_read(&mm->mmap_sem); > + ret = false; > + goto out_unlock; > + } > > pr_info("oom_reaper: reaped process %d (%s), now anon-rss:%lukB, file-rss:%lukB, shmem-rss:%lukB\n", > task_pid_nr(tsk), tsk->comm, > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c > index ada21f47f22b..6f7e709d2944 100644 > --- a/virt/kvm/kvm_main.c > +++ b/virt/kvm/kvm_main.c > @@ -135,7 +135,7 @@ static void kvm_uevent_notify_change(unsigned int type, struct kvm *kvm); > static unsigned long long kvm_createvm_count; > static unsigned long long kvm_active_vms; > > -__weak void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm, > +__weak int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm, > unsigned long start, unsigned long end) > { > } > @@ -354,13 +354,15 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn, > srcu_read_unlock(&kvm->srcu, idx); > } > > -static void kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, > +static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, > struct mm_struct *mm, > unsigned long start, > - unsigned long end) > + unsigned long end, > + bool blockable) > { > struct kvm *kvm = mmu_notifier_to_kvm(mn); > int need_tlb_flush = 0, idx; > + int ret; > > idx = srcu_read_lock(&kvm->srcu); > spin_lock(&kvm->mmu_lock); > @@ -378,9 +380,11 @@ static void kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, > > spin_unlock(&kvm->mmu_lock); > > - kvm_arch_mmu_notifier_invalidate_range(kvm, start, end); > + ret = kvm_arch_mmu_notifier_invalidate_range(kvm, start, end, blockable); > > srcu_read_unlock(&kvm->srcu, idx); > + > + return ret; > } > > static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,