From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from mail-qk1-f197.google.com (mail-qk1-f197.google.com [209.85.222.197])
	by kanga.kvack.org (Postfix) with ESMTP id 935DE8E0001
	for ; Fri, 28 Sep 2018 11:04:20 -0400 (EDT)
Received: by mail-qk1-f197.google.com with SMTP id u195-v6so6295333qka.14
	for ; Fri, 28 Sep 2018 08:04:20 -0700 (PDT)
Received: from mx1.redhat.com (mx1.redhat.com. [209.132.183.28])
	by mx.google.com with ESMTPS id w10-v6si317442qtk.68.2018.09.28.08.04.18
	for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 28 Sep 2018 08:04:18 -0700 (PDT)
From: David Hildenbrand
Subject: [PATCH RFC] mm/memory_hotplug: Introduce memory block types
Date: Fri, 28 Sep 2018 17:03:57 +0200
Message-Id: <20180928150357.12942-1-david@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: owner-linux-mm@kvack.org
List-ID: 
To: linux-mm@kvack.org
Cc: xen-devel@lists.xenproject.org, devel@linuxdriverproject.org,
	linux-acpi@vger.kernel.org, linux-sh@vger.kernel.org,
	linux-s390@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-kernel@vger.kernel.org, linux-ia64@vger.kernel.org,
	David Hildenbrand, Tony Luck, Fenghua Yu, Benjamin Herrenschmidt,
	Paul Mackerras, Michael Ellerman, Martin Schwidefsky, Heiko Carstens,
	Yoshinori Sato, Rich Felker, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	"H. Peter Anvin", "Rafael J. Wysocki", Len Brown, Greg Kroah-Hartman,
	"K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Boris Ostrovsky,
	Juergen Gross, =?UTF-8?q?J=C3=A9r=C3=B4me=20Glisse?=, Andrew Morton,
	Mike Rapoport, Dan Williams, Stephen Rothwell, Michal Hocko,
	"Kirill A. Shutemov", Nicholas Piggin,
	=?UTF-8?q?Jonathan=20Neusch=C3=A4fer?=, Joe Perches, Michael Neuling,
	Mauricio Faria de Oliveira, Balbir Singh, Rashmica Gupta,
	Pavel Tatashin, Rob Herring, Philippe Ombredanne, Kate Stewart,
	"mike.travis@hpe.com", Joonsoo Kim, Oscar Salvador, Mathieu Malaterre

How to/when to online hotplugged memory is hard to manage for
distributions because different memory types are to be treated
differently. Right now, we need complicated udev rules that e.g. check if
we are running on s390x, on a physical system or on a virtualized system.
But there is also sometimes the demand to online memory immediately while
adding it in the kernel, and not to wait for user space to make a
decision. And on virtualized systems there might be different
requirements, depending on "how" the memory was added (and if it will
eventually get unplugged again - DIMM vs. paravirtualized mechanisms).

On the one hand, we have physical systems where we sometimes want to be
able to unplug memory again - e.g. a DIMM - so we have to online it to the
MOVABLE zone optionally. That decision is usually made in user space.

On the other hand, we have memory that should never be onlined
automatically, only when asked for by an administrator. Such memory only
applies to virtualized environments like s390x, where the concept of
"standby" memory exists. Memory is detected and added during boot, so it
can be onlined when requested by the administrator or some tooling.
Only when onlining, memory will be allocated in the hypervisor.

But then, we also have paravirtualized devices (namely xen and hyper-v
balloons) that hotplug memory that will never ever be removed from a
system right now using offline_pages/remove_memory. If at all, this memory
is logically unplugged and handed back to the hypervisor via ballooning.

For paravirtualized devices it is relevant that memory is onlined as
quickly as possible after it has been added - and that it is added to the
NORMAL zone. Otherwise, it could happen that too much memory in a row is
added (but not onlined), resulting in out-of-memory conditions due to the
additional memory for "struct pages" and friends. The MOVABLE zone as well
as delays might be very problematic and lead to crashes (e.g. zone
imbalance).

Therefore, introduce memory block types and online memory depending on
the type when adding the memory. Expose the memory type to user space, so
user space handlers can start to process only "normal" memory. Other
memory block types can be ignored. One thing less to worry about in user
space.

Cc: Tony Luck
Cc: Fenghua Yu
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Martin Schwidefsky
Cc: Heiko Carstens
Cc: Yoshinori Sato
Cc: Rich Felker
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: "H. Peter Anvin"
Cc: "Rafael J. Wysocki"
Cc: Len Brown
Cc: Greg Kroah-Hartman
Cc: "K. Y. Srinivasan"
Cc: Haiyang Zhang
Cc: Stephen Hemminger
Cc: Boris Ostrovsky
Cc: Juergen Gross
Cc: "Jérôme Glisse"
Cc: Andrew Morton
Cc: Mike Rapoport
Cc: Dan Williams
Cc: Stephen Rothwell
Cc: Michal Hocko
Cc: "Kirill A. Shutemov"
Cc: David Hildenbrand
Cc: Nicholas Piggin
Cc: "Jonathan Neuschäfer"
Cc: Joe Perches
Cc: Michael Neuling
Cc: Mauricio Faria de Oliveira
Cc: Balbir Singh
Cc: Rashmica Gupta
Cc: Pavel Tatashin
Cc: Rob Herring
Cc: Philippe Ombredanne
Cc: Kate Stewart
Cc: "mike.travis@hpe.com"
Cc: Joonsoo Kim
Cc: Oscar Salvador
Cc: Mathieu Malaterre
Signed-off-by: David Hildenbrand
---

This patch is based on the current mm-tree, where some related patches
from me that touch the add_memory() functions are currently residing.
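
To illustrate the intended user space side (not part of this patch): with
the per-block "type" attribute exposed below, a hotplug handler - e.g.
invoked from a udev rule on memory block add events - no longer has to
guess based on the architecture or the hypervisor, it can simply skip
everything that is not "normal". A rough sketch, assuming the usual
/sys/devices/system/memory/memoryN/ layout; the helper itself and the
online_movable policy are only an example:

/* example_memory_online.c - hypothetical user space helper, illustration only */
#include <stdio.h>
#include <string.h>

/* read a single sysfs attribute of a memory block, e.g. "type" or "state" */
static int read_attr(const char *block, const char *attr, char *buf, size_t len)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/devices/system/memory/%s/%s",
		 block, attr);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (!fgets(buf, (int)len, f)) {
		fclose(f);
		return -1;
	}
	fclose(f);
	buf[strcspn(buf, "\n")] = '\0';
	return 0;
}

int main(int argc, char **argv)
{
	char type[32], state[32], path[256];
	FILE *f;

	if (argc != 2) {
		fprintf(stderr, "usage: %s memoryN\n", argv[0]);
		return 1;
	}
	if (read_attr(argv[1], "type", type, sizeof(type)) ||
	    read_attr(argv[1], "state", state, sizeof(state)))
		return 1;

	/* standby and paravirt blocks are no longer user space's business */
	if (strcmp(type, "normal") != 0 || strcmp(state, "offline") != 0)
		return 0;

	snprintf(path, sizeof(path), "/sys/devices/system/memory/%s/state",
		 argv[1]);
	f = fopen(path, "w");
	if (!f)
		return 1;
	/* example policy: keep hotplugged DIMMs unpluggable again */
	fputs("online_movable", f);
	fclose(f);
	return 0;
}

A matching udev rule would only have to pass the memory block's name to
such a helper; standby and paravirt blocks simply fall through untouched.
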
 arch/ia64/mm/init.c                       |  4 +-
 arch/powerpc/mm/mem.c                     |  4 +-
 arch/powerpc/platforms/powernv/memtrace.c |  3 +-
 arch/s390/mm/init.c                       |  4 +-
 arch/sh/mm/init.c                         |  4 +-
 arch/x86/mm/init_32.c                     |  4 +-
 arch/x86/mm/init_64.c                     |  8 +--
 drivers/acpi/acpi_memhotplug.c            |  3 +-
 drivers/base/memory.c                     | 63 ++++++++++++++++++++---
 drivers/hv/hv_balloon.c                   | 33 ++----------
 drivers/s390/char/sclp_cmd.c              |  3 +-
 drivers/xen/balloon.c                     |  2 +-
 include/linux/memory.h                    | 28 +++++++++-
 include/linux/memory_hotplug.h            | 17 +++---
 mm/hmm.c                                  |  6 ++-
 mm/memory_hotplug.c                       | 31 ++++++-----
 16 files changed, 139 insertions(+), 78 deletions(-)

diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index d5e12ff1d73c..813d1d86bf95 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -646,13 +646,13 @@ mem_init (void)
 
 #ifdef CONFIG_MEMORY_HOTPLUG
 int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
-		bool want_memblock)
+		    int memory_block_type)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
 	int ret;
 
-	ret = __add_pages(nid, start_pfn, nr_pages, altmap, want_memblock);
+	ret = __add_pages(nid, start_pfn, nr_pages, altmap, memory_block_type);
 	if (ret)
 		printk("%s: Problem encountered in __add_pages() as ret=%d\n",
 		       __func__,  ret);
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 5551f5870dcc..dd32fcc9099c 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -118,7 +118,7 @@ int __weak remove_section_mapping(unsigned long start, unsigned long end)
 }
 
 int __meminit arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
-		bool want_memblock)
+			      int memory_block_type)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
@@ -135,7 +135,7 @@ int __meminit arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *
 	}
 	flush_inval_dcache_range(start, start + size);
 
-	return __add_pages(nid, start_pfn, nr_pages, altmap, want_memblock);
+	return __add_pages(nid, start_pfn, nr_pages, altmap, memory_block_type);
 }
 
 #ifdef CONFIG_MEMORY_HOTREMOVE
diff --git a/arch/powerpc/platforms/powernv/memtrace.c b/arch/powerpc/platforms/powernv/memtrace.c
index 84d038ed3882..57d6b3d46382 100644
--- a/arch/powerpc/platforms/powernv/memtrace.c
+++ b/arch/powerpc/platforms/powernv/memtrace.c
@@ -232,7 +232,8 @@ static int memtrace_online(void)
 			ent->mem = 0;
 		}
 
-		if (add_memory(ent->nid, ent->start, ent->size)) {
+		if (add_memory(ent->nid, ent->start, ent->size,
+			       MEMORY_BLOCK_NORMAL)) {
 			pr_err("Failed to add trace memory to node %d\n",
 				ent->nid);
 			ret += 1;
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index e472cd763eb3..b5324527c7f6 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -222,7 +222,7 @@ device_initcall(s390_cma_mem_init);
 #endif /* CONFIG_CMA */
 
 int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
-		bool want_memblock)
+		    int memory_block_type)
 {
 	unsigned long start_pfn = PFN_DOWN(start);
 	unsigned long size_pages = PFN_DOWN(size);
@@ -232,7 +232,7 @@ int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
 	if (rc)
 		return rc;
 
-	rc = __add_pages(nid, start_pfn, size_pages, altmap, want_memblock);
+	rc = __add_pages(nid, start_pfn, size_pages, altmap, memory_block_type);
 	if (rc)
 		vmem_remove_mapping(start, size);
 	return rc;
diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c
index c8c13c777162..6b876000731a 100644
--- a/arch/sh/mm/init.c
+++ b/arch/sh/mm/init.c
@@ -419,14 +419,14 @@ void free_initrd_mem(unsigned long start, unsigned long end)
 
 #ifdef CONFIG_MEMORY_HOTPLUG
 int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
-		bool want_memblock)
+		    int memory_block_type)
 {
 	unsigned long start_pfn = PFN_DOWN(start);
 	unsigned long nr_pages = size >> PAGE_SHIFT;
 	int ret;
 
 	/* We only have ZONE_NORMAL, so this is easy.. */
-	ret = __add_pages(nid, start_pfn, nr_pages, altmap, want_memblock);
+	ret = __add_pages(nid, start_pfn, nr_pages, altmap, memory_block_type);
 	if (unlikely(ret))
 		printk("%s: Failed, __add_pages() == %d\n", __func__, ret);
 
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index f2837e4c40b3..4f50cd4467a9 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -851,12 +851,12 @@ void __init mem_init(void)
 
 #ifdef CONFIG_MEMORY_HOTPLUG
 int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
-		bool want_memblock)
+		    int memory_block_type)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
 
-	return __add_pages(nid, start_pfn, nr_pages, altmap, want_memblock);
+	return __add_pages(nid, start_pfn, nr_pages, altmap, memory_block_type);
 }
 
 #ifdef CONFIG_MEMORY_HOTREMOVE
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 5fab264948c2..fc3df573f0f3 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -783,11 +783,11 @@ static void update_end_of_memory_vars(u64 start, u64 size)
 }
 
 int add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages,
-		struct vmem_altmap *altmap, bool want_memblock)
+		struct vmem_altmap *altmap, int memory_block_type)
 {
 	int ret;
 
-	ret = __add_pages(nid, start_pfn, nr_pages, altmap, want_memblock);
+	ret = __add_pages(nid, start_pfn, nr_pages, altmap, memory_block_type);
 	WARN_ON_ONCE(ret);
 
 	/* update max_pfn, max_low_pfn and high_memory */
@@ -798,14 +798,14 @@ int add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages,
 }
 
 int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
-		bool want_memblock)
+		    int memory_block_type)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
 
 	init_memory_mapping(start, start + size);
 
-	return add_pages(nid, start_pfn, nr_pages, altmap, want_memblock);
+	return add_pages(nid, start_pfn, nr_pages, altmap, memory_block_type);
 }
 
 #define PAGE_INUSE 0xFD
diff --git a/drivers/acpi/acpi_memhotplug.c b/drivers/acpi/acpi_memhotplug.c
index 8fe0960ea572..c5f646b4e97e 100644
--- a/drivers/acpi/acpi_memhotplug.c
+++ b/drivers/acpi/acpi_memhotplug.c
@@ -228,7 +228,8 @@ static int acpi_memory_enable_device(struct acpi_memory_device *mem_device)
 		if (node < 0)
 			node = memory_add_physaddr_to_nid(info->start_addr);
 
-		result = __add_memory(node, info->start_addr, info->length);
+		result = __add_memory(node, info->start_addr, info->length,
+				      MEMORY_BLOCK_NORMAL);
 
 		/*
 		 * If the memory block has been used by the kernel, add_memory()
diff --git a/drivers/base/memory.c b/drivers/base/memory.c
index 0e5985682642..2686101e41b5 100644
--- a/drivers/base/memory.c
+++ b/drivers/base/memory.c
@@ -381,6 +381,32 @@ static ssize_t show_phys_device(struct device *dev,
 	return sprintf(buf, "%d\n", mem->phys_device);
 }
 
+static ssize_t type_show(struct device *dev, struct device_attribute *attr,
+			 char *buf)
+{
+	struct memory_block *mem = to_memory_block(dev);
+	ssize_t len = 0;
+
+	switch (mem->type) {
+	case MEMORY_BLOCK_NORMAL:
+		len = sprintf(buf, "normal\n");
+		break;
+	case MEMORY_BLOCK_STANDBY:
+		len = sprintf(buf, "standby\n");
+		break;
+	case MEMORY_BLOCK_PARAVIRT:
+		len = sprintf(buf, "paravirt\n");
= sprintf(buf, "paravirt\n"); + break; + default: + len = sprintf(buf, "ERROR-UNKNOWN-%ld\n", + mem->state); + WARN_ON(1); + break; + } + + return len; +} + #ifdef CONFIG_MEMORY_HOTREMOVE static void print_allowed_zone(char *buf, int nid, unsigned long start_pfn, unsigned long nr_pages, int online_type, @@ -442,6 +468,7 @@ static DEVICE_ATTR(phys_index, 0444, show_mem_start_phys_index, NULL); static DEVICE_ATTR(state, 0644, show_mem_state, store_mem_state); static DEVICE_ATTR(phys_device, 0444, show_phys_device, NULL); static DEVICE_ATTR(removable, 0444, show_mem_removable, NULL); +static DEVICE_ATTR_RO(type); /* * Block size attribute stuff @@ -514,7 +541,8 @@ memory_probe_store(struct device *dev, struct device_attribute *attr, nid = memory_add_physaddr_to_nid(phys_addr); ret = __add_memory(nid, phys_addr, - MIN_MEMORY_BLOCK_SIZE * sections_per_block); + MIN_MEMORY_BLOCK_SIZE * sections_per_block, + MEMORY_BLOCK_NORMAL); if (ret) goto out; @@ -620,6 +648,7 @@ static struct attribute *memory_memblk_attrs[] = { &dev_attr_state.attr, &dev_attr_phys_device.attr, &dev_attr_removable.attr, + &dev_attr_type.attr, #ifdef CONFIG_MEMORY_HOTREMOVE &dev_attr_valid_zones.attr, #endif @@ -657,13 +686,17 @@ int register_memory(struct memory_block *memory) } static int init_memory_block(struct memory_block **memory, - struct mem_section *section, unsigned long state) + struct mem_section *section, unsigned long state, + int memory_block_type) { struct memory_block *mem; unsigned long start_pfn; int scn_nr; int ret = 0; + if (memory_block_type == MEMORY_BLOCK_NONE) + return -EINVAL; + mem = kzalloc(sizeof(*mem), GFP_KERNEL); if (!mem) return -ENOMEM; @@ -675,6 +708,7 @@ static int init_memory_block(struct memory_block **memory, mem->state = state; start_pfn = section_nr_to_pfn(mem->start_section_nr); mem->phys_device = arch_get_memory_phys_device(start_pfn); + mem->type = memory_block_type; ret = register_memory(mem); @@ -699,7 +733,8 @@ static int add_memory_block(int base_section_nr) if (section_count == 0) return 0; - ret = init_memory_block(&mem, __nr_to_section(section_nr), MEM_ONLINE); + ret = init_memory_block(&mem, __nr_to_section(section_nr), MEM_ONLINE, + MEMORY_BLOCK_NORMAL); if (ret) return ret; mem->section_count = section_count; @@ -710,19 +745,35 @@ static int add_memory_block(int base_section_nr) * need an interface for the VM to add new memory regions, * but without onlining it. */ -int hotplug_memory_register(int nid, struct mem_section *section) +int hotplug_memory_register(int nid, struct mem_section *section, + int memory_block_type) { int ret = 0; struct memory_block *mem; mutex_lock(&mem_sysfs_mutex); + /* make sure there is no memblock if we don't want one */ + if (memory_block_type == MEMORY_BLOCK_NONE) { + mem = find_memory_block(section); + if (mem) { + put_device(&mem->dev); + ret = -EINVAL; + } + goto out; + } + mem = find_memory_block(section); if (mem) { - mem->section_count++; + /* make sure the type matches */ + if (mem->type == memory_block_type) + mem->section_count++; + else + ret = -EINVAL; put_device(&mem->dev); } else { - ret = init_memory_block(&mem, section, MEM_OFFLINE); + ret = init_memory_block(&mem, section, MEM_OFFLINE, + memory_block_type); if (ret) goto out; mem->section_count++; diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c index b1b788082793..5a8d18c4d699 100644 --- a/drivers/hv/hv_balloon.c +++ b/drivers/hv/hv_balloon.c @@ -537,11 +537,6 @@ struct hv_dynmem_device { */ bool host_specified_ha_region; - /* - * State to synchronize hot-add. 
-	 */
-	struct completion  ol_waitevent;
-	bool ha_waiting;
 	/*
 	 * This thread handles hot-add
 	 * requests from the host as well as notifying
@@ -640,14 +635,6 @@ static int hv_memory_notifier(struct notifier_block *nb, unsigned long val,
 	unsigned long flags, pfn_count;
 
 	switch (val) {
-	case MEM_ONLINE:
-	case MEM_CANCEL_ONLINE:
-		if (dm_device.ha_waiting) {
-			dm_device.ha_waiting = false;
-			complete(&dm_device.ol_waitevent);
-		}
-		break;
-
 	case MEM_OFFLINE:
 		spin_lock_irqsave(&dm_device.ha_lock, flags);
 		pfn_count = hv_page_offline_check(mem->start_pfn,
@@ -665,9 +652,7 @@ static int hv_memory_notifier(struct notifier_block *nb, unsigned long val,
 		}
 		spin_unlock_irqrestore(&dm_device.ha_lock, flags);
 		break;
-	case MEM_GOING_ONLINE:
-	case MEM_GOING_OFFLINE:
-	case MEM_CANCEL_OFFLINE:
+	default:
 		break;
 	}
 	return NOTIFY_OK;
@@ -731,12 +716,10 @@ static void hv_mem_hot_add(unsigned long start, unsigned long size,
 		has->covered_end_pfn +=  processed_pfn;
 		spin_unlock_irqrestore(&dm_device.ha_lock, flags);
 
-		init_completion(&dm_device.ol_waitevent);
-		dm_device.ha_waiting = !memhp_auto_online;
-
 		nid = memory_add_physaddr_to_nid(PFN_PHYS(start_pfn));
 		ret = add_memory(nid, PFN_PHYS((start_pfn)),
-				(HA_CHUNK << PAGE_SHIFT));
+				 (HA_CHUNK << PAGE_SHIFT),
+				 MEMORY_BLOCK_PARAVIRT);
 
 		if (ret) {
 			pr_err("hot_add memory failed error is %d\n", ret);
@@ -757,16 +740,6 @@ static void hv_mem_hot_add(unsigned long start, unsigned long size,
 			break;
 		}
 
-		/*
-		 * Wait for the memory block to be onlined when memory onlining
-		 * is done outside of kernel (memhp_auto_online). Since the hot
-		 * add has succeeded, it is ok to proceed even if the pages in
-		 * the hot added region have not been "onlined" within the
-		 * allowed time.
-		 */
-		if (dm_device.ha_waiting)
-			wait_for_completion_timeout(&dm_device.ol_waitevent,
-						    5*HZ);
 		post_status(&dm_device);
 	}
 }
diff --git a/drivers/s390/char/sclp_cmd.c b/drivers/s390/char/sclp_cmd.c
index d7686a68c093..1928a2411456 100644
--- a/drivers/s390/char/sclp_cmd.c
+++ b/drivers/s390/char/sclp_cmd.c
@@ -406,7 +406,8 @@ static void __init add_memory_merged(u16 rn)
 	if (!size)
 		goto skip_add;
 	for (addr = start; addr < start + size; addr += block_size)
-		add_memory(numa_pfn_to_nid(PFN_DOWN(addr)), addr, block_size);
+		add_memory(numa_pfn_to_nid(PFN_DOWN(addr)), addr, block_size,
+			   MEMORY_BLOCK_STANDBY);
 skip_add:
 	first_rn = rn;
 	num = 1;
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index fdfc64f5acea..291a8aac6af3 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -397,7 +397,7 @@ static enum bp_state reserve_additional_memory(void)
 	mutex_unlock(&balloon_mutex);
 	/* add_memory_resource() requires the device_hotplug lock */
 	lock_device_hotplug();
-	rc = add_memory_resource(nid, resource, memhp_auto_online);
+	rc = add_memory_resource(nid, resource, MEMORY_BLOCK_PARAVIRT);
 	unlock_device_hotplug();
 	mutex_lock(&balloon_mutex);
 
diff --git a/include/linux/memory.h b/include/linux/memory.h
index a6ddefc60517..3dc2a0b12653 100644
--- a/include/linux/memory.h
+++ b/include/linux/memory.h
@@ -23,6 +23,30 @@
 
 #define MIN_MEMORY_BLOCK_SIZE     (1UL << SECTION_SIZE_BITS)
 
+/*
+ * NONE:     No memory block is to be created (e.g. device memory).
+ * NORMAL:   Memory block that represents normal (boot or hotplugged) memory
+ *           (e.g. ACPI DIMMs) that should be onlined either automatically
+ *           (memhp_auto_online) or manually by user space to select a
+ *           specific zone.
+ *           Applicable to memhp_auto_online.
+ * STANDBY: Memory block that represents standby memory that should only + * be onlined on demand by user space (e.g. standby memory on + * s390x), but never automatically by the kernel. + * Not applicable to memhp_auto_online. + * PARAVIRT: Memory block that represents memory added by + * paravirtualized mechanisms (e.g. hyper-v, xen) that will + * always automatically get onlined. Memory will be unplugged + * using ballooning, not by relying on the MOVABLE ZONE. + * Not applicable to memhp_auto_online. + */ +enum { + MEMORY_BLOCK_NONE, + MEMORY_BLOCK_NORMAL, + MEMORY_BLOCK_STANDBY, + MEMORY_BLOCK_PARAVIRT, +}; + struct memory_block { unsigned long start_section_nr; unsigned long end_section_nr; @@ -34,6 +58,7 @@ struct memory_block { int (*phys_callback)(struct memory_block *); struct device dev; int nid; /* NID for this memory block */ + int type; /* type of this memory block */ }; int arch_get_memory_phys_device(unsigned long start_pfn); @@ -111,7 +136,8 @@ extern int register_memory_notifier(struct notifier_block *nb); extern void unregister_memory_notifier(struct notifier_block *nb); extern int register_memory_isolate_notifier(struct notifier_block *nb); extern void unregister_memory_isolate_notifier(struct notifier_block *nb); -int hotplug_memory_register(int nid, struct mem_section *section); +int hotplug_memory_register(int nid, struct mem_section *section, + int memory_block_type); #ifdef CONFIG_MEMORY_HOTREMOVE extern int unregister_memory_section(struct mem_section *); #endif diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h index ffd9cd10fcf3..b560a9ee0e8c 100644 --- a/include/linux/memory_hotplug.h +++ b/include/linux/memory_hotplug.h @@ -115,18 +115,18 @@ extern int __remove_pages(struct zone *zone, unsigned long start_pfn, /* reasonably generic interface to expand the physical pages */ extern int __add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages, - struct vmem_altmap *altmap, bool want_memblock); + struct vmem_altmap *altmap, int memory_block_type); #ifndef CONFIG_ARCH_HAS_ADD_PAGES static inline int add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages, struct vmem_altmap *altmap, - bool want_memblock) + int memory_block_type) { - return __add_pages(nid, start_pfn, nr_pages, altmap, want_memblock); + return __add_pages(nid, start_pfn, nr_pages, altmap, memory_block_type); } #else /* ARCH_HAS_ADD_PAGES */ int add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages, - struct vmem_altmap *altmap, bool want_memblock); + struct vmem_altmap *altmap, int memory_block_type); #endif /* ARCH_HAS_ADD_PAGES */ #ifdef CONFIG_NUMA @@ -324,11 +324,12 @@ static inline void __remove_memory(int nid, u64 start, u64 size) {} extern void __ref free_area_init_core_hotplug(int nid); extern int walk_memory_range(unsigned long start_pfn, unsigned long end_pfn, void *arg, int (*func)(struct memory_block *, void *)); -extern int __add_memory(int nid, u64 start, u64 size); -extern int add_memory(int nid, u64 start, u64 size); -extern int add_memory_resource(int nid, struct resource *resource, bool online); +extern int __add_memory(int nid, u64 start, u64 size, int memory_block_type); +extern int add_memory(int nid, u64 start, u64 size, int memory_block_type); +extern int add_memory_resource(int nid, struct resource *resource, + int memory_block_type); extern int arch_add_memory(int nid, u64 start, u64 size, - struct vmem_altmap *altmap, bool want_memblock); + struct vmem_altmap *altmap, int memory_block_type); extern void 
move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn, unsigned long nr_pages, struct vmem_altmap *altmap); extern int offline_pages(unsigned long start_pfn, unsigned long nr_pages); diff --git a/mm/hmm.c b/mm/hmm.c index c968e49f7a0c..2350f6f6ab42 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -32,6 +32,7 @@ #include #include #include +#include #define PA_SECTION_SIZE (1UL << PA_SECTION_SHIFT) @@ -1096,10 +1097,11 @@ static int hmm_devmem_pages_create(struct hmm_devmem *devmem) */ if (devmem->pagemap.type == MEMORY_DEVICE_PUBLIC) ret = arch_add_memory(nid, align_start, align_size, NULL, - false); + MEMORY_BLOCK_NONE); else ret = add_pages(nid, align_start >> PAGE_SHIFT, - align_size >> PAGE_SHIFT, NULL, false); + align_size >> PAGE_SHIFT, NULL, + MEMORY_BLOCK_NONE); if (ret) { mem_hotplug_done(); goto error_add_memory; diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index d4c7e42e46f3..bce6c41d721c 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -246,7 +246,7 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat) #endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */ static int __meminit __add_section(int nid, unsigned long phys_start_pfn, - struct vmem_altmap *altmap, bool want_memblock) + struct vmem_altmap *altmap, int memory_block_type) { int ret; @@ -257,10 +257,11 @@ static int __meminit __add_section(int nid, unsigned long phys_start_pfn, if (ret < 0) return ret; - if (!want_memblock) + if (memory_block_type == MEMORY_BLOCK_NONE) return 0; - return hotplug_memory_register(nid, __pfn_to_section(phys_start_pfn)); + return hotplug_memory_register(nid, __pfn_to_section(phys_start_pfn), + memory_block_type); } /* @@ -271,7 +272,7 @@ static int __meminit __add_section(int nid, unsigned long phys_start_pfn, */ int __ref __add_pages(int nid, unsigned long phys_start_pfn, unsigned long nr_pages, struct vmem_altmap *altmap, - bool want_memblock) + int memory_block_type) { unsigned long i; int err = 0; @@ -296,7 +297,7 @@ int __ref __add_pages(int nid, unsigned long phys_start_pfn, for (i = start_sec; i <= end_sec; i++) { err = __add_section(nid, section_nr_to_pfn(i), altmap, - want_memblock); + memory_block_type); /* * EEXIST is finally dealt with by ioresource collision @@ -1099,7 +1100,8 @@ static int online_memory_block(struct memory_block *mem, void *arg) * * we are OK calling __meminit stuff here - we have CONFIG_MEMORY_HOTPLUG */ -int __ref add_memory_resource(int nid, struct resource *res, bool online) +int __ref add_memory_resource(int nid, struct resource *res, + int memory_block_type) { u64 start, size; bool new_node = false; @@ -1108,6 +1110,9 @@ int __ref add_memory_resource(int nid, struct resource *res, bool online) start = res->start; size = resource_size(res); + if (memory_block_type == MEMORY_BLOCK_NONE) + return -EINVAL; + ret = check_hotplug_memory_range(start, size); if (ret) return ret; @@ -1128,7 +1133,7 @@ int __ref add_memory_resource(int nid, struct resource *res, bool online) new_node = ret; /* call arch's memory hotadd */ - ret = arch_add_memory(nid, start, size, NULL, true); + ret = arch_add_memory(nid, start, size, NULL, memory_block_type); if (ret < 0) goto error; @@ -1153,8 +1158,8 @@ int __ref add_memory_resource(int nid, struct resource *res, bool online) /* device_online() will take the lock when calling online_pages() */ mem_hotplug_done(); - /* online pages if requested */ - if (online) + if (memory_block_type == MEMORY_BLOCK_PARAVIRT || + (memory_block_type == MEMORY_BLOCK_NORMAL && memhp_auto_online)) 
walk_memory_range(PFN_DOWN(start), PFN_UP(start + size - 1), NULL, online_memory_block); @@ -1169,7 +1174,7 @@ int __ref add_memory_resource(int nid, struct resource *res, bool online) } /* requires device_hotplug_lock, see add_memory_resource() */ -int __ref __add_memory(int nid, u64 start, u64 size) +int __ref __add_memory(int nid, u64 start, u64 size, int memory_block_type) { struct resource *res; int ret; @@ -1178,18 +1183,18 @@ int __ref __add_memory(int nid, u64 start, u64 size) if (IS_ERR(res)) return PTR_ERR(res); - ret = add_memory_resource(nid, res, memhp_auto_online); + ret = add_memory_resource(nid, res, memory_block_type); if (ret < 0) release_memory_resource(res); return ret; } -int add_memory(int nid, u64 start, u64 size) +int add_memory(int nid, u64 start, u64 size, int memory_block_type) { int rc; lock_device_hotplug(); - rc = __add_memory(nid, start, size); + rc = __add_memory(nid, start, size, memory_block_type); unlock_device_hotplug(); return rc; -- 2.17.1
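A rough user-space sketch of the policy this enables (assuming the /sys/devices/system/memory/memoryN/ layout, the "type" strings emitted by type_show() above and the existing "state" attribute): online only "normal" blocks and simply skip "standby" and "paravirt" ones.

/*
 * Sketch only: walk the memory block devices, read the new per-block
 * "type" attribute and online the "normal" blocks via the existing
 * "state" attribute. Error handling is kept minimal; writes to blocks
 * that are already online simply fail and are ignored.
 */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	char path[512], type[32];
	struct dirent *de;
	FILE *f;
	DIR *dir = opendir("/sys/devices/system/memory");

	if (!dir)
		return 1;

	while ((de = readdir(dir)) != NULL) {
		if (strncmp(de->d_name, "memory", 6) != 0)
			continue;

		/* read the new "type" attribute: normal/standby/paravirt */
		snprintf(path, sizeof(path),
			 "/sys/devices/system/memory/%s/type", de->d_name);
		f = fopen(path, "r");
		if (!f)
			continue;
		if (!fgets(type, sizeof(type), f)) {
			fclose(f);
			continue;
		}
		fclose(f);

		/* standby/paravirt blocks are left to the kernel/admin */
		if (strncmp(type, "normal", 6) != 0)
			continue;

		/* request onlining; the kernel picks the zone */
		snprintf(path, sizeof(path),
			 "/sys/devices/system/memory/%s/state", de->d_name);
		f = fopen(path, "w");
		if (f) {
			fputs("online", f);
			fclose(f);
		}
	}
	closedir(dir);
	return 0;
}

With the type exposed like this, a udev rule no longer has to guess from the platform (s390x vs. physical vs. virtualized); it can simply match on the attribute value.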