From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.9 required=3.0 tests=DKIM_SIGNED,DKIM_VALID, DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8858DC4321A for ; Tue, 11 Jun 2019 01:56:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 153632086D for ; Tue, 11 Jun 2019 01:56:35 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=garyguo.net header.i=@garyguo.net header.b="AbpUSdv2" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2390763AbfFKB4e (ORCPT ); Mon, 10 Jun 2019 21:56:34 -0400 Received: from mail-eopbgr100138.outbound.protection.outlook.com ([40.107.10.138]:9856 "EHLO GBR01-LO2-obe.outbound.protection.outlook.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726532AbfFKB4d (ORCPT ); Mon, 10 Jun 2019 21:56:33 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=garyguo.net; s=selector1; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=VPCibjSLFJ8QPugcKS3wvBhWTo2k1dG0NzaEWarL6Bg=; b=AbpUSdv2Vc6x2t0TVkfgwMKUAa7ljFJU5fC5OG3+ldeABsga6+9OJjxE9IoAG+tmbFUT3ZBLMtCgsfiPBrE1MZeilVnmggqrJtgSA2INvNZhO2yayPeQao+ea6prXzplnDlDAB27qKFGwa34pWf4fLQjEa88KFcrcUGkUkf43jQ= Received: from LO2P265MB0847.GBRP265.PROD.OUTLOOK.COM (20.176.139.20) by LO2P265MB0464.GBRP265.PROD.OUTLOOK.COM (10.166.98.138) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.1965.12; Tue, 11 Jun 2019 01:56:26 +0000 Received: from LO2P265MB0847.GBRP265.PROD.OUTLOOK.COM ([fe80::c4af:389:2951:fdd1]) by LO2P265MB0847.GBRP265.PROD.OUTLOOK.COM ([fe80::c4af:389:2951:fdd1%7]) with mapi id 15.20.1965.017; Tue, 11 Jun 2019 01:56:26 +0000 From: Gary Guo To: Palmer Dabbelt , "julien.grall@arm.com" CC: "linux-kernel@vger.kernel.org" , "linux-arm-kernel@lists.infradead.org" , "kvmarm@lists.cs.columbia.edu" , "aou@eecs.berkeley.edu" , Atish Patra , Christoph Hellwig , Paul Walmsley , "rppt@linux.ibm.com" , "linux-riscv@lists.infradead.org" , Anup Patel , "christoffer.dall@arm.com" , "james.morse@arm.com" , "marc.zyngier@arm.com" , "julien.thierry@arm.com" , "suzuki.poulose@arm.com" , "catalin.marinas@arm.com" , Will Deacon Subject: RE: [PATCH RFC 11/14] arm64: Move the ASID allocator code in a separate file Thread-Topic: [PATCH RFC 11/14] arm64: Move the ASID allocator code in a separate file Thread-Index: AQHVG7+VzWoWPgdUJU+rwDAjKiBFD6aNhr6AgAgynaA= Date: Tue, 11 Jun 2019 01:56:26 +0000 Message-ID: References: <0dfe120b-066a-2ac8-13bc-3f5a29e2caa3@arm.com> In-Reply-To: Accept-Language: en-GB, en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: authentication-results: spf=none (sender IP is ) smtp.mailfrom=gary@garyguo.net; x-originating-ip: [2001:470:6972:501:2013:f57c:b021:47b0] x-ms-publictraffictype: Email x-ms-office365-filtering-correlation-id: 306b3dd3-aa4f-4043-5dc4-08d6ee100371 x-microsoft-antispam: 
BCL:0;PCL:0;RULEID:(2390118)(7020095)(4652040)(7021145)(8989299)(4534185)(7022145)(4603075)(4627221)(201702281549075)(8990200)(7048125)(7024125)(7027125)(7023125)(5600148)(711020)(4605104)(1401327)(2017052603328)(7193020);SRVR:LO2P265MB0464; x-ms-traffictypediagnostic: LO2P265MB0464: x-microsoft-antispam-prvs: x-ms-oob-tlc-oobclassifiers: OLM:3173; x-forefront-prvs: 006546F32A x-forefront-antispam-report: SFV:NSPM;SFS:(10019020)(346002)(39830400003)(376002)(366004)(396003)(136003)(13464003)(189003)(199004)(52536014)(8936002)(6436002)(229853002)(7416002)(25786009)(6116002)(76116006)(66556008)(55016002)(64756008)(81166006)(81156014)(8676002)(316002)(66476007)(4326008)(66946007)(73956011)(6246003)(5660300002)(86362001)(71200400001)(102836004)(71190400001)(2906002)(53936002)(54906003)(110136005)(74316002)(14454004)(476003)(66446008)(305945005)(99286004)(14444005)(11346002)(53946003)(9686003)(256004)(7696005)(2501003)(446003)(7736002)(33656002)(186003)(30864003)(508600001)(486006)(46003)(68736007)(53546011)(6506007)(76176011)(87944003)(579004);DIR:OUT;SFP:1102;SCL:1;SRVR:LO2P265MB0464;H:LO2P265MB0847.GBRP265.PROD.OUTLOOK.COM;FPR:;SPF:None;LANG:en;PTR:InfoNoRecords;MX:1;A:1; received-spf: None (protection.outlook.com: garyguo.net does not designate permitted sender hosts) x-ms-exchange-senderadcheck: 1 x-microsoft-antispam-message-info: K7LNFpzKy/fXGiteJSxdXanaSdvXFzCqXFX33QCp8PWzJGaNT/gxozwKVasYw4vV9ZJWiWJP87YqlUjzvdhhp3aG7CQIT4yPYlD9Q6IK8TnGJfm1X0aIhsvEnZB0H7Dn7+y86jslPonxeTAjuhiGmUfgj5uOuLqW7dCH+IRFBiB6xw88lccczXawh5s2oOGP/j1U40fHHMJtRKnS6PM0+M5d0kvspixzsyNMmDXuX3cUU5mWXoPg7/EwupyOv5wcsnsBcpr2D/zTpboQVmnEHWV5f3oqHc0WRRoYg9psAMqzBPWBOs2iaVYQhGjcNlcWLQw81nIXtaFmpicr6NxuE2rm0NYaaH6AUcWI/6RjFpGlP9CPhjMen+I8KcwAH9wTJc0MK6W7u2ak57Qxoa+7Hj0gXDpg7AZmQ24j8R7SFS0= Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: base64 MIME-Version: 1.0 X-OriginatorOrg: garyguo.net X-MS-Exchange-CrossTenant-Network-Message-Id: 306b3dd3-aa4f-4043-5dc4-08d6ee100371 X-MS-Exchange-CrossTenant-originalarrivaltime: 11 Jun 2019 01:56:26.3223 (UTC) X-MS-Exchange-CrossTenant-fromentityheader: Hosted X-MS-Exchange-CrossTenant-id: bbc898ad-b10f-4e10-8552-d9377b823d45 X-MS-Exchange-CrossTenant-mailboxtype: HOSTED X-MS-Exchange-CrossTenant-userprincipalname: gary@garyguo.net X-MS-Exchange-Transport-CrossTenantHeadersStamped: LO2P265MB0464 Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org SGksDQoNCk9uIFJJU0MtViwgd2UgY2FuIG9ubHkgdXNlIEFTSUQgaWYgdGhlcmUgYXJlIG1vcmUg QVNJRHMgdGhhbiBDUFVzLiBJZiB0aGVyZSBhcmVuJ3QgZW5vdWdoIEFTSURzIChvciBpZiB0aGVy ZSBpcyBvbmx5IDEpLCB0aGVuIEFTSUQgZmVhdHVyZSBpcyBkaXNhYmxlZCBhbmQgMCBpcyB1c2Vk IGV2ZXJ5d2hlcmUuDQoNCkJlc3QsDQpHYXJ5DQoNCj4gLS0tLS1PcmlnaW5hbCBNZXNzYWdlLS0t LS0NCj4gRnJvbTogUGFsbWVyIERhYmJlbHQgPHBhbG1lckBzaWZpdmUuY29tPg0KPiBTZW50OiBX ZWRuZXNkYXksIEp1bmUgNSwgMjAxOSAyMTo0Mg0KPiBUbzoganVsaWVuLmdyYWxsQGFybS5jb20N Cj4gQ2M6IGxpbnV4LWtlcm5lbEB2Z2VyLmtlcm5lbC5vcmc7IGxpbnV4LWFybS1rZXJuZWxAbGlz dHMuaW5mcmFkZWFkLm9yZzsNCj4ga3ZtYXJtQGxpc3RzLmNzLmNvbHVtYmlhLmVkdTsgYW91QGVl Y3MuYmVya2VsZXkuZWR1OyBHYXJ5IEd1bw0KPiA8Z2FyeUBnYXJ5Z3VvLm5ldD47IEF0aXNoIFBh dHJhIDxBdGlzaC5QYXRyYUB3ZGMuY29tPjsgQ2hyaXN0b3BoIEhlbGx3aWcNCj4gPGhjaEBpbmZy YWRlYWQub3JnPjsgUGF1bCBXYWxtc2xleSA8cGF1bC53YWxtc2xleUBzaWZpdmUuY29tPjsNCj4g cnBwdEBsaW51eC5pYm0uY29tOyBsaW51eC1yaXNjdkBsaXN0cy5pbmZyYWRlYWQub3JnOyBBbnVw IFBhdGVsDQo+IDxBbnVwLlBhdGVsQHdkYy5jb20+OyBjaHJpc3RvZmZlci5kYWxsQGFybS5jb207 IGphbWVzLm1vcnNlQGFybS5jb207DQo+IG1hcmMuenluZ2llckBhcm0uY29tOyBqdWxpZW4udGhp 
ZXJyeUBhcm0uY29tOyBzdXp1a2kucG91bG9zZUBhcm0uY29tOw0KPiBjYXRhbGluLm1hcmluYXNA YXJtLmNvbTsgV2lsbCBEZWFjb24gPHdpbGwuZGVhY29uQGFybS5jb20+DQo+IFN1YmplY3Q6IFJl OiBbUEFUQ0ggUkZDIDExLzE0XSBhcm02NDogTW92ZSB0aGUgQVNJRCBhbGxvY2F0b3IgY29kZSBp biBhDQo+IHNlcGFyYXRlIGZpbGUNCj4gDQo+IE9uIFdlZCwgMDUgSnVuIDIwMTkgMDk6NTY6MDMg UERUICgtMDcwMCksIGp1bGllbi5ncmFsbEBhcm0uY29tIHdyb3RlOg0KPiA+IEhpLA0KPiA+DQo+ ID4gSSBhbSBDQ2luZyBSSVNDLVYgZm9sa3MgdG8gc2VlIGlmIHRoZXJlIGFyZSBhbiBpbnRlcmVz dCB0byBzaGFyZSB0aGUgY29kZS4NCj4gPg0KPiA+IEBSSVNDLVY6IEkgbm90aWNlZCB5b3UgYXJl IGRpc2N1c3NpbmcgYWJvdXQgaW1wb3J0aW5nIGEgdmVyc2lvbiBvZiBBU0lEDQo+ID4gYWxsb2Nh dG9yIGluIFJJU0MtVi4gQXQgYSBmaXJzdCBsb29rLCB0aGUgY29kZSBsb29rcyBxdWl0ZSBzaW1p bGFyLiBXb3VsZCB0aGUNCj4gPiBsaWJyYXJ5IGJlbG93IGhlbHBzIHlvdT8NCj4gDQo+IFRoYW5r cyEgIEkgZGlkbid0IGxvb2sgdGhhdCBjbG9zZWx5IGF0IHRoZSBvcmlnaW5hbCBwYXRjaGVzIGJl Y2F1c2UgdGhlDQo+IGFyZ3VtZW50IGFnYWluc3QgdGhlbSB3YXMganVzdCAid2UgZG9uJ3QgaGF2 ZSBhbnkgd2F5IHRvIHRlc3QgdGhpcyIuDQo+IFVuZm9ydHVuYXRlbHksIHdlIGRvbid0IGhhdmUg dGhlIGNvbnN0cmFpbnQgdGhhdCB0aGVyZSBhcmUgbW9yZSBBU0lEcyB0aGFuDQo+IENQVXMNCj4g aW4gdGhlIHN5c3RlbS4gIEFzIGEgcmVzdWx0IEkgZG9uJ3QgdGhpbmsgd2UgY2FuIHVzZSB0aGlz IEFTSUQgYWxsb2NhdGlvbg0KPiBzdHJhdGVneS4NCj4gDQo+ID4NCj4gPiBDaGVlcnMsDQo+ID4N Cj4gPiBPbiAyMS8wMy8yMDE5IDE2OjM2LCBKdWxpZW4gR3JhbGwgd3JvdGU6DQo+ID4+IFdlIHdp bGwgd2FudCB0byByZS11c2UgdGhlIEFTSUQgYWxsb2NhdG9yIGluIGEgc2VwYXJhdGUgY29udGV4 dCAoZS5nDQo+ID4+IGFsbG9jYXRpbmcgVk1JRCkuIFNvIG1vdmUgdGhlIGNvZGUgaW4gYSBuZXcg ZmlsZS4NCj4gPj4NCj4gPj4gVGhlIGZ1bmN0aW9uIGFzaWRfY2hlY2tfY29udGV4dCBoYXMgYmVl biBtb3ZlZCBpbiB0aGUgaGVhZGVyIGFzIGEgc3RhdGljDQo+ID4+IGlubGluZSBmdW5jdGlvbiBi ZWNhdXNlIHdlIHdhbnQgdG8gYXZvaWQgYWRkIGEgYnJhbmNoIHdoZW4gY2hlY2tpbmcgaWYgdGhl DQo+ID4+IEFTSUQgaXMgc3RpbGwgdmFsaWQuDQo+ID4+DQo+ID4+IFNpZ25lZC1vZmYtYnk6IEp1 bGllbiBHcmFsbCA8anVsaWVuLmdyYWxsQGFybS5jb20+DQo+ID4+DQo+ID4+IC0tLQ0KPiA+Pg0K PiA+PiBUaGlzIGNvZGUgd2lsbCBiZSB1c2VkIGluIHRoZSB2aXJ0IGNvZGUgZm9yIGFsbG9jYXRp bmcgVk1JRC4gSSBhbSBub3QNCj4gPj4gZW50aXJlbHkgc3VyZSB3aGVyZSB0byBwbGFjZSBpdC4g TGliIGNvdWxkIHBvdGVudGlhbGx5IGJlIGEgZ29vZCBwbGFjZSBidXQgSQ0KPiA+PiBhbSBub3Qg ZW50aXJlbHkgY29udmluY2VkIHRoZSBhbGdvIGFzIGl0IGlzIGNvdWxkIGJlIHVzZWQgYnkgb3Ro ZXINCj4gPj4gYXJjaGl0ZWN0dXJlLg0KPiA+Pg0KPiA+PiBMb29raW5nIGF0IHg4NiwgaXQgc2Vl bXMgdGhhdCBpdCB3aWxsIG5vdCBiZSBwb3NzaWJsZSB0byByZS11c2UgYmVjYXVzZQ0KPiA+PiB0 aGUgbnVtYmVyIG9mIFBDSUQgKGFrYSBBU0lEKSBjb3VsZCBiZSBzbWFsbGVyIHRoYW4gdGhlIG51 bWJlciBvZiBDUFVzLg0KPiA+PiBTZWUgY29tbWl0IG1lc3NhZ2UgMTBhZjYyMzVlMGQzMjdkNDJl MWJhZDk3NDM4NTE5NzgxNzkyM2RjMQ0KPiAieDg2L21tOg0KPiA+PiBJbXBsZW1lbnQgUENJRCBi YXNlZCBvcHRpbWl6YXRpb246IHRyeSB0byBwcmVzZXJ2ZSBvbGQgVExCIGVudHJpZXMgdXNpbmcN Cj4gPj4gUENJIi4NCj4gPj4gLS0tDQo+ID4+ICAgYXJjaC9hcm02NC9pbmNsdWRlL2FzbS9hc2lk LmggfCAgNzcgKysrKysrKysrKysrKysNCj4gPj4gICBhcmNoL2FybTY0L2xpYi9NYWtlZmlsZSAg ICAgICB8ICAgMiArDQo+ID4+ICAgYXJjaC9hcm02NC9saWIvYXNpZC5jICAgICAgICAgfCAxODUg KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrDQo+ID4+ICAgYXJjaC9hcm02NC9tbS9j b250ZXh0LmMgICAgICAgfCAyMzUgKy0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tDQo+ID4+ICAgNCBmaWxlcyBjaGFuZ2VkLCAyNjcgaW5zZXJ0aW9ucygrKSwgMjMyIGRl bGV0aW9ucygtKQ0KPiA+PiAgIGNyZWF0ZSBtb2RlIDEwMDY0NCBhcmNoL2FybTY0L2luY2x1ZGUv YXNtL2FzaWQuaA0KPiA+PiAgIGNyZWF0ZSBtb2RlIDEwMDY0NCBhcmNoL2FybTY0L2xpYi9hc2lk LmMNCj4gPj4NCj4gPj4gZGlmZiAtLWdpdCBhL2FyY2gvYXJtNjQvaW5jbHVkZS9hc20vYXNpZC5o IGIvYXJjaC9hcm02NC9pbmNsdWRlL2FzbS9hc2lkLmgNCj4gPj4gbmV3IGZpbGUgbW9kZSAxMDA2 NDQNCj4gPj4gaW5kZXggMDAwMDAwMDAwMDAwLi5iYjYyYjU4N2YzN2YNCj4gPj4gLS0tIC9kZXYv bnVsbA0KPiA+PiArKysgYi9hcmNoL2FybTY0L2luY2x1ZGUvYXNtL2FzaWQuaA0KPiA+PiBAQCAt 
MCwwICsxLDc3IEBADQo+ID4+ICsvKiBTUERYLUxpY2Vuc2UtSWRlbnRpZmllcjogR1BMLTIuMCAq Lw0KPiA+PiArI2lmbmRlZiBfX0FTTV9BU01fQVNJRF9IDQo+ID4+ICsjZGVmaW5lIF9fQVNNX0FT TV9BU0lEX0gNCj4gPj4gKw0KPiA+PiArI2luY2x1ZGUgPGxpbnV4L2F0b21pYy5oPg0KPiA+PiAr I2luY2x1ZGUgPGxpbnV4L2NvbXBpbGVyLmg+DQo+ID4+ICsjaW5jbHVkZSA8bGludXgvY3B1bWFz ay5oPg0KPiA+PiArI2luY2x1ZGUgPGxpbnV4L3BlcmNwdS5oPg0KPiA+PiArI2luY2x1ZGUgPGxp bnV4L3NwaW5sb2NrLmg+DQo+ID4+ICsNCj4gPj4gK3N0cnVjdCBhc2lkX2luZm8NCj4gPj4gK3sN Cj4gPj4gKwlhdG9taWM2NF90CWdlbmVyYXRpb247DQo+ID4+ICsJdW5zaWduZWQgbG9uZwkqbWFw Ow0KPiA+PiArCWF0b21pYzY0X3QgX19wZXJjcHUJKmFjdGl2ZTsNCj4gPj4gKwl1NjQgX19wZXJj cHUJCSpyZXNlcnZlZDsNCj4gPj4gKwl1MzIJCQliaXRzOw0KPiA+PiArCS8qIExvY2sgcHJvdGVj dGluZyB0aGUgc3RydWN0dXJlICovDQo+ID4+ICsJcmF3X3NwaW5sb2NrX3QJCWxvY2s7DQo+ID4+ ICsJLyogV2hpY2ggQ1BVIHJlcXVpcmVzIGNvbnRleHQgZmx1c2ggb24gbmV4dCBjYWxsICovDQo+ ID4+ICsJY3B1bWFza190CQlmbHVzaF9wZW5kaW5nOw0KPiA+PiArCS8qIE51bWJlciBvZiBBU0lE IGFsbG9jYXRlZCBieSBjb250ZXh0IChzaGlmdCB2YWx1ZSkgKi8NCj4gPj4gKwl1bnNpZ25lZCBp bnQJCWN0eHRfc2hpZnQ7DQo+ID4+ICsJLyogQ2FsbGJhY2sgdG8gbG9jYWxseSBmbHVzaCB0aGUg Y29udGV4dC4gKi8NCj4gPj4gKwl2b2lkCQkJKCpmbHVzaF9jcHVfY3R4dF9jYikodm9pZCk7DQo+ ID4+ICt9Ow0KPiA+PiArDQo+ID4+ICsjZGVmaW5lIE5VTV9BU0lEUyhpbmZvKQkJCSgxVUwgPDwg KChpbmZvKS0+Yml0cykpDQo+ID4+ICsjZGVmaW5lIE5VTV9DVFhUX0FTSURTKGluZm8pCQkoTlVN X0FTSURTKGluZm8pID4+IChpbmZvKS0NCj4gPmN0eHRfc2hpZnQpDQo+ID4+ICsNCj4gPj4gKyNk ZWZpbmUgYWN0aXZlX2FzaWQoaW5mbywgY3B1KQkqcGVyX2NwdV9wdHIoKGluZm8pLT5hY3RpdmUs IGNwdSkNCj4gPj4gKw0KPiA+PiArdm9pZCBhc2lkX25ld19jb250ZXh0KHN0cnVjdCBhc2lkX2lu Zm8gKmluZm8sIGF0b21pYzY0X3QgKnBhc2lkLA0KPiA+PiArCQkgICAgICB1bnNpZ25lZCBpbnQg Y3B1KTsNCj4gPj4gKw0KPiA+PiArLyoNCj4gPj4gKyAqIENoZWNrIHRoZSBBU0lEIGlzIHN0aWxs IHZhbGlkIGZvciB0aGUgY29udGV4dC4gSWYgbm90IGdlbmVyYXRlIGEgbmV3IEFTSUQuDQo+ID4+ ICsgKg0KPiA+PiArICogQHBhc2lkOiBQb2ludGVyIHRvIHRoZSBjdXJyZW50IEFTSUQgYmF0Y2gN Cj4gPj4gKyAqIEBjcHU6IGN1cnJlbnQgQ1BVIElELiBNdXN0IGhhdmUgYmVlbiBhY3F1aXJlZCB0 aHJvdWdodCBnZXRfY3B1KCkNCj4gPj4gKyAqLw0KPiA+PiArc3RhdGljIGlubGluZSB2b2lkIGFz aWRfY2hlY2tfY29udGV4dChzdHJ1Y3QgYXNpZF9pbmZvICppbmZvLA0KPiA+PiArCQkJCSAgICAg IGF0b21pYzY0X3QgKnBhc2lkLCB1bnNpZ25lZCBpbnQgY3B1KQ0KPiA+PiArew0KPiA+PiArCXU2 NCBhc2lkLCBvbGRfYWN0aXZlX2FzaWQ7DQo+ID4+ICsNCj4gPj4gKwlhc2lkID0gYXRvbWljNjRf cmVhZChwYXNpZCk7DQo+ID4+ICsNCj4gPj4gKwkvKg0KPiA+PiArCSAqIFRoZSBtZW1vcnkgb3Jk ZXJpbmcgaGVyZSBpcyBzdWJ0bGUuDQo+ID4+ICsJICogSWYgb3VyIGFjdGl2ZV9hc2lkIGlzIG5v bi16ZXJvIGFuZCB0aGUgQVNJRCBtYXRjaGVzIHRoZSBjdXJyZW50DQo+ID4+ICsJICogZ2VuZXJh dGlvbiwgdGhlbiB3ZSB1cGRhdGUgdGhlIGFjdGl2ZV9hc2lkIGVudHJ5IHdpdGggYSByZWxheGVk DQo+ID4+ICsJICogY21weGNoZy4gUmFjaW5nIHdpdGggYSBjb25jdXJyZW50IHJvbGxvdmVyIG1l YW5zIHRoYXQgZWl0aGVyOg0KPiA+PiArCSAqDQo+ID4+ICsJICogLSBXZSBnZXQgYSB6ZXJvIGJh Y2sgZnJvbSB0aGUgY21weGNoZyBhbmQgZW5kIHVwIHdhaXRpbmcgb24gdGhlDQo+ID4+ICsJICog ICBsb2NrLiBUYWtpbmcgdGhlIGxvY2sgc3luY2hyb25pc2VzIHdpdGggdGhlIHJvbGxvdmVyIGFu ZCBzbw0KPiA+PiArCSAqICAgd2UgYXJlIGZvcmNlZCB0byBzZWUgdGhlIHVwZGF0ZWQgZ2VuZXJh dGlvbi4NCj4gPj4gKwkgKg0KPiA+PiArCSAqIC0gV2UgZ2V0IGEgdmFsaWQgQVNJRCBiYWNrIGZy b20gdGhlIGNtcHhjaGcsIHdoaWNoIG1lYW5zIHRoZQ0KPiA+PiArCSAqICAgcmVsYXhlZCB4Y2hn IGluIGZsdXNoX2NvbnRleHQgd2lsbCB0cmVhdCB1cyBhcyByZXNlcnZlZA0KPiA+PiArCSAqICAg YmVjYXVzZSBhdG9taWMgUm1XcyBhcmUgdG90YWxseSBvcmRlcmVkIGZvciBhIGdpdmVuIGxvY2F0 aW9uLg0KPiA+PiArCSAqLw0KPiA+PiArCW9sZF9hY3RpdmVfYXNpZCA9IGF0b21pYzY0X3JlYWQo JmFjdGl2ZV9hc2lkKGluZm8sIGNwdSkpOw0KPiA+PiArCWlmIChvbGRfYWN0aXZlX2FzaWQgJiYN Cj4gPj4gKwkgICAgISgoYXNpZCBeIGF0b21pYzY0X3JlYWQoJmluZm8tPmdlbmVyYXRpb24pKSA+ PiBpbmZvLT5iaXRzKSAmJg0KPiA+PiArCSAgICBhdG9taWM2NF9jbXB4Y2hnX3JlbGF4ZWQoJmFj 
dGl2ZV9hc2lkKGluZm8sIGNwdSksDQo+ID4+ICsJCQkJICAgICBvbGRfYWN0aXZlX2FzaWQsIGFz aWQpKQ0KPiA+PiArCQlyZXR1cm47DQo+ID4+ICsNCj4gPj4gKwlhc2lkX25ld19jb250ZXh0KGlu Zm8sIHBhc2lkLCBjcHUpOw0KPiA+PiArfQ0KPiA+PiArDQo+ID4+ICtpbnQgYXNpZF9hbGxvY2F0 b3JfaW5pdChzdHJ1Y3QgYXNpZF9pbmZvICppbmZvLA0KPiA+PiArCQkJdTMyIGJpdHMsIHVuc2ln bmVkIGludCBhc2lkX3Blcl9jdHh0LA0KPiA+PiArCQkJdm9pZCAoKmZsdXNoX2NwdV9jdHh0X2Ni KSh2b2lkKSk7DQo+ID4+ICsNCj4gPj4gKyNlbmRpZg0KPiA+PiBkaWZmIC0tZ2l0IGEvYXJjaC9h cm02NC9saWIvTWFrZWZpbGUgYi9hcmNoL2FybTY0L2xpYi9NYWtlZmlsZQ0KPiA+PiBpbmRleCA1 NTQwYTE2MzhiYWYuLjcyMGRmNWVlMmFhMiAxMDA2NDQNCj4gPj4gLS0tIGEvYXJjaC9hcm02NC9s aWIvTWFrZWZpbGUNCj4gPj4gKysrIGIvYXJjaC9hcm02NC9saWIvTWFrZWZpbGUNCj4gPj4gQEAg LTUsNiArNSw4IEBAIGxpYi15CQk6PSBjbGVhcl91c2VyLm8gZGVsYXkubw0KPiBjb3B5X2Zyb21f dXNlci5vCQlcDQo+ID4+ICAgCQkgICBtZW1jbXAubyBzdHJjbXAubyBzdHJuY21wLm8gc3RybGVu Lm8gc3Rybmxlbi5vCVwNCj4gPj4gICAJCSAgIHN0cmNoci5vIHN0cnJjaHIubyB0aXNoaWZ0Lm8N Cj4gPj4NCj4gPj4gK2xpYi15CQkrPSBhc2lkLm8NCj4gPj4gKw0KPiA+PiAgIGlmZXEgKCQoQ09O RklHX0tFUk5FTF9NT0RFX05FT04pLCB5KQ0KPiA+PiAgIG9iai0kKENPTkZJR19YT1JfQkxPQ0tT KQkrPSB4b3ItbmVvbi5vDQo+ID4+ICAgQ0ZMQUdTX1JFTU9WRV94b3ItbmVvbi5vCSs9IC1tZ2Vu ZXJhbC1yZWdzLW9ubHkNCj4gPj4gZGlmZiAtLWdpdCBhL2FyY2gvYXJtNjQvbGliL2FzaWQuYyBi L2FyY2gvYXJtNjQvbGliL2FzaWQuYw0KPiA+PiBuZXcgZmlsZSBtb2RlIDEwMDY0NA0KPiA+PiBp bmRleCAwMDAwMDAwMDAwMDAuLjcyYjcxYmZiMzJiZQ0KPiA+PiAtLS0gL2Rldi9udWxsDQo+ID4+ ICsrKyBiL2FyY2gvYXJtNjQvbGliL2FzaWQuYw0KPiA+PiBAQCAtMCwwICsxLDE4NSBAQA0KPiA+ PiArLy8gU1BEWC1MaWNlbnNlLUlkZW50aWZpZXI6IEdQTC0yLjANCj4gPj4gKy8qDQo+ID4+ICsg KiBHZW5lcmljIEFTSUQgYWxsb2NhdG9yLg0KPiA+PiArICoNCj4gPj4gKyAqIEJhc2VkIG9uIGFy Y2gvYXJtL21tL2NvbnRleHQuYw0KPiA+PiArICoNCj4gPj4gKyAqIENvcHlyaWdodCAoQykgMjAw Mi0yMDAzIERlZXAgQmx1ZSBTb2x1dGlvbnMgTHRkLCBhbGwgcmlnaHRzIHJlc2VydmVkLg0KPiA+ PiArICogQ29weXJpZ2h0IChDKSAyMDEyIEFSTSBMdGQuDQo+ID4+ICsgKi8NCj4gPj4gKw0KPiA+ PiArI2luY2x1ZGUgPGxpbnV4L3NsYWIuaD4NCj4gPj4gKw0KPiA+PiArI2luY2x1ZGUgPGFzbS9h c2lkLmg+DQo+ID4+ICsNCj4gPj4gKyNkZWZpbmUgcmVzZXJ2ZWRfYXNpZChpbmZvLCBjcHUpICpw ZXJfY3B1X3B0cigoaW5mbyktPnJlc2VydmVkLCBjcHUpDQo+ID4+ICsNCj4gPj4gKyNkZWZpbmUg QVNJRF9NQVNLKGluZm8pCQkJKH5HRU5NQVNLKChpbmZvKS0+Yml0cyAtIDEsIDApKQ0KPiA+PiAr I2RlZmluZSBBU0lEX0ZJUlNUX1ZFUlNJT04oaW5mbykJKDFVTCA8PCAoKGluZm8pLT5iaXRzKSkN Cj4gPj4gKw0KPiA+PiArI2RlZmluZSBhc2lkMmlkeChpbmZvLCBhc2lkKQkJKCgoYXNpZCkgJiB+ QVNJRF9NQVNLKGluZm8pKSA+PiAoaW5mbyktDQo+ID5jdHh0X3NoaWZ0KQ0KPiA+PiArI2RlZmlu ZSBpZHgyYXNpZChpbmZvLCBpZHgpCQkoKChpZHgpIDw8IChpbmZvKS0+Y3R4dF9zaGlmdCkgJg0K PiB+QVNJRF9NQVNLKGluZm8pKQ0KPiA+PiArDQo+ID4+ICtzdGF0aWMgdm9pZCBmbHVzaF9jb250 ZXh0KHN0cnVjdCBhc2lkX2luZm8gKmluZm8pDQo+ID4+ICt7DQo+ID4+ICsJaW50IGk7DQo+ID4+ ICsJdTY0IGFzaWQ7DQo+ID4+ICsNCj4gPj4gKwkvKiBVcGRhdGUgdGhlIGxpc3Qgb2YgcmVzZXJ2 ZWQgQVNJRHMgYW5kIHRoZSBBU0lEIGJpdG1hcC4gKi8NCj4gPj4gKwliaXRtYXBfY2xlYXIoaW5m by0+bWFwLCAwLCBOVU1fQ1RYVF9BU0lEUyhpbmZvKSk7DQo+ID4+ICsNCj4gPj4gKwlmb3JfZWFj aF9wb3NzaWJsZV9jcHUoaSkgew0KPiA+PiArCQlhc2lkID0gYXRvbWljNjRfeGNoZ19yZWxheGVk KCZhY3RpdmVfYXNpZChpbmZvLCBpKSwgMCk7DQo+ID4+ICsJCS8qDQo+ID4+ICsJCSAqIElmIHRo aXMgQ1BVIGhhcyBhbHJlYWR5IGJlZW4gdGhyb3VnaCBhDQo+ID4+ICsJCSAqIHJvbGxvdmVyLCBi dXQgaGFzbid0IHJ1biBhbm90aGVyIHRhc2sgaW4NCj4gPj4gKwkJICogdGhlIG1lYW50aW1lLCB3 ZSBtdXN0IHByZXNlcnZlIGl0cyByZXNlcnZlZA0KPiA+PiArCQkgKiBBU0lELCBhcyB0aGlzIGlz IHRoZSBvbmx5IHRyYWNlIHdlIGhhdmUgb2YNCj4gPj4gKwkJICogdGhlIHByb2Nlc3MgaXQgaXMg c3RpbGwgcnVubmluZy4NCj4gPj4gKwkJICovDQo+ID4+ICsJCWlmIChhc2lkID09IDApDQo+ID4+ ICsJCQlhc2lkID0gcmVzZXJ2ZWRfYXNpZChpbmZvLCBpKTsNCj4gPj4gKwkJX19zZXRfYml0KGFz aWQyaWR4KGluZm8sIGFzaWQpLCBpbmZvLT5tYXApOw0KPiA+PiArCQlyZXNlcnZlZF9hc2lkKGlu 
Zm8sIGkpID0gYXNpZDsNCj4gPj4gKwl9DQo+ID4+ICsNCj4gPj4gKwkvKg0KPiA+PiArCSAqIFF1 ZXVlIGEgVExCIGludmFsaWRhdGlvbiBmb3IgZWFjaCBDUFUgdG8gcGVyZm9ybSBvbiBuZXh0DQo+ ID4+ICsJICogY29udGV4dC1zd2l0Y2gNCj4gPj4gKwkgKi8NCj4gPj4gKwljcHVtYXNrX3NldGFs bCgmaW5mby0+Zmx1c2hfcGVuZGluZyk7DQo+ID4+ICt9DQo+ID4+ICsNCj4gPj4gK3N0YXRpYyBi b29sIGNoZWNrX3VwZGF0ZV9yZXNlcnZlZF9hc2lkKHN0cnVjdCBhc2lkX2luZm8gKmluZm8sIHU2 NCBhc2lkLA0KPiA+PiArCQkJCSAgICAgICB1NjQgbmV3YXNpZCkNCj4gPj4gK3sNCj4gPj4gKwlp bnQgY3B1Ow0KPiA+PiArCWJvb2wgaGl0ID0gZmFsc2U7DQo+ID4+ICsNCj4gPj4gKwkvKg0KPiA+ PiArCSAqIEl0ZXJhdGUgb3ZlciB0aGUgc2V0IG9mIHJlc2VydmVkIEFTSURzIGxvb2tpbmcgZm9y IGEgbWF0Y2guDQo+ID4+ICsJICogSWYgd2UgZmluZCBvbmUsIHRoZW4gd2UgY2FuIHVwZGF0ZSBv dXIgbW0gdG8gdXNlIG5ld2FzaWQNCj4gPj4gKwkgKiAoaS5lLiB0aGUgc2FtZSBBU0lEIGluIHRo ZSBjdXJyZW50IGdlbmVyYXRpb24pIGJ1dCB3ZSBjYW4ndA0KPiA+PiArCSAqIGV4aXQgdGhlIGxv b3AgZWFybHksIHNpbmNlIHdlIG5lZWQgdG8gZW5zdXJlIHRoYXQgYWxsIGNvcGllcw0KPiA+PiAr CSAqIG9mIHRoZSBvbGQgQVNJRCBhcmUgdXBkYXRlZCB0byByZWZsZWN0IHRoZSBtbS4gRmFpbHVy ZSB0byBkbw0KPiA+PiArCSAqIHNvIGNvdWxkIHJlc3VsdCBpbiB1cyBtaXNzaW5nIHRoZSByZXNl cnZlZCBBU0lEIGluIGEgZnV0dXJlDQo+ID4+ICsJICogZ2VuZXJhdGlvbi4NCj4gPj4gKwkgKi8N Cj4gPj4gKwlmb3JfZWFjaF9wb3NzaWJsZV9jcHUoY3B1KSB7DQo+ID4+ICsJCWlmIChyZXNlcnZl ZF9hc2lkKGluZm8sIGNwdSkgPT0gYXNpZCkgew0KPiA+PiArCQkJaGl0ID0gdHJ1ZTsNCj4gPj4g KwkJCXJlc2VydmVkX2FzaWQoaW5mbywgY3B1KSA9IG5ld2FzaWQ7DQo+ID4+ICsJCX0NCj4gPj4g Kwl9DQo+ID4+ICsNCj4gPj4gKwlyZXR1cm4gaGl0Ow0KPiA+PiArfQ0KPiA+PiArDQo+ID4+ICtz dGF0aWMgdTY0IG5ld19jb250ZXh0KHN0cnVjdCBhc2lkX2luZm8gKmluZm8sIGF0b21pYzY0X3Qg KnBhc2lkKQ0KPiA+PiArew0KPiA+PiArCXN0YXRpYyB1MzIgY3VyX2lkeCA9IDE7DQo+ID4+ICsJ dTY0IGFzaWQgPSBhdG9taWM2NF9yZWFkKHBhc2lkKTsNCj4gPj4gKwl1NjQgZ2VuZXJhdGlvbiA9 IGF0b21pYzY0X3JlYWQoJmluZm8tPmdlbmVyYXRpb24pOw0KPiA+PiArDQo+ID4+ICsJaWYgKGFz aWQgIT0gMCkgew0KPiA+PiArCQl1NjQgbmV3YXNpZCA9IGdlbmVyYXRpb24gfCAoYXNpZCAmIH5B U0lEX01BU0soaW5mbykpOw0KPiA+PiArDQo+ID4+ICsJCS8qDQo+ID4+ICsJCSAqIElmIG91ciBj dXJyZW50IEFTSUQgd2FzIGFjdGl2ZSBkdXJpbmcgYSByb2xsb3Zlciwgd2UNCj4gPj4gKwkJICog Y2FuIGNvbnRpbnVlIHRvIHVzZSBpdCBhbmQgdGhpcyB3YXMganVzdCBhIGZhbHNlIGFsYXJtLg0K PiA+PiArCQkgKi8NCj4gPj4gKwkJaWYgKGNoZWNrX3VwZGF0ZV9yZXNlcnZlZF9hc2lkKGluZm8s IGFzaWQsIG5ld2FzaWQpKQ0KPiA+PiArCQkJcmV0dXJuIG5ld2FzaWQ7DQo+ID4+ICsNCj4gPj4g KwkJLyoNCj4gPj4gKwkJICogV2UgaGFkIGEgdmFsaWQgQVNJRCBpbiBhIHByZXZpb3VzIGxpZmUs IHNvIHRyeSB0byByZS11c2UNCj4gPj4gKwkJICogaXQgaWYgcG9zc2libGUuDQo+ID4+ICsJCSAq Lw0KPiA+PiArCQlpZiAoIV9fdGVzdF9hbmRfc2V0X2JpdChhc2lkMmlkeChpbmZvLCBhc2lkKSwg aW5mby0+bWFwKSkNCj4gPj4gKwkJCXJldHVybiBuZXdhc2lkOw0KPiA+PiArCX0NCj4gPj4gKw0K PiA+PiArCS8qDQo+ID4+ICsJICogQWxsb2NhdGUgYSBmcmVlIEFTSUQuIElmIHdlIGNhbid0IGZp bmQgb25lLCB0YWtlIGEgbm90ZSBvZiB0aGUNCj4gPj4gKwkgKiBjdXJyZW50bHkgYWN0aXZlIEFT SURzIGFuZCBtYXJrIHRoZSBUTEJzIGFzIHJlcXVpcmluZyBmbHVzaGVzLiAgV2UNCj4gPj4gKwkg KiBhbHdheXMgY291bnQgZnJvbSBBU0lEICMyIChpbmRleCAxKSwgYXMgd2UgdXNlIEFTSUQgIzAg d2hlbiBzZXR0aW5nDQo+ID4+ICsJICogYSByZXNlcnZlZCBUVEJSMCBmb3IgdGhlIGluaXRfbW0g YW5kIHdlIGFsbG9jYXRlIEFTSURzIGluIGV2ZW4vb2RkDQo+ID4+ICsJICogcGFpcnMuDQo+ID4+ ICsJICovDQo+ID4+ICsJYXNpZCA9IGZpbmRfbmV4dF96ZXJvX2JpdChpbmZvLT5tYXAsIE5VTV9D VFhUX0FTSURTKGluZm8pLCBjdXJfaWR4KTsNCj4gPj4gKwlpZiAoYXNpZCAhPSBOVU1fQ1RYVF9B U0lEUyhpbmZvKSkNCj4gPj4gKwkJZ290byBzZXRfYXNpZDsNCj4gPj4gKw0KPiA+PiArCS8qIFdl J3JlIG91dCBvZiBBU0lEcywgc28gaW5jcmVtZW50IHRoZSBnbG9iYWwgZ2VuZXJhdGlvbiBjb3Vu dCAqLw0KPiA+PiArCWdlbmVyYXRpb24gPSBhdG9taWM2NF9hZGRfcmV0dXJuX3JlbGF4ZWQoQVNJ RF9GSVJTVF9WRVJTSU9OKGluZm8pLA0KPiA+PiArCQkJCQkJICZpbmZvLT5nZW5lcmF0aW9uKTsN Cj4gPj4gKwlmbHVzaF9jb250ZXh0KGluZm8pOw0KPiA+PiArDQo+ID4+ICsJLyogV2UgaGF2ZSBt 
b3JlIEFTSURzIHRoYW4gQ1BVcywgc28gdGhpcyB3aWxsIGFsd2F5cyBzdWNjZWVkICovDQo+ID4+ ICsJYXNpZCA9IGZpbmRfbmV4dF96ZXJvX2JpdChpbmZvLT5tYXAsIE5VTV9DVFhUX0FTSURTKGlu Zm8pLCAxKTsNCj4gPj4gKw0KPiA+PiArc2V0X2FzaWQ6DQo+ID4+ICsJX19zZXRfYml0KGFzaWQs IGluZm8tPm1hcCk7DQo+ID4+ICsJY3VyX2lkeCA9IGFzaWQ7DQo+ID4+ICsJcmV0dXJuIGlkeDJh c2lkKGluZm8sIGFzaWQpIHwgZ2VuZXJhdGlvbjsNCj4gPj4gK30NCj4gPj4gKw0KPiA+PiArLyoN Cj4gPj4gKyAqIEdlbmVyYXRlIGEgbmV3IEFTSUQgZm9yIHRoZSBjb250ZXh0Lg0KPiA+PiArICoN Cj4gPj4gKyAqIEBwYXNpZDogUG9pbnRlciB0byB0aGUgY3VycmVudCBBU0lEIGJhdGNoIGFsbG9j YXRlZC4gSXQgd2lsbCBiZSB1cGRhdGVkDQo+ID4+ICsgKiB3aXRoIHRoZSBuZXcgQVNJRCBiYXRj aC4NCj4gPj4gKyAqIEBjcHU6IGN1cnJlbnQgQ1BVIElELiBNdXN0IGhhdmUgYmVlbiBhY3F1aXJl ZCB0aHJvdWdoIGdldF9jcHUoKQ0KPiA+PiArICovDQo+ID4+ICt2b2lkIGFzaWRfbmV3X2NvbnRl eHQoc3RydWN0IGFzaWRfaW5mbyAqaW5mbywgYXRvbWljNjRfdCAqcGFzaWQsDQo+ID4+ICsJCSAg ICAgIHVuc2lnbmVkIGludCBjcHUpDQo+ID4+ICt7DQo+ID4+ICsJdW5zaWduZWQgbG9uZyBmbGFn czsNCj4gPj4gKwl1NjQgYXNpZDsNCj4gPj4gKw0KPiA+PiArCXJhd19zcGluX2xvY2tfaXJxc2F2 ZSgmaW5mby0+bG9jaywgZmxhZ3MpOw0KPiA+PiArCS8qIENoZWNrIHRoYXQgb3VyIEFTSUQgYmVs b25ncyB0byB0aGUgY3VycmVudCBnZW5lcmF0aW9uLiAqLw0KPiA+PiArCWFzaWQgPSBhdG9taWM2 NF9yZWFkKHBhc2lkKTsNCj4gPj4gKwlpZiAoKGFzaWQgXiBhdG9taWM2NF9yZWFkKCZpbmZvLT5n ZW5lcmF0aW9uKSkgPj4gaW5mby0+Yml0cykgew0KPiA+PiArCQlhc2lkID0gbmV3X2NvbnRleHQo aW5mbywgcGFzaWQpOw0KPiA+PiArCQlhdG9taWM2NF9zZXQocGFzaWQsIGFzaWQpOw0KPiA+PiAr CX0NCj4gPj4gKw0KPiA+PiArCWlmIChjcHVtYXNrX3Rlc3RfYW5kX2NsZWFyX2NwdShjcHUsICZp bmZvLT5mbHVzaF9wZW5kaW5nKSkNCj4gPj4gKwkJaW5mby0+Zmx1c2hfY3B1X2N0eHRfY2IoKTsN Cj4gPj4gKw0KPiA+PiArCWF0b21pYzY0X3NldCgmYWN0aXZlX2FzaWQoaW5mbywgY3B1KSwgYXNp ZCk7DQo+ID4+ICsJcmF3X3NwaW5fdW5sb2NrX2lycXJlc3RvcmUoJmluZm8tPmxvY2ssIGZsYWdz KTsNCj4gPj4gK30NCj4gPj4gKw0KPiA+PiArLyoNCj4gPj4gKyAqIEluaXRpYWxpemUgdGhlIEFT SUQgYWxsb2NhdG9yDQo+ID4+ICsgKg0KPiA+PiArICogQGluZm86IFBvaW50ZXIgdG8gdGhlIGFz aWQgYWxsb2NhdG9yIHN0cnVjdHVyZQ0KPiA+PiArICogQGJpdHM6IE51bWJlciBvZiBBU0lEcyBh dmFpbGFibGUNCj4gPj4gKyAqIEBhc2lkX3Blcl9jdHh0OiBOdW1iZXIgb2YgQVNJRHMgdG8gYWxs b2NhdGUgcGVyLWNvbnRleHQuIEFTSURzIGFyZQ0KPiA+PiArICogYWxsb2NhdGVkIGNvbnRpZ3Vv dXNseSBmb3IgYSBnaXZlbiBjb250ZXh0LiBUaGlzIHZhbHVlIHNob3VsZCBiZSBhIHBvd2VyDQo+ IG9mDQo+ID4+ICsgKiAyLg0KPiA+PiArICovDQo+ID4+ICtpbnQgYXNpZF9hbGxvY2F0b3JfaW5p dChzdHJ1Y3QgYXNpZF9pbmZvICppbmZvLA0KPiA+PiArCQkJdTMyIGJpdHMsIHVuc2lnbmVkIGlu dCBhc2lkX3Blcl9jdHh0LA0KPiA+PiArCQkJdm9pZCAoKmZsdXNoX2NwdV9jdHh0X2NiKSh2b2lk KSkNCj4gPj4gK3sNCj4gPj4gKwlpbmZvLT5iaXRzID0gYml0czsNCj4gPj4gKwlpbmZvLT5jdHh0 X3NoaWZ0ID0gaWxvZzIoYXNpZF9wZXJfY3R4dCk7DQo+ID4+ICsJaW5mby0+Zmx1c2hfY3B1X2N0 eHRfY2IgPSBmbHVzaF9jcHVfY3R4dF9jYjsNCj4gPj4gKwkvKg0KPiA+PiArCSAqIEV4cGVjdCBh bGxvY2F0aW9uIGFmdGVyIHJvbGxvdmVyIHRvIGZhaWwgaWYgd2UgZG9uJ3QgaGF2ZSBhdCBsZWFz dA0KPiA+PiArCSAqIG9uZSBtb3JlIEFTSUQgdGhhbiBDUFVzLiBBU0lEICMwIGlzIGFsd2F5cyBy ZXNlcnZlZC4NCj4gPj4gKwkgKi8NCj4gPj4gKwlXQVJOX09OKE5VTV9DVFhUX0FTSURTKGluZm8p IC0gMSA8PSBudW1fcG9zc2libGVfY3B1cygpKTsNCj4gPj4gKwlhdG9taWM2NF9zZXQoJmluZm8t PmdlbmVyYXRpb24sIEFTSURfRklSU1RfVkVSU0lPTihpbmZvKSk7DQo+ID4+ICsJaW5mby0+bWFw ID0ga2NhbGxvYyhCSVRTX1RPX0xPTkdTKE5VTV9DVFhUX0FTSURTKGluZm8pKSwNCj4gPj4gKwkJ CSAgICBzaXplb2YoKmluZm8tPm1hcCksIEdGUF9LRVJORUwpOw0KPiA+PiArCWlmICghaW5mby0+ bWFwKQ0KPiA+PiArCQlyZXR1cm4gLUVOT01FTTsNCj4gPj4gKw0KPiA+PiArCXJhd19zcGluX2xv Y2tfaW5pdCgmaW5mby0+bG9jayk7DQo+ID4+ICsNCj4gPj4gKwlyZXR1cm4gMDsNCj4gPj4gK30N Cj4gPj4gZGlmZiAtLWdpdCBhL2FyY2gvYXJtNjQvbW0vY29udGV4dC5jIGIvYXJjaC9hcm02NC9t bS9jb250ZXh0LmMNCj4gPj4gaW5kZXggNjc4YTU3Yjc3YzkxLi45NWVlNzcxMWEyZWYgMTAwNjQ0 DQo+ID4+IC0tLSBhL2FyY2gvYXJtNjQvbW0vY29udGV4dC5jDQo+ID4+ICsrKyBiL2FyY2gvYXJt 
NjQvbW0vY29udGV4dC5jDQo+ID4+IEBAIC0yMiw0NyArMjIsMjIgQEANCj4gPj4gICAjaW5jbHVk ZSA8bGludXgvc2xhYi5oPg0KPiA+PiAgICNpbmNsdWRlIDxsaW51eC9tbS5oPg0KPiA+Pg0KPiA+ PiArI2luY2x1ZGUgPGFzbS9hc2lkLmg+DQo+ID4+ICAgI2luY2x1ZGUgPGFzbS9jcHVmZWF0dXJl Lmg+DQo+ID4+ICAgI2luY2x1ZGUgPGFzbS9tbXVfY29udGV4dC5oPg0KPiA+PiAgICNpbmNsdWRl IDxhc20vc21wLmg+DQo+ID4+ICAgI2luY2x1ZGUgPGFzbS90bGJmbHVzaC5oPg0KPiA+Pg0KPiA+ PiAtc3RydWN0IGFzaWRfaW5mbw0KPiA+PiAtew0KPiA+PiAtCWF0b21pYzY0X3QJZ2VuZXJhdGlv bjsNCj4gPj4gLQl1bnNpZ25lZCBsb25nCSptYXA7DQo+ID4+IC0JYXRvbWljNjRfdCBfX3BlcmNw dQkqYWN0aXZlOw0KPiA+PiAtCXU2NCBfX3BlcmNwdQkJKnJlc2VydmVkOw0KPiA+PiAtCXUzMgkJ CWJpdHM7DQo+ID4+IC0JcmF3X3NwaW5sb2NrX3QJCWxvY2s7DQo+ID4+IC0JLyogV2hpY2ggQ1BV IHJlcXVpcmVzIGNvbnRleHQgZmx1c2ggb24gbmV4dCBjYWxsICovDQo+ID4+IC0JY3B1bWFza190 CQlmbHVzaF9wZW5kaW5nOw0KPiA+PiAtCS8qIE51bWJlciBvZiBBU0lEIGFsbG9jYXRlZCBieSBj b250ZXh0IChzaGlmdCB2YWx1ZSkgKi8NCj4gPj4gLQl1bnNpZ25lZCBpbnQJCWN0eHRfc2hpZnQ7 DQo+ID4+IC0JLyogQ2FsbGJhY2sgdG8gbG9jYWxseSBmbHVzaCB0aGUgY29udGV4dC4gKi8NCj4g Pj4gLQl2b2lkCQkJKCpmbHVzaF9jcHVfY3R4dF9jYikodm9pZCk7DQo+ID4+IC19IGFzaWRfaW5m bzsNCj4gPj4gLQ0KPiA+PiAtI2RlZmluZSBhY3RpdmVfYXNpZChpbmZvLCBjcHUpCSpwZXJfY3B1 X3B0cigoaW5mbyktPmFjdGl2ZSwgY3B1KQ0KPiA+PiAtI2RlZmluZSByZXNlcnZlZF9hc2lkKGlu Zm8sIGNwdSkgKnBlcl9jcHVfcHRyKChpbmZvKS0+cmVzZXJ2ZWQsIGNwdSkNCj4gPj4gLQ0KPiA+ PiAgIHN0YXRpYyBERUZJTkVfUEVSX0NQVShhdG9taWM2NF90LCBhY3RpdmVfYXNpZHMpOw0KPiA+ PiAgIHN0YXRpYyBERUZJTkVfUEVSX0NQVSh1NjQsIHJlc2VydmVkX2FzaWRzKTsNCj4gPj4NCj4g Pj4gLSNkZWZpbmUgQVNJRF9NQVNLKGluZm8pCQkJKH5HRU5NQVNLKChpbmZvKS0+Yml0cyAtIDEs IDApKQ0KPiA+PiAtI2RlZmluZSBOVU1fQVNJRFMoaW5mbykJCQkoMVVMIDw8ICgoaW5mbyktPmJp dHMpKQ0KPiA+PiAtDQo+ID4+IC0jZGVmaW5lIEFTSURfRklSU1RfVkVSU0lPTihpbmZvKQlOVU1f QVNJRFMoaW5mbykNCj4gPj4gLQ0KPiA+PiAgICNpZmRlZiBDT05GSUdfVU5NQVBfS0VSTkVMX0FU X0VMMA0KPiA+PiAgICNkZWZpbmUgQVNJRF9QRVJfQ09OVEVYVAkJMg0KPiA+PiAgICNlbHNlDQo+ ID4+ICAgI2RlZmluZSBBU0lEX1BFUl9DT05URVhUCQkxDQo+ID4+ICAgI2VuZGlmDQo+ID4+DQo+ ID4+IC0jZGVmaW5lIE5VTV9DVFhUX0FTSURTKGluZm8pCQkoTlVNX0FTSURTKGluZm8pID4+IChp bmZvKS0NCj4gPmN0eHRfc2hpZnQpDQo+ID4+IC0jZGVmaW5lIGFzaWQyaWR4KGluZm8sIGFzaWQp CQkoKChhc2lkKSAmIH5BU0lEX01BU0soaW5mbykpID4+IChpbmZvKS0NCj4gPmN0eHRfc2hpZnQp DQo+ID4+IC0jZGVmaW5lIGlkeDJhc2lkKGluZm8sIGlkeCkJCSgoKGlkeCkgPDwgKGluZm8pLT5j dHh0X3NoaWZ0KSAmDQo+IH5BU0lEX01BU0soaW5mbykpDQo+ID4+ICtzdHJ1Y3QgYXNpZF9pbmZv IGFzaWRfaW5mbzsNCj4gPj4NCj4gPj4gICAvKiBHZXQgdGhlIEFTSURCaXRzIHN1cHBvcnRlZCBi eSB0aGUgY3VycmVudCBDUFUgKi8NCj4gPj4gICBzdGF0aWMgdTMyIGdldF9jcHVfYXNpZF9iaXRz KHZvaWQpDQo+ID4+IEBAIC0xMDIsMTc4ICs3Nyw2IEBAIHZvaWQgdmVyaWZ5X2NwdV9hc2lkX2Jp dHModm9pZCkNCj4gPj4gICAJfQ0KPiA+PiAgIH0NCj4gPj4NCj4gPj4gLXN0YXRpYyB2b2lkIGZs dXNoX2NvbnRleHQoc3RydWN0IGFzaWRfaW5mbyAqaW5mbykNCj4gPj4gLXsNCj4gPj4gLQlpbnQg aTsNCj4gPj4gLQl1NjQgYXNpZDsNCj4gPj4gLQ0KPiA+PiAtCS8qIFVwZGF0ZSB0aGUgbGlzdCBv ZiByZXNlcnZlZCBBU0lEcyBhbmQgdGhlIEFTSUQgYml0bWFwLiAqLw0KPiA+PiAtCWJpdG1hcF9j bGVhcihpbmZvLT5tYXAsIDAsIE5VTV9DVFhUX0FTSURTKGluZm8pKTsNCj4gPj4gLQ0KPiA+PiAt CWZvcl9lYWNoX3Bvc3NpYmxlX2NwdShpKSB7DQo+ID4+IC0JCWFzaWQgPSBhdG9taWM2NF94Y2hn X3JlbGF4ZWQoJmFjdGl2ZV9hc2lkKGluZm8sIGkpLCAwKTsNCj4gPj4gLQkJLyoNCj4gPj4gLQkJ ICogSWYgdGhpcyBDUFUgaGFzIGFscmVhZHkgYmVlbiB0aHJvdWdoIGENCj4gPj4gLQkJICogcm9s bG92ZXIsIGJ1dCBoYXNuJ3QgcnVuIGFub3RoZXIgdGFzayBpbg0KPiA+PiAtCQkgKiB0aGUgbWVh bnRpbWUsIHdlIG11c3QgcHJlc2VydmUgaXRzIHJlc2VydmVkDQo+ID4+IC0JCSAqIEFTSUQsIGFz IHRoaXMgaXMgdGhlIG9ubHkgdHJhY2Ugd2UgaGF2ZSBvZg0KPiA+PiAtCQkgKiB0aGUgcHJvY2Vz cyBpdCBpcyBzdGlsbCBydW5uaW5nLg0KPiA+PiAtCQkgKi8NCj4gPj4gLQkJaWYgKGFzaWQgPT0g MCkNCj4gPj4gLQkJCWFzaWQgPSByZXNlcnZlZF9hc2lkKGluZm8sIGkpOw0KPiA+PiAtCQlfX3Nl 
dF9iaXQoYXNpZDJpZHgoaW5mbywgYXNpZCksIGluZm8tPm1hcCk7DQo+ID4+IC0JCXJlc2VydmVk X2FzaWQoaW5mbywgaSkgPSBhc2lkOw0KPiA+PiAtCX0NCj4gPj4gLQ0KPiA+PiAtCS8qDQo+ID4+ IC0JICogUXVldWUgYSBUTEIgaW52YWxpZGF0aW9uIGZvciBlYWNoIENQVSB0byBwZXJmb3JtIG9u IG5leHQNCj4gPj4gLQkgKiBjb250ZXh0LXN3aXRjaA0KPiA+PiAtCSAqLw0KPiA+PiAtCWNwdW1h c2tfc2V0YWxsKCZpbmZvLT5mbHVzaF9wZW5kaW5nKTsNCj4gPj4gLX0NCj4gPj4gLQ0KPiA+PiAt c3RhdGljIGJvb2wgY2hlY2tfdXBkYXRlX3Jlc2VydmVkX2FzaWQoc3RydWN0IGFzaWRfaW5mbyAq aW5mbywgdTY0IGFzaWQsDQo+ID4+IC0JCQkJICAgICAgIHU2NCBuZXdhc2lkKQ0KPiA+PiAtew0K PiA+PiAtCWludCBjcHU7DQo+ID4+IC0JYm9vbCBoaXQgPSBmYWxzZTsNCj4gPj4gLQ0KPiA+PiAt CS8qDQo+ID4+IC0JICogSXRlcmF0ZSBvdmVyIHRoZSBzZXQgb2YgcmVzZXJ2ZWQgQVNJRHMgbG9v a2luZyBmb3IgYSBtYXRjaC4NCj4gPj4gLQkgKiBJZiB3ZSBmaW5kIG9uZSwgdGhlbiB3ZSBjYW4g dXBkYXRlIG91ciBtbSB0byB1c2UgbmV3YXNpZA0KPiA+PiAtCSAqIChpLmUuIHRoZSBzYW1lIEFT SUQgaW4gdGhlIGN1cnJlbnQgZ2VuZXJhdGlvbikgYnV0IHdlIGNhbid0DQo+ID4+IC0JICogZXhp dCB0aGUgbG9vcCBlYXJseSwgc2luY2Ugd2UgbmVlZCB0byBlbnN1cmUgdGhhdCBhbGwgY29waWVz DQo+ID4+IC0JICogb2YgdGhlIG9sZCBBU0lEIGFyZSB1cGRhdGVkIHRvIHJlZmxlY3QgdGhlIG1t LiBGYWlsdXJlIHRvIGRvDQo+ID4+IC0JICogc28gY291bGQgcmVzdWx0IGluIHVzIG1pc3Npbmcg dGhlIHJlc2VydmVkIEFTSUQgaW4gYSBmdXR1cmUNCj4gPj4gLQkgKiBnZW5lcmF0aW9uLg0KPiA+ PiAtCSAqLw0KPiA+PiAtCWZvcl9lYWNoX3Bvc3NpYmxlX2NwdShjcHUpIHsNCj4gPj4gLQkJaWYg KHJlc2VydmVkX2FzaWQoaW5mbywgY3B1KSA9PSBhc2lkKSB7DQo+ID4+IC0JCQloaXQgPSB0cnVl Ow0KPiA+PiAtCQkJcmVzZXJ2ZWRfYXNpZChpbmZvLCBjcHUpID0gbmV3YXNpZDsNCj4gPj4gLQkJ fQ0KPiA+PiAtCX0NCj4gPj4gLQ0KPiA+PiAtCXJldHVybiBoaXQ7DQo+ID4+IC19DQo+ID4+IC0N Cj4gPj4gLXN0YXRpYyB1NjQgbmV3X2NvbnRleHQoc3RydWN0IGFzaWRfaW5mbyAqaW5mbywgYXRv bWljNjRfdCAqcGFzaWQpDQo+ID4+IC17DQo+ID4+IC0Jc3RhdGljIHUzMiBjdXJfaWR4ID0gMTsN Cj4gPj4gLQl1NjQgYXNpZCA9IGF0b21pYzY0X3JlYWQocGFzaWQpOw0KPiA+PiAtCXU2NCBnZW5l cmF0aW9uID0gYXRvbWljNjRfcmVhZCgmaW5mby0+Z2VuZXJhdGlvbik7DQo+ID4+IC0NCj4gPj4g LQlpZiAoYXNpZCAhPSAwKSB7DQo+ID4+IC0JCXU2NCBuZXdhc2lkID0gZ2VuZXJhdGlvbiB8IChh c2lkICYgfkFTSURfTUFTSyhpbmZvKSk7DQo+ID4+IC0NCj4gPj4gLQkJLyoNCj4gPj4gLQkJICog SWYgb3VyIGN1cnJlbnQgQVNJRCB3YXMgYWN0aXZlIGR1cmluZyBhIHJvbGxvdmVyLCB3ZQ0KPiA+ PiAtCQkgKiBjYW4gY29udGludWUgdG8gdXNlIGl0IGFuZCB0aGlzIHdhcyBqdXN0IGEgZmFsc2Ug YWxhcm0uDQo+ID4+IC0JCSAqLw0KPiA+PiAtCQlpZiAoY2hlY2tfdXBkYXRlX3Jlc2VydmVkX2Fz aWQoaW5mbywgYXNpZCwgbmV3YXNpZCkpDQo+ID4+IC0JCQlyZXR1cm4gbmV3YXNpZDsNCj4gPj4g LQ0KPiA+PiAtCQkvKg0KPiA+PiAtCQkgKiBXZSBoYWQgYSB2YWxpZCBBU0lEIGluIGEgcHJldmlv dXMgbGlmZSwgc28gdHJ5IHRvIHJlLXVzZQ0KPiA+PiAtCQkgKiBpdCBpZiBwb3NzaWJsZS4NCj4g Pj4gLQkJICovDQo+ID4+IC0JCWlmICghX190ZXN0X2FuZF9zZXRfYml0KGFzaWQyaWR4KGluZm8s IGFzaWQpLCBpbmZvLT5tYXApKQ0KPiA+PiAtCQkJcmV0dXJuIG5ld2FzaWQ7DQo+ID4+IC0JfQ0K PiA+PiAtDQo+ID4+IC0JLyoNCj4gPj4gLQkgKiBBbGxvY2F0ZSBhIGZyZWUgQVNJRC4gSWYgd2Ug Y2FuJ3QgZmluZCBvbmUsIHRha2UgYSBub3RlIG9mIHRoZQ0KPiA+PiAtCSAqIGN1cnJlbnRseSBh Y3RpdmUgQVNJRHMgYW5kIG1hcmsgdGhlIFRMQnMgYXMgcmVxdWlyaW5nIGZsdXNoZXMuICBXZQ0K PiA+PiAtCSAqIGFsd2F5cyBjb3VudCBmcm9tIEFTSUQgIzIgKGluZGV4IDEpLCBhcyB3ZSB1c2Ug QVNJRCAjMCB3aGVuIHNldHRpbmcNCj4gPj4gLQkgKiBhIHJlc2VydmVkIFRUQlIwIGZvciB0aGUg aW5pdF9tbSBhbmQgd2UgYWxsb2NhdGUgQVNJRHMgaW4gZXZlbi9vZGQNCj4gPj4gLQkgKiBwYWly cy4NCj4gPj4gLQkgKi8NCj4gPj4gLQlhc2lkID0gZmluZF9uZXh0X3plcm9fYml0KGluZm8tPm1h cCwgTlVNX0NUWFRfQVNJRFMoaW5mbyksIGN1cl9pZHgpOw0KPiA+PiAtCWlmIChhc2lkICE9IE5V TV9DVFhUX0FTSURTKGluZm8pKQ0KPiA+PiAtCQlnb3RvIHNldF9hc2lkOw0KPiA+PiAtDQo+ID4+ IC0JLyogV2UncmUgb3V0IG9mIEFTSURzLCBzbyBpbmNyZW1lbnQgdGhlIGdsb2JhbCBnZW5lcmF0 aW9uIGNvdW50ICovDQo+ID4+IC0JZ2VuZXJhdGlvbiA9IGF0b21pYzY0X2FkZF9yZXR1cm5fcmVs YXhlZChBU0lEX0ZJUlNUX1ZFUlNJT04oaW5mbyksDQo+ID4+IC0JCQkJCQkgJmluZm8tPmdlbmVy 
YXRpb24pOw0KPiA+PiAtCWZsdXNoX2NvbnRleHQoaW5mbyk7DQo+ID4+IC0NCj4gPj4gLQkvKiBX ZSBoYXZlIG1vcmUgQVNJRHMgdGhhbiBDUFVzLCBzbyB0aGlzIHdpbGwgYWx3YXlzIHN1Y2NlZWQg Ki8NCj4gPj4gLQlhc2lkID0gZmluZF9uZXh0X3plcm9fYml0KGluZm8tPm1hcCwgTlVNX0NUWFRf QVNJRFMoaW5mbyksIDEpOw0KPiA+PiAtDQo+ID4+IC1zZXRfYXNpZDoNCj4gPj4gLQlfX3NldF9i aXQoYXNpZCwgaW5mby0+bWFwKTsNCj4gPj4gLQljdXJfaWR4ID0gYXNpZDsNCj4gPj4gLQlyZXR1 cm4gaWR4MmFzaWQoaW5mbywgYXNpZCkgfCBnZW5lcmF0aW9uOw0KPiA+PiAtfQ0KPiA+PiAtDQo+ ID4+IC1zdGF0aWMgdm9pZCBhc2lkX25ld19jb250ZXh0KHN0cnVjdCBhc2lkX2luZm8gKmluZm8s IGF0b21pYzY0X3QgKnBhc2lkLA0KPiA+PiAtCQkJICAgICB1bnNpZ25lZCBpbnQgY3B1KTsNCj4g Pj4gLQ0KPiA+PiAtLyoNCj4gPj4gLSAqIENoZWNrIHRoZSBBU0lEIGlzIHN0aWxsIHZhbGlkIGZv ciB0aGUgY29udGV4dC4gSWYgbm90IGdlbmVyYXRlIGEgbmV3IEFTSUQuDQo+ID4+IC0gKg0KPiA+ PiAtICogQHBhc2lkOiBQb2ludGVyIHRvIHRoZSBjdXJyZW50IEFTSUQgYmF0Y2gNCj4gPj4gLSAq IEBjcHU6IGN1cnJlbnQgQ1BVIElELiBNdXN0IGhhdmUgYmVlbiBhY3F1aXJlZCB0aHJvdWdodCBn ZXRfY3B1KCkNCj4gPj4gLSAqLw0KPiA+PiAtc3RhdGljIHZvaWQgYXNpZF9jaGVja19jb250ZXh0 KHN0cnVjdCBhc2lkX2luZm8gKmluZm8sDQo+ID4+IC0JCQkgICAgICAgYXRvbWljNjRfdCAqcGFz aWQsIHVuc2lnbmVkIGludCBjcHUpDQo+ID4+IC17DQo+ID4+IC0JdTY0IGFzaWQsIG9sZF9hY3Rp dmVfYXNpZDsNCj4gPj4gLQ0KPiA+PiAtCWFzaWQgPSBhdG9taWM2NF9yZWFkKHBhc2lkKTsNCj4g Pj4gLQ0KPiA+PiAtCS8qDQo+ID4+IC0JICogVGhlIG1lbW9yeSBvcmRlcmluZyBoZXJlIGlzIHN1 YnRsZS4NCj4gPj4gLQkgKiBJZiBvdXIgYWN0aXZlX2FzaWQgaXMgbm9uLXplcm8gYW5kIHRoZSBB U0lEIG1hdGNoZXMgdGhlIGN1cnJlbnQNCj4gPj4gLQkgKiBnZW5lcmF0aW9uLCB0aGVuIHdlIHVw ZGF0ZSB0aGUgYWN0aXZlX2FzaWQgZW50cnkgd2l0aCBhIHJlbGF4ZWQNCj4gPj4gLQkgKiBjbXB4 Y2hnLiBSYWNpbmcgd2l0aCBhIGNvbmN1cnJlbnQgcm9sbG92ZXIgbWVhbnMgdGhhdCBlaXRoZXI6 DQo+ID4+IC0JICoNCj4gPj4gLQkgKiAtIFdlIGdldCBhIHplcm8gYmFjayBmcm9tIHRoZSBjbXB4 Y2hnIGFuZCBlbmQgdXAgd2FpdGluZyBvbiB0aGUNCj4gPj4gLQkgKiAgIGxvY2suIFRha2luZyB0 aGUgbG9jayBzeW5jaHJvbmlzZXMgd2l0aCB0aGUgcm9sbG92ZXIgYW5kIHNvDQo+ID4+IC0JICog ICB3ZSBhcmUgZm9yY2VkIHRvIHNlZSB0aGUgdXBkYXRlZCBnZW5lcmF0aW9uLg0KPiA+PiAtCSAq DQo+ID4+IC0JICogLSBXZSBnZXQgYSB2YWxpZCBBU0lEIGJhY2sgZnJvbSB0aGUgY21weGNoZywg d2hpY2ggbWVhbnMgdGhlDQo+ID4+IC0JICogICByZWxheGVkIHhjaGcgaW4gZmx1c2hfY29udGV4 dCB3aWxsIHRyZWF0IHVzIGFzIHJlc2VydmVkDQo+ID4+IC0JICogICBiZWNhdXNlIGF0b21pYyBS bVdzIGFyZSB0b3RhbGx5IG9yZGVyZWQgZm9yIGEgZ2l2ZW4gbG9jYXRpb24uDQo+ID4+IC0JICov DQo+ID4+IC0Jb2xkX2FjdGl2ZV9hc2lkID0gYXRvbWljNjRfcmVhZCgmYWN0aXZlX2FzaWQoaW5m bywgY3B1KSk7DQo+ID4+IC0JaWYgKG9sZF9hY3RpdmVfYXNpZCAmJg0KPiA+PiAtCSAgICAhKChh c2lkIF4gYXRvbWljNjRfcmVhZCgmaW5mby0+Z2VuZXJhdGlvbikpID4+IGluZm8tPmJpdHMpICYm DQo+ID4+IC0JICAgIGF0b21pYzY0X2NtcHhjaGdfcmVsYXhlZCgmYWN0aXZlX2FzaWQoaW5mbywg Y3B1KSwNCj4gPj4gLQkJCQkgICAgIG9sZF9hY3RpdmVfYXNpZCwgYXNpZCkpDQo+ID4+IC0JCXJl dHVybjsNCj4gPj4gLQ0KPiA+PiAtCWFzaWRfbmV3X2NvbnRleHQoaW5mbywgcGFzaWQsIGNwdSk7 DQo+ID4+IC19DQo+ID4+IC0NCj4gPj4gLS8qDQo+ID4+IC0gKiBHZW5lcmF0ZSBhIG5ldyBBU0lE IGZvciB0aGUgY29udGV4dC4NCj4gPj4gLSAqDQo+ID4+IC0gKiBAcGFzaWQ6IFBvaW50ZXIgdG8g dGhlIGN1cnJlbnQgQVNJRCBiYXRjaCBhbGxvY2F0ZWQuIEl0IHdpbGwgYmUgdXBkYXRlZA0KPiA+ PiAtICogd2l0aCB0aGUgbmV3IEFTSUQgYmF0Y2guDQo+ID4+IC0gKiBAY3B1OiBjdXJyZW50IENQ VSBJRC4gTXVzdCBoYXZlIGJlZW4gYWNxdWlyZWQgdGhyb3VnaCBnZXRfY3B1KCkNCj4gPj4gLSAq Lw0KPiA+PiAtc3RhdGljIHZvaWQgYXNpZF9uZXdfY29udGV4dChzdHJ1Y3QgYXNpZF9pbmZvICpp bmZvLCBhdG9taWM2NF90ICpwYXNpZCwNCj4gPj4gLQkJCSAgICAgdW5zaWduZWQgaW50IGNwdSkN Cj4gPj4gLXsNCj4gPj4gLQl1bnNpZ25lZCBsb25nIGZsYWdzOw0KPiA+PiAtCXU2NCBhc2lkOw0K PiA+PiAtDQo+ID4+IC0JcmF3X3NwaW5fbG9ja19pcnFzYXZlKCZpbmZvLT5sb2NrLCBmbGFncyk7 DQo+ID4+IC0JLyogQ2hlY2sgdGhhdCBvdXIgQVNJRCBiZWxvbmdzIHRvIHRoZSBjdXJyZW50IGdl bmVyYXRpb24uICovDQo+ID4+IC0JYXNpZCA9IGF0b21pYzY0X3JlYWQocGFzaWQpOw0KPiA+PiAt 
CWlmICgoYXNpZCBeIGF0b21pYzY0X3JlYWQoJmluZm8tPmdlbmVyYXRpb24pKSA+PiBpbmZvLT5i aXRzKSB7DQo+ID4+IC0JCWFzaWQgPSBuZXdfY29udGV4dChpbmZvLCBwYXNpZCk7DQo+ID4+IC0J CWF0b21pYzY0X3NldChwYXNpZCwgYXNpZCk7DQo+ID4+IC0JfQ0KPiA+PiAtDQo+ID4+IC0JaWYg KGNwdW1hc2tfdGVzdF9hbmRfY2xlYXJfY3B1KGNwdSwgJmluZm8tPmZsdXNoX3BlbmRpbmcpKQ0K PiA+PiAtCQlpbmZvLT5mbHVzaF9jcHVfY3R4dF9jYigpOw0KPiA+PiAtDQo+ID4+IC0JYXRvbWlj NjRfc2V0KCZhY3RpdmVfYXNpZChpbmZvLCBjcHUpLCBhc2lkKTsNCj4gPj4gLQlyYXdfc3Bpbl91 bmxvY2tfaXJxcmVzdG9yZSgmaW5mby0+bG9jaywgZmxhZ3MpOw0KPiA+PiAtfQ0KPiA+PiAtDQo+ ID4+ICAgdm9pZCBjaGVja19hbmRfc3dpdGNoX2NvbnRleHQoc3RydWN0IG1tX3N0cnVjdCAqbW0s IHVuc2lnbmVkIGludCBjcHUpDQo+ID4+ICAgew0KPiA+PiAgIAlpZiAoc3lzdGVtX3N1cHBvcnRz X2NucCgpKQ0KPiA+PiBAQCAtMzA1LDM4ICsxMDgsNiBAQCBzdGF0aWMgdm9pZCBhc2lkX2ZsdXNo X2NwdV9jdHh0KHZvaWQpDQo+ID4+ICAgCWxvY2FsX2ZsdXNoX3RsYl9hbGwoKTsNCj4gPj4gICB9 DQo+ID4+DQo+ID4+IC0vKg0KPiA+PiAtICogSW5pdGlhbGl6ZSB0aGUgQVNJRCBhbGxvY2F0b3IN Cj4gPj4gLSAqDQo+ID4+IC0gKiBAaW5mbzogUG9pbnRlciB0byB0aGUgYXNpZCBhbGxvY2F0b3Ig c3RydWN0dXJlDQo+ID4+IC0gKiBAYml0czogTnVtYmVyIG9mIEFTSURzIGF2YWlsYWJsZQ0KPiA+ PiAtICogQGFzaWRfcGVyX2N0eHQ6IE51bWJlciBvZiBBU0lEcyB0byBhbGxvY2F0ZSBwZXItY29u dGV4dC4gQVNJRHMgYXJlDQo+ID4+IC0gKiBhbGxvY2F0ZWQgY29udGlndW91c2x5IGZvciBhIGdp dmVuIGNvbnRleHQuIFRoaXMgdmFsdWUgc2hvdWxkIGJlIGEgcG93ZXINCj4gb2YNCj4gPj4gLSAq IDIuDQo+ID4+IC0gKi8NCj4gPj4gLXN0YXRpYyBpbnQgYXNpZF9hbGxvY2F0b3JfaW5pdChzdHJ1 Y3QgYXNpZF9pbmZvICppbmZvLA0KPiA+PiAtCQkJICAgICAgIHUzMiBiaXRzLCB1bnNpZ25lZCBp bnQgYXNpZF9wZXJfY3R4dCwNCj4gPj4gLQkJCSAgICAgICB2b2lkICgqZmx1c2hfY3B1X2N0eHRf Y2IpKHZvaWQpKQ0KPiA+PiAtew0KPiA+PiAtCWluZm8tPmJpdHMgPSBiaXRzOw0KPiA+PiAtCWlu Zm8tPmN0eHRfc2hpZnQgPSBpbG9nMihhc2lkX3Blcl9jdHh0KTsNCj4gPj4gLQlpbmZvLT5mbHVz aF9jcHVfY3R4dF9jYiA9IGZsdXNoX2NwdV9jdHh0X2NiOw0KPiA+PiAtCS8qDQo+ID4+IC0JICog RXhwZWN0IGFsbG9jYXRpb24gYWZ0ZXIgcm9sbG92ZXIgdG8gZmFpbCBpZiB3ZSBkb24ndCBoYXZl IGF0IGxlYXN0DQo+ID4+IC0JICogb25lIG1vcmUgQVNJRCB0aGFuIENQVXMuIEFTSUQgIzAgaXMg YWx3YXlzIHJlc2VydmVkLg0KPiA+PiAtCSAqLw0KPiA+PiAtCVdBUk5fT04oTlVNX0NUWFRfQVNJ RFMoaW5mbykgLSAxIDw9IG51bV9wb3NzaWJsZV9jcHVzKCkpOw0KPiA+PiAtCWF0b21pYzY0X3Nl dCgmaW5mby0+Z2VuZXJhdGlvbiwgQVNJRF9GSVJTVF9WRVJTSU9OKGluZm8pKTsNCj4gPj4gLQlp bmZvLT5tYXAgPSBrY2FsbG9jKEJJVFNfVE9fTE9OR1MoTlVNX0NUWFRfQVNJRFMoaW5mbykpLA0K PiA+PiAtCQkJICAgIHNpemVvZigqaW5mby0+bWFwKSwgR0ZQX0tFUk5FTCk7DQo+ID4+IC0JaWYg KCFpbmZvLT5tYXApDQo+ID4+IC0JCXJldHVybiAtRU5PTUVNOw0KPiA+PiAtDQo+ID4+IC0JcmF3 X3NwaW5fbG9ja19pbml0KCZpbmZvLT5sb2NrKTsNCj4gPj4gLQ0KPiA+PiAtCXJldHVybiAwOw0K PiA+PiAtfQ0KPiA+PiAtDQo+ID4+ICAgc3RhdGljIGludCBhc2lkc19pbml0KHZvaWQpDQo+ID4+ ICAgew0KPiA+PiAgIAl1MzIgYml0cyA9IGdldF9jcHVfYXNpZF9iaXRzKCk7DQo+ID4+IEBAIC0z NDQsNyArMTE1LDcgQEAgc3RhdGljIGludCBhc2lkc19pbml0KHZvaWQpDQo+ID4+ICAgCWlmICgh YXNpZF9hbGxvY2F0b3JfaW5pdCgmYXNpZF9pbmZvLCBiaXRzLCBBU0lEX1BFUl9DT05URVhULA0K PiA+PiAgIAkJCQkgYXNpZF9mbHVzaF9jcHVfY3R4dCkpDQo+ID4+ICAgCQlwYW5pYygiVW5hYmxl IHRvIGluaXRpYWxpemUgQVNJRCBhbGxvY2F0b3IgZm9yICVsdSBBU0lEc1xuIiwNCj4gPj4gLQkJ ICAgICAgMVVMIDw8IGJpdHMpOw0KPiA+PiArCQkgICAgICBOVU1fQVNJRFMoJmFzaWRfaW5mbykp Ow0KPiA+Pg0KPiA+PiAgIAlhc2lkX2luZm8uYWN0aXZlID0gJmFjdGl2ZV9hc2lkczsNCj4gPj4g ICAJYXNpZF9pbmZvLnJlc2VydmVkID0gJnJlc2VydmVkX2FzaWRzOw0KPiA+Pg0K From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=DKIM_SIGNED,DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,T_DKIMWL_WL_HIGH autolearn=unavailable autolearn_force=no 
version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D92F8C43218 for ; Tue, 11 Jun 2019 01:57:03 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id AB7302086D for ; Tue, 11 Jun 2019 01:57:03 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="PNX69sFV"; dkim=fail reason="signature verification failed" (1024-bit key) header.d=garyguo.net header.i=@garyguo.net header.b="AbpUSdv2" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org AB7302086D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=garyguo.net Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-riscv-bounces+infradead-linux-riscv=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:In-Reply-To:References: Message-ID:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=AOgk5TrHIRSQDjmbi7Kbu+0x0OquUb/Ap5mGIPwp9TI=; b=PNX69sFVW84GtD 2n16ENRnkeELtwyvmWPOGtAxGQYDXM8nwYyXGJwEEmzrTpVwRUGO+VST++fwQLep0Zn4HhAKkjXwQ SW6tvop/UiKev8zW3WwLBYjlgG4IZ/ExgMhBA03rme7sbjFN8ZlrGyzLeNRJG9e/4xI+HBGIV+NiI CwTJw+kQzc7Lb+HdYwwcVZzztJHp4eHDCZIx0Atr0oaNeVC6Wvol8z6MOrhHZBR3uOWTEE94cNKwt r3m+6ys+O7AOfZWEJh8ZN/2TXcItewm8jownNfGjMPfmM/PDhClRSWUt+2D4GjPdnIsEkGLlDE+mC AWHiC7gLdGe3u2ImkoQA==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1haW1z-0004Cs-FK; Tue, 11 Jun 2019 01:56:59 +0000 Received: from mail-eopbgr100102.outbound.protection.outlook.com ([40.107.10.102] helo=GBR01-LO2-obe.outbound.protection.outlook.com) by bombadil.infradead.org with esmtps (Exim 4.92 #3 (Red Hat Linux)) id 1haW1Y-0003tw-7W; Tue, 11 Jun 2019 01:56:35 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=garyguo.net; s=selector1; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=VPCibjSLFJ8QPugcKS3wvBhWTo2k1dG0NzaEWarL6Bg=; b=AbpUSdv2Vc6x2t0TVkfgwMKUAa7ljFJU5fC5OG3+ldeABsga6+9OJjxE9IoAG+tmbFUT3ZBLMtCgsfiPBrE1MZeilVnmggqrJtgSA2INvNZhO2yayPeQao+ea6prXzplnDlDAB27qKFGwa34pWf4fLQjEa88KFcrcUGkUkf43jQ= Received: from LO2P265MB0847.GBRP265.PROD.OUTLOOK.COM (20.176.139.20) by LO2P265MB0464.GBRP265.PROD.OUTLOOK.COM (10.166.98.138) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.1965.12; Tue, 11 Jun 2019 01:56:26 +0000 Received: from LO2P265MB0847.GBRP265.PROD.OUTLOOK.COM ([fe80::c4af:389:2951:fdd1]) by LO2P265MB0847.GBRP265.PROD.OUTLOOK.COM ([fe80::c4af:389:2951:fdd1%7]) with mapi id 15.20.1965.017; Tue, 11 Jun 2019 01:56:26 +0000 From: Gary Guo To: Palmer Dabbelt , "julien.grall@arm.com" Subject: RE: [PATCH RFC 11/14] arm64: Move the ASID allocator code in a separate file Thread-Topic: [PATCH RFC 11/14] arm64: Move the ASID allocator code in a separate file Thread-Index: AQHVG7+VzWoWPgdUJU+rwDAjKiBFD6aNhr6AgAgynaA= Date: Tue, 11 Jun 2019 01:56:26 +0000 Message-ID: 
References: <0dfe120b-066a-2ac8-13bc-3f5a29e2caa3@arm.com> In-Reply-To: Accept-Language: en-GB, en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: authentication-results: spf=none (sender IP is ) smtp.mailfrom=gary@garyguo.net; x-originating-ip: [2001:470:6972:501:2013:f57c:b021:47b0] x-ms-publictraffictype: Email x-ms-office365-filtering-correlation-id: 306b3dd3-aa4f-4043-5dc4-08d6ee100371 x-microsoft-antispam: BCL:0; PCL:0; RULEID:(2390118)(7020095)(4652040)(7021145)(8989299)(4534185)(7022145)(4603075)(4627221)(201702281549075)(8990200)(7048125)(7024125)(7027125)(7023125)(5600148)(711020)(4605104)(1401327)(2017052603328)(7193020); SRVR:LO2P265MB0464; x-ms-traffictypediagnostic: LO2P265MB0464: x-microsoft-antispam-prvs: x-ms-oob-tlc-oobclassifiers: OLM:3173; x-forefront-prvs: 006546F32A x-forefront-antispam-report: SFV:NSPM; SFS:(10019020)(346002)(39830400003)(376002)(366004)(396003)(136003)(13464003)(189003)(199004)(52536014)(8936002)(6436002)(229853002)(7416002)(25786009)(6116002)(76116006)(66556008)(55016002)(64756008)(81166006)(81156014)(8676002)(316002)(66476007)(4326008)(66946007)(73956011)(6246003)(5660300002)(86362001)(71200400001)(102836004)(71190400001)(2906002)(53936002)(54906003)(110136005)(74316002)(14454004)(476003)(66446008)(305945005)(99286004)(14444005)(11346002)(53946003)(9686003)(256004)(7696005)(2501003)(446003)(7736002)(33656002)(186003)(30864003)(508600001)(486006)(46003)(68736007)(53546011)(6506007)(76176011)(87944003)(579004); DIR:OUT; SFP:1102; SCL:1; SRVR:LO2P265MB0464; H:LO2P265MB0847.GBRP265.PROD.OUTLOOK.COM; FPR:; SPF:None; LANG:en; PTR:InfoNoRecords; MX:1; A:1; received-spf: None (protection.outlook.com: garyguo.net does not designate permitted sender hosts) x-ms-exchange-senderadcheck: 1 x-microsoft-antispam-message-info: K7LNFpzKy/fXGiteJSxdXanaSdvXFzCqXFX33QCp8PWzJGaNT/gxozwKVasYw4vV9ZJWiWJP87YqlUjzvdhhp3aG7CQIT4yPYlD9Q6IK8TnGJfm1X0aIhsvEnZB0H7Dn7+y86jslPonxeTAjuhiGmUfgj5uOuLqW7dCH+IRFBiB6xw88lccczXawh5s2oOGP/j1U40fHHMJtRKnS6PM0+M5d0kvspixzsyNMmDXuX3cUU5mWXoPg7/EwupyOv5wcsnsBcpr2D/zTpboQVmnEHWV5f3oqHc0WRRoYg9psAMqzBPWBOs2iaVYQhGjcNlcWLQw81nIXtaFmpicr6NxuE2rm0NYaaH6AUcWI/6RjFpGlP9CPhjMen+I8KcwAH9wTJc0MK6W7u2ak57Qxoa+7Hj0gXDpg7AZmQ24j8R7SFS0= MIME-Version: 1.0 X-OriginatorOrg: garyguo.net X-MS-Exchange-CrossTenant-Network-Message-Id: 306b3dd3-aa4f-4043-5dc4-08d6ee100371 X-MS-Exchange-CrossTenant-originalarrivaltime: 11 Jun 2019 01:56:26.3223 (UTC) X-MS-Exchange-CrossTenant-fromentityheader: Hosted X-MS-Exchange-CrossTenant-id: bbc898ad-b10f-4e10-8552-d9377b823d45 X-MS-Exchange-CrossTenant-mailboxtype: HOSTED X-MS-Exchange-CrossTenant-userprincipalname: gary@garyguo.net X-MS-Exchange-Transport-CrossTenantHeadersStamped: LO2P265MB0464 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190610_185632_481312_05B2C392 X-CRM114-Status: GOOD ( 30.04 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: "julien.thierry@arm.com" , "aou@eecs.berkeley.edu" , "christoffer.dall@arm.com" , "marc.zyngier@arm.com" , "catalin.marinas@arm.com" , Anup Patel , Will Deacon , "linux-kernel@vger.kernel.org" , "rppt@linux.ibm.com" , Christoph Hellwig , Atish Patra , "james.morse@arm.com" , Paul Walmsley , "linux-riscv@lists.infradead.org" , "suzuki.poulose@arm.com" , "kvmarm@lists.cs.columbia.edu" , "linux-arm-kernel@lists.infradead.org" Content-Type: text/plain; charset="us-ascii" 
Content-Transfer-Encoding: 7bit Sender: "linux-riscv" Errors-To: linux-riscv-bounces+infradead-linux-riscv=archiver.kernel.org@lists.infradead.org Hi, On RISC-V, we can only use ASID if there are more ASIDs than CPUs. If there aren't enough ASIDs (or if there is only 1), then ASID feature is disabled and 0 is used everywhere. Best, Gary > -----Original Message----- > From: Palmer Dabbelt > Sent: Wednesday, June 5, 2019 21:42 > To: julien.grall@arm.com > Cc: linux-kernel@vger.kernel.org; linux-arm-kernel@lists.infradead.org; > kvmarm@lists.cs.columbia.edu; aou@eecs.berkeley.edu; Gary Guo > ; Atish Patra ; Christoph Hellwig > ; Paul Walmsley ; > rppt@linux.ibm.com; linux-riscv@lists.infradead.org; Anup Patel > ; christoffer.dall@arm.com; james.morse@arm.com; > marc.zyngier@arm.com; julien.thierry@arm.com; suzuki.poulose@arm.com; > catalin.marinas@arm.com; Will Deacon > Subject: Re: [PATCH RFC 11/14] arm64: Move the ASID allocator code in a > separate file > > On Wed, 05 Jun 2019 09:56:03 PDT (-0700), julien.grall@arm.com wrote: > > Hi, > > > > I am CCing RISC-V folks to see if there are an interest to share the code. > > > > @RISC-V: I noticed you are discussing about importing a version of ASID > > allocator in RISC-V. At a first look, the code looks quite similar. Would the > > library below helps you? > > Thanks! I didn't look that closely at the original patches because the > argument against them was just "we don't have any way to test this". > Unfortunately, we don't have the constraint that there are more ASIDs than > CPUs > in the system. As a result I don't think we can use this ASID allocation > strategy. > > > > > Cheers, > > > > On 21/03/2019 16:36, Julien Grall wrote: > >> We will want to re-use the ASID allocator in a separate context (e.g > >> allocating VMID). So move the code in a new file. > >> > >> The function asid_check_context has been moved in the header as a static > >> inline function because we want to avoid add a branch when checking if the > >> ASID is still valid. > >> > >> Signed-off-by: Julien Grall > >> > >> --- > >> > >> This code will be used in the virt code for allocating VMID. I am not > >> entirely sure where to place it. Lib could potentially be a good place but I > >> am not entirely convinced the algo as it is could be used by other > >> architecture. > >> > >> Looking at x86, it seems that it will not be possible to re-use because > >> the number of PCID (aka ASID) could be smaller than the number of CPUs. > >> See commit message 10af6235e0d327d42e1bad974385197817923dc1 > "x86/mm: > >> Implement PCID based optimization: try to preserve old TLB entries using > >> PCI". 
> >> --- > >> arch/arm64/include/asm/asid.h | 77 ++++++++++++++ > >> arch/arm64/lib/Makefile | 2 + > >> arch/arm64/lib/asid.c | 185 +++++++++++++++++++++++++++++++++ > >> arch/arm64/mm/context.c | 235 +----------------------------------------- > >> 4 files changed, 267 insertions(+), 232 deletions(-) > >> create mode 100644 arch/arm64/include/asm/asid.h > >> create mode 100644 arch/arm64/lib/asid.c > >> > >> diff --git a/arch/arm64/include/asm/asid.h b/arch/arm64/include/asm/asid.h > >> new file mode 100644 > >> index 000000000000..bb62b587f37f > >> --- /dev/null > >> +++ b/arch/arm64/include/asm/asid.h > >> @@ -0,0 +1,77 @@ > >> +/* SPDX-License-Identifier: GPL-2.0 */ > >> +#ifndef __ASM_ASM_ASID_H > >> +#define __ASM_ASM_ASID_H > >> + > >> +#include > >> +#include > >> +#include > >> +#include > >> +#include > >> + > >> +struct asid_info > >> +{ > >> + atomic64_t generation; > >> + unsigned long *map; > >> + atomic64_t __percpu *active; > >> + u64 __percpu *reserved; > >> + u32 bits; > >> + /* Lock protecting the structure */ > >> + raw_spinlock_t lock; > >> + /* Which CPU requires context flush on next call */ > >> + cpumask_t flush_pending; > >> + /* Number of ASID allocated by context (shift value) */ > >> + unsigned int ctxt_shift; > >> + /* Callback to locally flush the context. */ > >> + void (*flush_cpu_ctxt_cb)(void); > >> +}; > >> + > >> +#define NUM_ASIDS(info) (1UL << ((info)->bits)) > >> +#define NUM_CTXT_ASIDS(info) (NUM_ASIDS(info) >> (info)- > >ctxt_shift) > >> + > >> +#define active_asid(info, cpu) *per_cpu_ptr((info)->active, cpu) > >> + > >> +void asid_new_context(struct asid_info *info, atomic64_t *pasid, > >> + unsigned int cpu); > >> + > >> +/* > >> + * Check the ASID is still valid for the context. If not generate a new ASID. > >> + * > >> + * @pasid: Pointer to the current ASID batch > >> + * @cpu: current CPU ID. Must have been acquired throught get_cpu() > >> + */ > >> +static inline void asid_check_context(struct asid_info *info, > >> + atomic64_t *pasid, unsigned int cpu) > >> +{ > >> + u64 asid, old_active_asid; > >> + > >> + asid = atomic64_read(pasid); > >> + > >> + /* > >> + * The memory ordering here is subtle. > >> + * If our active_asid is non-zero and the ASID matches the current > >> + * generation, then we update the active_asid entry with a relaxed > >> + * cmpxchg. Racing with a concurrent rollover means that either: > >> + * > >> + * - We get a zero back from the cmpxchg and end up waiting on the > >> + * lock. Taking the lock synchronises with the rollover and so > >> + * we are forced to see the updated generation. > >> + * > >> + * - We get a valid ASID back from the cmpxchg, which means the > >> + * relaxed xchg in flush_context will treat us as reserved > >> + * because atomic RmWs are totally ordered for a given location. 
> >> + */ > >> + old_active_asid = atomic64_read(&active_asid(info, cpu)); > >> + if (old_active_asid && > >> + !((asid ^ atomic64_read(&info->generation)) >> info->bits) && > >> + atomic64_cmpxchg_relaxed(&active_asid(info, cpu), > >> + old_active_asid, asid)) > >> + return; > >> + > >> + asid_new_context(info, pasid, cpu); > >> +} > >> + > >> +int asid_allocator_init(struct asid_info *info, > >> + u32 bits, unsigned int asid_per_ctxt, > >> + void (*flush_cpu_ctxt_cb)(void)); > >> + > >> +#endif > >> diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile > >> index 5540a1638baf..720df5ee2aa2 100644 > >> --- a/arch/arm64/lib/Makefile > >> +++ b/arch/arm64/lib/Makefile > >> @@ -5,6 +5,8 @@ lib-y := clear_user.o delay.o > copy_from_user.o \ > >> memcmp.o strcmp.o strncmp.o strlen.o strnlen.o \ > >> strchr.o strrchr.o tishift.o > >> > >> +lib-y += asid.o > >> + > >> ifeq ($(CONFIG_KERNEL_MODE_NEON), y) > >> obj-$(CONFIG_XOR_BLOCKS) += xor-neon.o > >> CFLAGS_REMOVE_xor-neon.o += -mgeneral-regs-only > >> diff --git a/arch/arm64/lib/asid.c b/arch/arm64/lib/asid.c > >> new file mode 100644 > >> index 000000000000..72b71bfb32be > >> --- /dev/null > >> +++ b/arch/arm64/lib/asid.c > >> @@ -0,0 +1,185 @@ > >> +// SPDX-License-Identifier: GPL-2.0 > >> +/* > >> + * Generic ASID allocator. > >> + * > >> + * Based on arch/arm/mm/context.c > >> + * > >> + * Copyright (C) 2002-2003 Deep Blue Solutions Ltd, all rights reserved. > >> + * Copyright (C) 2012 ARM Ltd. > >> + */ > >> + > >> +#include > >> + > >> +#include > >> + > >> +#define reserved_asid(info, cpu) *per_cpu_ptr((info)->reserved, cpu) > >> + > >> +#define ASID_MASK(info) (~GENMASK((info)->bits - 1, 0)) > >> +#define ASID_FIRST_VERSION(info) (1UL << ((info)->bits)) > >> + > >> +#define asid2idx(info, asid) (((asid) & ~ASID_MASK(info)) >> (info)- > >ctxt_shift) > >> +#define idx2asid(info, idx) (((idx) << (info)->ctxt_shift) & > ~ASID_MASK(info)) > >> + > >> +static void flush_context(struct asid_info *info) > >> +{ > >> + int i; > >> + u64 asid; > >> + > >> + /* Update the list of reserved ASIDs and the ASID bitmap. */ > >> + bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info)); > >> + > >> + for_each_possible_cpu(i) { > >> + asid = atomic64_xchg_relaxed(&active_asid(info, i), 0); > >> + /* > >> + * If this CPU has already been through a > >> + * rollover, but hasn't run another task in > >> + * the meantime, we must preserve its reserved > >> + * ASID, as this is the only trace we have of > >> + * the process it is still running. > >> + */ > >> + if (asid == 0) > >> + asid = reserved_asid(info, i); > >> + __set_bit(asid2idx(info, asid), info->map); > >> + reserved_asid(info, i) = asid; > >> + } > >> + > >> + /* > >> + * Queue a TLB invalidation for each CPU to perform on next > >> + * context-switch > >> + */ > >> + cpumask_setall(&info->flush_pending); > >> +} > >> + > >> +static bool check_update_reserved_asid(struct asid_info *info, u64 asid, > >> + u64 newasid) > >> +{ > >> + int cpu; > >> + bool hit = false; > >> + > >> + /* > >> + * Iterate over the set of reserved ASIDs looking for a match. > >> + * If we find one, then we can update our mm to use newasid > >> + * (i.e. the same ASID in the current generation) but we can't > >> + * exit the loop early, since we need to ensure that all copies > >> + * of the old ASID are updated to reflect the mm. Failure to do > >> + * so could result in us missing the reserved ASID in a future > >> + * generation. 
> >> + */ > >> + for_each_possible_cpu(cpu) { > >> + if (reserved_asid(info, cpu) == asid) { > >> + hit = true; > >> + reserved_asid(info, cpu) = newasid; > >> + } > >> + } > >> + > >> + return hit; > >> +} > >> + > >> +static u64 new_context(struct asid_info *info, atomic64_t *pasid) > >> +{ > >> + static u32 cur_idx = 1; > >> + u64 asid = atomic64_read(pasid); > >> + u64 generation = atomic64_read(&info->generation); > >> + > >> + if (asid != 0) { > >> + u64 newasid = generation | (asid & ~ASID_MASK(info)); > >> + > >> + /* > >> + * If our current ASID was active during a rollover, we > >> + * can continue to use it and this was just a false alarm. > >> + */ > >> + if (check_update_reserved_asid(info, asid, newasid)) > >> + return newasid; > >> + > >> + /* > >> + * We had a valid ASID in a previous life, so try to re-use > >> + * it if possible. > >> + */ > >> + if (!__test_and_set_bit(asid2idx(info, asid), info->map)) > >> + return newasid; > >> + } > >> + > >> + /* > >> + * Allocate a free ASID. If we can't find one, take a note of the > >> + * currently active ASIDs and mark the TLBs as requiring flushes. We > >> + * always count from ASID #2 (index 1), as we use ASID #0 when setting > >> + * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd > >> + * pairs. > >> + */ > >> + asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), cur_idx); > >> + if (asid != NUM_CTXT_ASIDS(info)) > >> + goto set_asid; > >> + > >> + /* We're out of ASIDs, so increment the global generation count */ > >> + generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info), > >> + &info->generation); > >> + flush_context(info); > >> + > >> + /* We have more ASIDs than CPUs, so this will always succeed */ > >> + asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1); > >> + > >> +set_asid: > >> + __set_bit(asid, info->map); > >> + cur_idx = asid; > >> + return idx2asid(info, asid) | generation; > >> +} > >> + > >> +/* > >> + * Generate a new ASID for the context. > >> + * > >> + * @pasid: Pointer to the current ASID batch allocated. It will be updated > >> + * with the new ASID batch. > >> + * @cpu: current CPU ID. Must have been acquired through get_cpu() > >> + */ > >> +void asid_new_context(struct asid_info *info, atomic64_t *pasid, > >> + unsigned int cpu) > >> +{ > >> + unsigned long flags; > >> + u64 asid; > >> + > >> + raw_spin_lock_irqsave(&info->lock, flags); > >> + /* Check that our ASID belongs to the current generation. */ > >> + asid = atomic64_read(pasid); > >> + if ((asid ^ atomic64_read(&info->generation)) >> info->bits) { > >> + asid = new_context(info, pasid); > >> + atomic64_set(pasid, asid); > >> + } > >> + > >> + if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending)) > >> + info->flush_cpu_ctxt_cb(); > >> + > >> + atomic64_set(&active_asid(info, cpu), asid); > >> + raw_spin_unlock_irqrestore(&info->lock, flags); > >> +} > >> + > >> +/* > >> + * Initialize the ASID allocator > >> + * > >> + * @info: Pointer to the asid allocator structure > >> + * @bits: Number of ASIDs available > >> + * @asid_per_ctxt: Number of ASIDs to allocate per-context. ASIDs are > >> + * allocated contiguously for a given context. This value should be a power > of > >> + * 2. 
> >> + */
> >> +int asid_allocator_init(struct asid_info *info,
> >> +			u32 bits, unsigned int asid_per_ctxt,
> >> +			void (*flush_cpu_ctxt_cb)(void))
> >> +{
> >> +	info->bits = bits;
> >> +	info->ctxt_shift = ilog2(asid_per_ctxt);
> >> +	info->flush_cpu_ctxt_cb = flush_cpu_ctxt_cb;
> >> +	/*
> >> +	 * Expect allocation after rollover to fail if we don't have at least
> >> +	 * one more ASID than CPUs. ASID #0 is always reserved.
> >> +	 */
> >> +	WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus());
> >> +	atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
> >> +	info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
> >> +			    sizeof(*info->map), GFP_KERNEL);
> >> +	if (!info->map)
> >> +		return -ENOMEM;
> >> +
> >> +	raw_spin_lock_init(&info->lock);
> >> +
> >> +	return 0;
> >> +}
> >> diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
> >> index 678a57b77c91..95ee7711a2ef 100644
> >> --- a/arch/arm64/mm/context.c
> >> +++ b/arch/arm64/mm/context.c
> >> @@ -22,47 +22,22 @@
> >>  #include
> >>  #include
> >>
> >> +#include
> >>  #include
> >>  #include
> >>  #include
> >>  #include
> >>
> >> -struct asid_info
> >> -{
> >> -	atomic64_t	generation;
> >> -	unsigned long	*map;
> >> -	atomic64_t __percpu	*active;
> >> -	u64 __percpu		*reserved;
> >> -	u32			bits;
> >> -	raw_spinlock_t		lock;
> >> -	/* Which CPU requires context flush on next call */
> >> -	cpumask_t		flush_pending;
> >> -	/* Number of ASID allocated by context (shift value) */
> >> -	unsigned int		ctxt_shift;
> >> -	/* Callback to locally flush the context. */
> >> -	void			(*flush_cpu_ctxt_cb)(void);
> >> -} asid_info;
> >> -
> >> -#define active_asid(info, cpu)	*per_cpu_ptr((info)->active, cpu)
> >> -#define reserved_asid(info, cpu) *per_cpu_ptr((info)->reserved, cpu)
> >> -
> >>  static DEFINE_PER_CPU(atomic64_t, active_asids);
> >>  static DEFINE_PER_CPU(u64, reserved_asids);
> >>
> >> -#define ASID_MASK(info)			(~GENMASK((info)->bits - 1, 0))
> >> -#define NUM_ASIDS(info)			(1UL << ((info)->bits))
> >> -
> >> -#define ASID_FIRST_VERSION(info)	NUM_ASIDS(info)
> >> -
> >>  #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
> >>  #define ASID_PER_CONTEXT		2
> >>  #else
> >>  #define ASID_PER_CONTEXT		1
> >>  #endif
> >>
> >> -#define NUM_CTXT_ASIDS(info)	(NUM_ASIDS(info) >> (info)->ctxt_shift)
> >> -#define asid2idx(info, asid)	(((asid) & ~ASID_MASK(info)) >> (info)->ctxt_shift)
> >> -#define idx2asid(info, idx)	(((idx) << (info)->ctxt_shift) & ~ASID_MASK(info))
> >> +struct asid_info asid_info;
> >>
> >>  /* Get the ASIDBits supported by the current CPU */
> >>  static u32 get_cpu_asid_bits(void)
> >> @@ -102,178 +77,6 @@ void verify_cpu_asid_bits(void)
> >>  	}
> >>  }
> >>
> >> -static void flush_context(struct asid_info *info)
> >> -{
> >> -	int i;
> >> -	u64 asid;
> >> -
> >> -	/* Update the list of reserved ASIDs and the ASID bitmap. */
> >> -	bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info));
> >> -
> >> -	for_each_possible_cpu(i) {
> >> -		asid = atomic64_xchg_relaxed(&active_asid(info, i), 0);
> >> -		/*
> >> -		 * If this CPU has already been through a
> >> -		 * rollover, but hasn't run another task in
> >> -		 * the meantime, we must preserve its reserved
> >> -		 * ASID, as this is the only trace we have of
> >> -		 * the process it is still running.
> >> -		 */
> >> -		if (asid == 0)
> >> -			asid = reserved_asid(info, i);
> >> -		__set_bit(asid2idx(info, asid), info->map);
> >> -		reserved_asid(info, i) = asid;
> >> -	}
> >> -
> >> -	/*
> >> -	 * Queue a TLB invalidation for each CPU to perform on next
> >> -	 * context-switch
> >> -	 */
> >> -	cpumask_setall(&info->flush_pending);
> >> -}
> >> -
> >> -static bool check_update_reserved_asid(struct asid_info *info, u64 asid,
> >> -				       u64 newasid)
> >> -{
> >> -	int cpu;
> >> -	bool hit = false;
> >> -
> >> -	/*
> >> -	 * Iterate over the set of reserved ASIDs looking for a match.
> >> -	 * If we find one, then we can update our mm to use newasid
> >> -	 * (i.e. the same ASID in the current generation) but we can't
> >> -	 * exit the loop early, since we need to ensure that all copies
> >> -	 * of the old ASID are updated to reflect the mm. Failure to do
> >> -	 * so could result in us missing the reserved ASID in a future
> >> -	 * generation.
> >> -	 */
> >> -	for_each_possible_cpu(cpu) {
> >> -		if (reserved_asid(info, cpu) == asid) {
> >> -			hit = true;
> >> -			reserved_asid(info, cpu) = newasid;
> >> -		}
> >> -	}
> >> -
> >> -	return hit;
> >> -}
> >> -
> >> -static u64 new_context(struct asid_info *info, atomic64_t *pasid)
> >> -{
> >> -	static u32 cur_idx = 1;
> >> -	u64 asid = atomic64_read(pasid);
> >> -	u64 generation = atomic64_read(&info->generation);
> >> -
> >> -	if (asid != 0) {
> >> -		u64 newasid = generation | (asid & ~ASID_MASK(info));
> >> -
> >> -		/*
> >> -		 * If our current ASID was active during a rollover, we
> >> -		 * can continue to use it and this was just a false alarm.
> >> -		 */
> >> -		if (check_update_reserved_asid(info, asid, newasid))
> >> -			return newasid;
> >> -
> >> -		/*
> >> -		 * We had a valid ASID in a previous life, so try to re-use
> >> -		 * it if possible.
> >> -		 */
> >> -		if (!__test_and_set_bit(asid2idx(info, asid), info->map))
> >> -			return newasid;
> >> -	}
> >> -
> >> -	/*
> >> -	 * Allocate a free ASID. If we can't find one, take a note of the
> >> -	 * currently active ASIDs and mark the TLBs as requiring flushes. We
> >> -	 * always count from ASID #2 (index 1), as we use ASID #0 when setting
> >> -	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
> >> -	 * pairs.
> >> -	 */
> >> -	asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), cur_idx);
> >> -	if (asid != NUM_CTXT_ASIDS(info))
> >> -		goto set_asid;
> >> -
> >> -	/* We're out of ASIDs, so increment the global generation count */
> >> -	generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info),
> >> -						 &info->generation);
> >> -	flush_context(info);
> >> -
> >> -	/* We have more ASIDs than CPUs, so this will always succeed */
> >> -	asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1);
> >> -
> >> -set_asid:
> >> -	__set_bit(asid, info->map);
> >> -	cur_idx = asid;
> >> -	return idx2asid(info, asid) | generation;
> >> -}
> >> -
> >> -static void asid_new_context(struct asid_info *info, atomic64_t *pasid,
> >> -			     unsigned int cpu);
> >> -
> >> -/*
> >> - * Check the ASID is still valid for the context. If not generate a new ASID.
> >> - *
> >> - * @pasid: Pointer to the current ASID batch
> >> - * @cpu: current CPU ID. Must have been acquired throught get_cpu()
> >> - */
> >> -static void asid_check_context(struct asid_info *info,
> >> -			       atomic64_t *pasid, unsigned int cpu)
> >> -{
> >> -	u64 asid, old_active_asid;
> >> -
> >> -	asid = atomic64_read(pasid);
> >> -
> >> -	/*
> >> -	 * The memory ordering here is subtle.
> >> -	 * If our active_asid is non-zero and the ASID matches the current
> >> -	 * generation, then we update the active_asid entry with a relaxed
> >> -	 * cmpxchg. Racing with a concurrent rollover means that either:
> >> -	 *
> >> -	 * - We get a zero back from the cmpxchg and end up waiting on the
> >> -	 *   lock. Taking the lock synchronises with the rollover and so
> >> -	 *   we are forced to see the updated generation.
> >> -	 *
> >> -	 * - We get a valid ASID back from the cmpxchg, which means the
> >> -	 *   relaxed xchg in flush_context will treat us as reserved
> >> -	 *   because atomic RmWs are totally ordered for a given location.
> >> -	 */
> >> -	old_active_asid = atomic64_read(&active_asid(info, cpu));
> >> -	if (old_active_asid &&
> >> -	    !((asid ^ atomic64_read(&info->generation)) >> info->bits) &&
> >> -	    atomic64_cmpxchg_relaxed(&active_asid(info, cpu),
> >> -				     old_active_asid, asid))
> >> -		return;
> >> -
> >> -	asid_new_context(info, pasid, cpu);
> >> -}
> >> -
> >> -/*
> >> - * Generate a new ASID for the context.
> >> - *
> >> - * @pasid: Pointer to the current ASID batch allocated. It will be updated
> >> - * with the new ASID batch.
> >> - * @cpu: current CPU ID. Must have been acquired through get_cpu()
> >> - */
> >> -static void asid_new_context(struct asid_info *info, atomic64_t *pasid,
> >> -			     unsigned int cpu)
> >> -{
> >> -	unsigned long flags;
> >> -	u64 asid;
> >> -
> >> -	raw_spin_lock_irqsave(&info->lock, flags);
> >> -	/* Check that our ASID belongs to the current generation. */
> >> -	asid = atomic64_read(pasid);
> >> -	if ((asid ^ atomic64_read(&info->generation)) >> info->bits) {
> >> -		asid = new_context(info, pasid);
> >> -		atomic64_set(pasid, asid);
> >> -	}
> >> -
> >> -	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending))
> >> -		info->flush_cpu_ctxt_cb();
> >> -
> >> -	atomic64_set(&active_asid(info, cpu), asid);
> >> -	raw_spin_unlock_irqrestore(&info->lock, flags);
> >> -}
> >> -
> >>  void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
> >>  {
> >>  	if (system_supports_cnp())
> >> @@ -305,38 +108,6 @@ static void asid_flush_cpu_ctxt(void)
> >>  	local_flush_tlb_all();
> >>  }
> >>
> >> -/*
> >> - * Initialize the ASID allocator
> >> - *
> >> - * @info: Pointer to the asid allocator structure
> >> - * @bits: Number of ASIDs available
> >> - * @asid_per_ctxt: Number of ASIDs to allocate per-context. ASIDs are
> >> - * allocated contiguously for a given context. This value should be a power of
> >> - * 2.
> >> - */
> >> -static int asid_allocator_init(struct asid_info *info,
> >> -			       u32 bits, unsigned int asid_per_ctxt,
> >> -			       void (*flush_cpu_ctxt_cb)(void))
> >> -{
> >> -	info->bits = bits;
> >> -	info->ctxt_shift = ilog2(asid_per_ctxt);
> >> -	info->flush_cpu_ctxt_cb = flush_cpu_ctxt_cb;
> >> -	/*
> >> -	 * Expect allocation after rollover to fail if we don't have at least
> >> -	 * one more ASID than CPUs. ASID #0 is always reserved.
> >> -	 */
> >> -	WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus());
> >> -	atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
> >> -	info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
> >> -			    sizeof(*info->map), GFP_KERNEL);
> >> -	if (!info->map)
> >> -		return -ENOMEM;
> >> -
> >> -	raw_spin_lock_init(&info->lock);
> >> -
> >> -	return 0;
> >> -}
> >> -
> >>  static int asids_init(void)
> >>  {
> >>  	u32 bits = get_cpu_asid_bits();
> >> @@ -344,7 +115,7 @@ static int asids_init(void)
> >>  	if (!asid_allocator_init(&asid_info, bits, ASID_PER_CONTEXT,
> >>  				 asid_flush_cpu_ctxt))
> >>  		panic("Unable to initialize ASID allocator for %lu ASIDs\n",
> >> -		      1UL << bits);
> >> +		      NUM_ASIDS(&asid_info));
> >>
> >>  	asid_info.active = &active_asids;
> >>  	asid_info.reserved = &reserved_asids;
> >>
_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv
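Since the open question in this thread is whether the allocator above can be reused outside arm64's mm code, here is a minimal sketch of what a second user of that interface might look like, for example a VMID allocator or a RISC-V port once it can guarantee more ASIDs than CPUs. Only asid_allocator_init(), asid_check_context() and the asid_info layout are taken from the patch; the vmid_* names, the 8-bit width and the flush callback body are assumptions made up for illustration.

/*
 * Illustrative sketch only, mirroring the arm64 usage quoted above:
 * the per-user state is two per-CPU variables plus one asid_info,
 * and the switch path just calls asid_check_context().
 */
static DEFINE_PER_CPU(atomic64_t, active_vmids);	/* hypothetical */
static DEFINE_PER_CPU(u64, reserved_vmids);		/* hypothetical */
static struct asid_info vmid_info;

static void vmid_flush_cpu_ctxt(void)
{
	local_flush_tlb_all();	/* stand-in for the arch-specific local flush */
}

static int __init vmid_allocator_init(void)
{
	/* 8 bits and one VMID per context are placeholder values. */
	int ret = asid_allocator_init(&vmid_info, 8, 1, vmid_flush_cpu_ctxt);

	if (ret)
		return ret;

	vmid_info.active = &active_vmids;
	vmid_info.reserved = &reserved_vmids;
	return 0;
}

/* Must run with preemption disabled, e.g. with the cpu from get_cpu(). */
static void vmid_check_context(atomic64_t *pvmid, unsigned int cpu)
{
	asid_check_context(&vmid_info, pvmid, cpu);
}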
h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=VPCibjSLFJ8QPugcKS3wvBhWTo2k1dG0NzaEWarL6Bg=; b=AbpUSdv2Vc6x2t0TVkfgwMKUAa7ljFJU5fC5OG3+ldeABsga6+9OJjxE9IoAG+tmbFUT3ZBLMtCgsfiPBrE1MZeilVnmggqrJtgSA2INvNZhO2yayPeQao+ea6prXzplnDlDAB27qKFGwa34pWf4fLQjEa88KFcrcUGkUkf43jQ= Received: from LO2P265MB0847.GBRP265.PROD.OUTLOOK.COM (20.176.139.20) by LO2P265MB0464.GBRP265.PROD.OUTLOOK.COM (10.166.98.138) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.1965.12; Tue, 11 Jun 2019 01:56:26 +0000 Received: from LO2P265MB0847.GBRP265.PROD.OUTLOOK.COM ([fe80::c4af:389:2951:fdd1]) by LO2P265MB0847.GBRP265.PROD.OUTLOOK.COM ([fe80::c4af:389:2951:fdd1%7]) with mapi id 15.20.1965.017; Tue, 11 Jun 2019 01:56:26 +0000 From: Gary Guo To: Palmer Dabbelt , "julien.grall@arm.com" Subject: RE: [PATCH RFC 11/14] arm64: Move the ASID allocator code in a separate file Thread-Topic: [PATCH RFC 11/14] arm64: Move the ASID allocator code in a separate file Thread-Index: AQHVG7+VzWoWPgdUJU+rwDAjKiBFD6aNhr6AgAgynaA= Date: Tue, 11 Jun 2019 01:56:26 +0000 Message-ID: References: <0dfe120b-066a-2ac8-13bc-3f5a29e2caa3@arm.com> In-Reply-To: Accept-Language: en-GB, en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: authentication-results: spf=none (sender IP is ) smtp.mailfrom=gary@garyguo.net; x-originating-ip: [2001:470:6972:501:2013:f57c:b021:47b0] x-ms-publictraffictype: Email x-ms-office365-filtering-correlation-id: 306b3dd3-aa4f-4043-5dc4-08d6ee100371 x-microsoft-antispam: BCL:0; PCL:0; RULEID:(2390118)(7020095)(4652040)(7021145)(8989299)(4534185)(7022145)(4603075)(4627221)(201702281549075)(8990200)(7048125)(7024125)(7027125)(7023125)(5600148)(711020)(4605104)(1401327)(2017052603328)(7193020); SRVR:LO2P265MB0464; x-ms-traffictypediagnostic: LO2P265MB0464: x-microsoft-antispam-prvs: x-ms-oob-tlc-oobclassifiers: OLM:3173; x-forefront-prvs: 006546F32A x-forefront-antispam-report: SFV:NSPM; SFS:(10019020)(346002)(39830400003)(376002)(366004)(396003)(136003)(13464003)(189003)(199004)(52536014)(8936002)(6436002)(229853002)(7416002)(25786009)(6116002)(76116006)(66556008)(55016002)(64756008)(81166006)(81156014)(8676002)(316002)(66476007)(4326008)(66946007)(73956011)(6246003)(5660300002)(86362001)(71200400001)(102836004)(71190400001)(2906002)(53936002)(54906003)(110136005)(74316002)(14454004)(476003)(66446008)(305945005)(99286004)(14444005)(11346002)(53946003)(9686003)(256004)(7696005)(2501003)(446003)(7736002)(33656002)(186003)(30864003)(508600001)(486006)(46003)(68736007)(53546011)(6506007)(76176011)(87944003)(579004); DIR:OUT; SFP:1102; SCL:1; SRVR:LO2P265MB0464; H:LO2P265MB0847.GBRP265.PROD.OUTLOOK.COM; FPR:; SPF:None; LANG:en; PTR:InfoNoRecords; MX:1; A:1; received-spf: None (protection.outlook.com: garyguo.net does not designate permitted sender hosts) x-ms-exchange-senderadcheck: 1 x-microsoft-antispam-message-info: K7LNFpzKy/fXGiteJSxdXanaSdvXFzCqXFX33QCp8PWzJGaNT/gxozwKVasYw4vV9ZJWiWJP87YqlUjzvdhhp3aG7CQIT4yPYlD9Q6IK8TnGJfm1X0aIhsvEnZB0H7Dn7+y86jslPonxeTAjuhiGmUfgj5uOuLqW7dCH+IRFBiB6xw88lccczXawh5s2oOGP/j1U40fHHMJtRKnS6PM0+M5d0kvspixzsyNMmDXuX3cUU5mWXoPg7/EwupyOv5wcsnsBcpr2D/zTpboQVmnEHWV5f3oqHc0WRRoYg9psAMqzBPWBOs2iaVYQhGjcNlcWLQw81nIXtaFmpicr6NxuE2rm0NYaaH6AUcWI/6RjFpGlP9CPhjMen+I8KcwAH9wTJc0MK6W7u2ak57Qxoa+7Hj0gXDpg7AZmQ24j8R7SFS0= MIME-Version: 1.0 X-OriginatorOrg: garyguo.net X-MS-Exchange-CrossTenant-Network-Message-Id: 306b3dd3-aa4f-4043-5dc4-08d6ee100371 X-MS-Exchange-CrossTenant-originalarrivaltime: 11 Jun 
2019 01:56:26.3223 (UTC) X-MS-Exchange-CrossTenant-fromentityheader: Hosted X-MS-Exchange-CrossTenant-id: bbc898ad-b10f-4e10-8552-d9377b823d45 X-MS-Exchange-CrossTenant-mailboxtype: HOSTED X-MS-Exchange-CrossTenant-userprincipalname: gary@garyguo.net X-MS-Exchange-Transport-CrossTenantHeadersStamped: LO2P265MB0464 X-Mailman-Approved-At: Tue, 11 Jun 2019 06:07:59 -0400 Cc: "aou@eecs.berkeley.edu" , "marc.zyngier@arm.com" , "catalin.marinas@arm.com" , Anup Patel , Will Deacon , "linux-kernel@vger.kernel.org" , "rppt@linux.ibm.com" , Christoph Hellwig , Atish Patra , Paul Walmsley , "linux-riscv@lists.infradead.org" , "kvmarm@lists.cs.columbia.edu" , "linux-arm-kernel@lists.infradead.org" X-BeenThere: kvmarm@lists.cs.columbia.edu X-Mailman-Version: 2.1.14 Precedence: list List-Id: Where KVM/ARM decisions are made List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Errors-To: kvmarm-bounces@lists.cs.columbia.edu Sender: kvmarm-bounces@lists.cs.columbia.edu Hi, On RISC-V, we can only use ASID if there are more ASIDs than CPUs. If there aren't enough ASIDs (or if there is only 1), then ASID feature is disabled and 0 is used everywhere. Best, Gary > -----Original Message----- > From: Palmer Dabbelt > Sent: Wednesday, June 5, 2019 21:42 > To: julien.grall@arm.com > Cc: linux-kernel@vger.kernel.org; linux-arm-kernel@lists.infradead.org; > kvmarm@lists.cs.columbia.edu; aou@eecs.berkeley.edu; Gary Guo > ; Atish Patra ; Christoph Hellwig > ; Paul Walmsley ; > rppt@linux.ibm.com; linux-riscv@lists.infradead.org; Anup Patel > ; christoffer.dall@arm.com; james.morse@arm.com; > marc.zyngier@arm.com; julien.thierry@arm.com; suzuki.poulose@arm.com; > catalin.marinas@arm.com; Will Deacon > Subject: Re: [PATCH RFC 11/14] arm64: Move the ASID allocator code in a > separate file > > On Wed, 05 Jun 2019 09:56:03 PDT (-0700), julien.grall@arm.com wrote: > > Hi, > > > > I am CCing RISC-V folks to see if there are an interest to share the code. > > > > @RISC-V: I noticed you are discussing about importing a version of ASID > > allocator in RISC-V. At a first look, the code looks quite similar. Would the > > library below helps you? > > Thanks! I didn't look that closely at the original patches because the > argument against them was just "we don't have any way to test this". > Unfortunately, we don't have the constraint that there are more ASIDs than > CPUs > in the system. As a result I don't think we can use this ASID allocation > strategy. > > > > > Cheers, > > > > On 21/03/2019 16:36, Julien Grall wrote: > >> We will want to re-use the ASID allocator in a separate context (e.g > >> allocating VMID). So move the code in a new file. > >> > >> The function asid_check_context has been moved in the header as a static > >> inline function because we want to avoid add a branch when checking if the > >> ASID is still valid. > >> > >> Signed-off-by: Julien Grall > >> > >> --- > >> > >> This code will be used in the virt code for allocating VMID. I am not > >> entirely sure where to place it. Lib could potentially be a good place but I > >> am not entirely convinced the algo as it is could be used by other > >> architecture. > >> > >> Looking at x86, it seems that it will not be possible to re-use because > >> the number of PCID (aka ASID) could be smaller than the number of CPUs. 
> >> See commit message 10af6235e0d327d42e1bad974385197817923dc1 > "x86/mm: > >> Implement PCID based optimization: try to preserve old TLB entries using > >> PCI". > >> --- > >> arch/arm64/include/asm/asid.h | 77 ++++++++++++++ > >> arch/arm64/lib/Makefile | 2 + > >> arch/arm64/lib/asid.c | 185 +++++++++++++++++++++++++++++++++ > >> arch/arm64/mm/context.c | 235 +----------------------------------------- > >> 4 files changed, 267 insertions(+), 232 deletions(-) > >> create mode 100644 arch/arm64/include/asm/asid.h > >> create mode 100644 arch/arm64/lib/asid.c > >> > >> diff --git a/arch/arm64/include/asm/asid.h b/arch/arm64/include/asm/asid.h > >> new file mode 100644 > >> index 000000000000..bb62b587f37f > >> --- /dev/null > >> +++ b/arch/arm64/include/asm/asid.h > >> @@ -0,0 +1,77 @@ > >> +/* SPDX-License-Identifier: GPL-2.0 */ > >> +#ifndef __ASM_ASM_ASID_H > >> +#define __ASM_ASM_ASID_H > >> + > >> +#include > >> +#include > >> +#include > >> +#include > >> +#include > >> + > >> +struct asid_info > >> +{ > >> + atomic64_t generation; > >> + unsigned long *map; > >> + atomic64_t __percpu *active; > >> + u64 __percpu *reserved; > >> + u32 bits; > >> + /* Lock protecting the structure */ > >> + raw_spinlock_t lock; > >> + /* Which CPU requires context flush on next call */ > >> + cpumask_t flush_pending; > >> + /* Number of ASID allocated by context (shift value) */ > >> + unsigned int ctxt_shift; > >> + /* Callback to locally flush the context. */ > >> + void (*flush_cpu_ctxt_cb)(void); > >> +}; > >> + > >> +#define NUM_ASIDS(info) (1UL << ((info)->bits)) > >> +#define NUM_CTXT_ASIDS(info) (NUM_ASIDS(info) >> (info)- > >ctxt_shift) > >> + > >> +#define active_asid(info, cpu) *per_cpu_ptr((info)->active, cpu) > >> + > >> +void asid_new_context(struct asid_info *info, atomic64_t *pasid, > >> + unsigned int cpu); > >> + > >> +/* > >> + * Check the ASID is still valid for the context. If not generate a new ASID. > >> + * > >> + * @pasid: Pointer to the current ASID batch > >> + * @cpu: current CPU ID. Must have been acquired throught get_cpu() > >> + */ > >> +static inline void asid_check_context(struct asid_info *info, > >> + atomic64_t *pasid, unsigned int cpu) > >> +{ > >> + u64 asid, old_active_asid; > >> + > >> + asid = atomic64_read(pasid); > >> + > >> + /* > >> + * The memory ordering here is subtle. > >> + * If our active_asid is non-zero and the ASID matches the current > >> + * generation, then we update the active_asid entry with a relaxed > >> + * cmpxchg. Racing with a concurrent rollover means that either: > >> + * > >> + * - We get a zero back from the cmpxchg and end up waiting on the > >> + * lock. Taking the lock synchronises with the rollover and so > >> + * we are forced to see the updated generation. > >> + * > >> + * - We get a valid ASID back from the cmpxchg, which means the > >> + * relaxed xchg in flush_context will treat us as reserved > >> + * because atomic RmWs are totally ordered for a given location. 
> >> + */ > >> + old_active_asid = atomic64_read(&active_asid(info, cpu)); > >> + if (old_active_asid && > >> + !((asid ^ atomic64_read(&info->generation)) >> info->bits) && > >> + atomic64_cmpxchg_relaxed(&active_asid(info, cpu), > >> + old_active_asid, asid)) > >> + return; > >> + > >> + asid_new_context(info, pasid, cpu); > >> +} > >> + > >> +int asid_allocator_init(struct asid_info *info, > >> + u32 bits, unsigned int asid_per_ctxt, > >> + void (*flush_cpu_ctxt_cb)(void)); > >> + > >> +#endif > >> diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile > >> index 5540a1638baf..720df5ee2aa2 100644 > >> --- a/arch/arm64/lib/Makefile > >> +++ b/arch/arm64/lib/Makefile > >> @@ -5,6 +5,8 @@ lib-y := clear_user.o delay.o > copy_from_user.o \ > >> memcmp.o strcmp.o strncmp.o strlen.o strnlen.o \ > >> strchr.o strrchr.o tishift.o > >> > >> +lib-y += asid.o > >> + > >> ifeq ($(CONFIG_KERNEL_MODE_NEON), y) > >> obj-$(CONFIG_XOR_BLOCKS) += xor-neon.o > >> CFLAGS_REMOVE_xor-neon.o += -mgeneral-regs-only > >> diff --git a/arch/arm64/lib/asid.c b/arch/arm64/lib/asid.c > >> new file mode 100644 > >> index 000000000000..72b71bfb32be > >> --- /dev/null > >> +++ b/arch/arm64/lib/asid.c > >> @@ -0,0 +1,185 @@ > >> +// SPDX-License-Identifier: GPL-2.0 > >> +/* > >> + * Generic ASID allocator. > >> + * > >> + * Based on arch/arm/mm/context.c > >> + * > >> + * Copyright (C) 2002-2003 Deep Blue Solutions Ltd, all rights reserved. > >> + * Copyright (C) 2012 ARM Ltd. > >> + */ > >> + > >> +#include > >> + > >> +#include > >> + > >> +#define reserved_asid(info, cpu) *per_cpu_ptr((info)->reserved, cpu) > >> + > >> +#define ASID_MASK(info) (~GENMASK((info)->bits - 1, 0)) > >> +#define ASID_FIRST_VERSION(info) (1UL << ((info)->bits)) > >> + > >> +#define asid2idx(info, asid) (((asid) & ~ASID_MASK(info)) >> (info)- > >ctxt_shift) > >> +#define idx2asid(info, idx) (((idx) << (info)->ctxt_shift) & > ~ASID_MASK(info)) > >> + > >> +static void flush_context(struct asid_info *info) > >> +{ > >> + int i; > >> + u64 asid; > >> + > >> + /* Update the list of reserved ASIDs and the ASID bitmap. */ > >> + bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info)); > >> + > >> + for_each_possible_cpu(i) { > >> + asid = atomic64_xchg_relaxed(&active_asid(info, i), 0); > >> + /* > >> + * If this CPU has already been through a > >> + * rollover, but hasn't run another task in > >> + * the meantime, we must preserve its reserved > >> + * ASID, as this is the only trace we have of > >> + * the process it is still running. > >> + */ > >> + if (asid == 0) > >> + asid = reserved_asid(info, i); > >> + __set_bit(asid2idx(info, asid), info->map); > >> + reserved_asid(info, i) = asid; > >> + } > >> + > >> + /* > >> + * Queue a TLB invalidation for each CPU to perform on next > >> + * context-switch > >> + */ > >> + cpumask_setall(&info->flush_pending); > >> +} > >> + > >> +static bool check_update_reserved_asid(struct asid_info *info, u64 asid, > >> + u64 newasid) > >> +{ > >> + int cpu; > >> + bool hit = false; > >> + > >> + /* > >> + * Iterate over the set of reserved ASIDs looking for a match. > >> + * If we find one, then we can update our mm to use newasid > >> + * (i.e. the same ASID in the current generation) but we can't > >> + * exit the loop early, since we need to ensure that all copies > >> + * of the old ASID are updated to reflect the mm. Failure to do > >> + * so could result in us missing the reserved ASID in a future > >> + * generation. 
> >> + */ > >> + for_each_possible_cpu(cpu) { > >> + if (reserved_asid(info, cpu) == asid) { > >> + hit = true; > >> + reserved_asid(info, cpu) = newasid; > >> + } > >> + } > >> + > >> + return hit; > >> +} > >> + > >> +static u64 new_context(struct asid_info *info, atomic64_t *pasid) > >> +{ > >> + static u32 cur_idx = 1; > >> + u64 asid = atomic64_read(pasid); > >> + u64 generation = atomic64_read(&info->generation); > >> + > >> + if (asid != 0) { > >> + u64 newasid = generation | (asid & ~ASID_MASK(info)); > >> + > >> + /* > >> + * If our current ASID was active during a rollover, we > >> + * can continue to use it and this was just a false alarm. > >> + */ > >> + if (check_update_reserved_asid(info, asid, newasid)) > >> + return newasid; > >> + > >> + /* > >> + * We had a valid ASID in a previous life, so try to re-use > >> + * it if possible. > >> + */ > >> + if (!__test_and_set_bit(asid2idx(info, asid), info->map)) > >> + return newasid; > >> + } > >> + > >> + /* > >> + * Allocate a free ASID. If we can't find one, take a note of the > >> + * currently active ASIDs and mark the TLBs as requiring flushes. We > >> + * always count from ASID #2 (index 1), as we use ASID #0 when setting > >> + * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd > >> + * pairs. > >> + */ > >> + asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), cur_idx); > >> + if (asid != NUM_CTXT_ASIDS(info)) > >> + goto set_asid; > >> + > >> + /* We're out of ASIDs, so increment the global generation count */ > >> + generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info), > >> + &info->generation); > >> + flush_context(info); > >> + > >> + /* We have more ASIDs than CPUs, so this will always succeed */ > >> + asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1); > >> + > >> +set_asid: > >> + __set_bit(asid, info->map); > >> + cur_idx = asid; > >> + return idx2asid(info, asid) | generation; > >> +} > >> + > >> +/* > >> + * Generate a new ASID for the context. > >> + * > >> + * @pasid: Pointer to the current ASID batch allocated. It will be updated > >> + * with the new ASID batch. > >> + * @cpu: current CPU ID. Must have been acquired through get_cpu() > >> + */ > >> +void asid_new_context(struct asid_info *info, atomic64_t *pasid, > >> + unsigned int cpu) > >> +{ > >> + unsigned long flags; > >> + u64 asid; > >> + > >> + raw_spin_lock_irqsave(&info->lock, flags); > >> + /* Check that our ASID belongs to the current generation. */ > >> + asid = atomic64_read(pasid); > >> + if ((asid ^ atomic64_read(&info->generation)) >> info->bits) { > >> + asid = new_context(info, pasid); > >> + atomic64_set(pasid, asid); > >> + } > >> + > >> + if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending)) > >> + info->flush_cpu_ctxt_cb(); > >> + > >> + atomic64_set(&active_asid(info, cpu), asid); > >> + raw_spin_unlock_irqrestore(&info->lock, flags); > >> +} > >> + > >> +/* > >> + * Initialize the ASID allocator > >> + * > >> + * @info: Pointer to the asid allocator structure > >> + * @bits: Number of ASIDs available > >> + * @asid_per_ctxt: Number of ASIDs to allocate per-context. ASIDs are > >> + * allocated contiguously for a given context. This value should be a power > of > >> + * 2. 
> >> + */ > >> +int asid_allocator_init(struct asid_info *info, > >> + u32 bits, unsigned int asid_per_ctxt, > >> + void (*flush_cpu_ctxt_cb)(void)) > >> +{ > >> + info->bits = bits; > >> + info->ctxt_shift = ilog2(asid_per_ctxt); > >> + info->flush_cpu_ctxt_cb = flush_cpu_ctxt_cb; > >> + /* > >> + * Expect allocation after rollover to fail if we don't have at least > >> + * one more ASID than CPUs. ASID #0 is always reserved. > >> + */ > >> + WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus()); > >> + atomic64_set(&info->generation, ASID_FIRST_VERSION(info)); > >> + info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)), > >> + sizeof(*info->map), GFP_KERNEL); > >> + if (!info->map) > >> + return -ENOMEM; > >> + > >> + raw_spin_lock_init(&info->lock); > >> + > >> + return 0; > >> +} > >> diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c > >> index 678a57b77c91..95ee7711a2ef 100644 > >> --- a/arch/arm64/mm/context.c > >> +++ b/arch/arm64/mm/context.c > >> @@ -22,47 +22,22 @@ > >> #include > >> #include > >> > >> +#include > >> #include > >> #include > >> #include > >> #include > >> > >> -struct asid_info > >> -{ > >> - atomic64_t generation; > >> - unsigned long *map; > >> - atomic64_t __percpu *active; > >> - u64 __percpu *reserved; > >> - u32 bits; > >> - raw_spinlock_t lock; > >> - /* Which CPU requires context flush on next call */ > >> - cpumask_t flush_pending; > >> - /* Number of ASID allocated by context (shift value) */ > >> - unsigned int ctxt_shift; > >> - /* Callback to locally flush the context. */ > >> - void (*flush_cpu_ctxt_cb)(void); > >> -} asid_info; > >> - > >> -#define active_asid(info, cpu) *per_cpu_ptr((info)->active, cpu) > >> -#define reserved_asid(info, cpu) *per_cpu_ptr((info)->reserved, cpu) > >> - > >> static DEFINE_PER_CPU(atomic64_t, active_asids); > >> static DEFINE_PER_CPU(u64, reserved_asids); > >> > >> -#define ASID_MASK(info) (~GENMASK((info)->bits - 1, 0)) > >> -#define NUM_ASIDS(info) (1UL << ((info)->bits)) > >> - > >> -#define ASID_FIRST_VERSION(info) NUM_ASIDS(info) > >> - > >> #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 > >> #define ASID_PER_CONTEXT 2 > >> #else > >> #define ASID_PER_CONTEXT 1 > >> #endif > >> > >> -#define NUM_CTXT_ASIDS(info) (NUM_ASIDS(info) >> (info)- > >ctxt_shift) > >> -#define asid2idx(info, asid) (((asid) & ~ASID_MASK(info)) >> (info)- > >ctxt_shift) > >> -#define idx2asid(info, idx) (((idx) << (info)->ctxt_shift) & > ~ASID_MASK(info)) > >> +struct asid_info asid_info; > >> > >> /* Get the ASIDBits supported by the current CPU */ > >> static u32 get_cpu_asid_bits(void) > >> @@ -102,178 +77,6 @@ void verify_cpu_asid_bits(void) > >> } > >> } > >> > >> -static void flush_context(struct asid_info *info) > >> -{ > >> - int i; > >> - u64 asid; > >> - > >> - /* Update the list of reserved ASIDs and the ASID bitmap. */ > >> - bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info)); > >> - > >> - for_each_possible_cpu(i) { > >> - asid = atomic64_xchg_relaxed(&active_asid(info, i), 0); > >> - /* > >> - * If this CPU has already been through a > >> - * rollover, but hasn't run another task in > >> - * the meantime, we must preserve its reserved > >> - * ASID, as this is the only trace we have of > >> - * the process it is still running. 
> >> - */ > >> - if (asid == 0) > >> - asid = reserved_asid(info, i); > >> - __set_bit(asid2idx(info, asid), info->map); > >> - reserved_asid(info, i) = asid; > >> - } > >> - > >> - /* > >> - * Queue a TLB invalidation for each CPU to perform on next > >> - * context-switch > >> - */ > >> - cpumask_setall(&info->flush_pending); > >> -} > >> - > >> -static bool check_update_reserved_asid(struct asid_info *info, u64 asid, > >> - u64 newasid) > >> -{ > >> - int cpu; > >> - bool hit = false; > >> - > >> - /* > >> - * Iterate over the set of reserved ASIDs looking for a match. > >> - * If we find one, then we can update our mm to use newasid > >> - * (i.e. the same ASID in the current generation) but we can't > >> - * exit the loop early, since we need to ensure that all copies > >> - * of the old ASID are updated to reflect the mm. Failure to do > >> - * so could result in us missing the reserved ASID in a future > >> - * generation. > >> - */ > >> - for_each_possible_cpu(cpu) { > >> - if (reserved_asid(info, cpu) == asid) { > >> - hit = true; > >> - reserved_asid(info, cpu) = newasid; > >> - } > >> - } > >> - > >> - return hit; > >> -} > >> - > >> -static u64 new_context(struct asid_info *info, atomic64_t *pasid) > >> -{ > >> - static u32 cur_idx = 1; > >> - u64 asid = atomic64_read(pasid); > >> - u64 generation = atomic64_read(&info->generation); > >> - > >> - if (asid != 0) { > >> - u64 newasid = generation | (asid & ~ASID_MASK(info)); > >> - > >> - /* > >> - * If our current ASID was active during a rollover, we > >> - * can continue to use it and this was just a false alarm. > >> - */ > >> - if (check_update_reserved_asid(info, asid, newasid)) > >> - return newasid; > >> - > >> - /* > >> - * We had a valid ASID in a previous life, so try to re-use > >> - * it if possible. > >> - */ > >> - if (!__test_and_set_bit(asid2idx(info, asid), info->map)) > >> - return newasid; > >> - } > >> - > >> - /* > >> - * Allocate a free ASID. If we can't find one, take a note of the > >> - * currently active ASIDs and mark the TLBs as requiring flushes. We > >> - * always count from ASID #2 (index 1), as we use ASID #0 when setting > >> - * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd > >> - * pairs. > >> - */ > >> - asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), cur_idx); > >> - if (asid != NUM_CTXT_ASIDS(info)) > >> - goto set_asid; > >> - > >> - /* We're out of ASIDs, so increment the global generation count */ > >> - generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info), > >> - &info->generation); > >> - flush_context(info); > >> - > >> - /* We have more ASIDs than CPUs, so this will always succeed */ > >> - asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1); > >> - > >> -set_asid: > >> - __set_bit(asid, info->map); > >> - cur_idx = asid; > >> - return idx2asid(info, asid) | generation; > >> -} > >> - > >> -static void asid_new_context(struct asid_info *info, atomic64_t *pasid, > >> - unsigned int cpu); > >> - > >> -/* > >> - * Check the ASID is still valid for the context. If not generate a new ASID. > >> - * > >> - * @pasid: Pointer to the current ASID batch > >> - * @cpu: current CPU ID. Must have been acquired throught get_cpu() > >> - */ > >> -static void asid_check_context(struct asid_info *info, > >> - atomic64_t *pasid, unsigned int cpu) > >> -{ > >> - u64 asid, old_active_asid; > >> - > >> - asid = atomic64_read(pasid); > >> - > >> - /* > >> - * The memory ordering here is subtle. 
> >> - * If our active_asid is non-zero and the ASID matches the current > >> - * generation, then we update the active_asid entry with a relaxed > >> - * cmpxchg. Racing with a concurrent rollover means that either: > >> - * > >> - * - We get a zero back from the cmpxchg and end up waiting on the > >> - * lock. Taking the lock synchronises with the rollover and so > >> - * we are forced to see the updated generation. > >> - * > >> - * - We get a valid ASID back from the cmpxchg, which means the > >> - * relaxed xchg in flush_context will treat us as reserved > >> - * because atomic RmWs are totally ordered for a given location. > >> - */ > >> - old_active_asid = atomic64_read(&active_asid(info, cpu)); > >> - if (old_active_asid && > >> - !((asid ^ atomic64_read(&info->generation)) >> info->bits) && > >> - atomic64_cmpxchg_relaxed(&active_asid(info, cpu), > >> - old_active_asid, asid)) > >> - return; > >> - > >> - asid_new_context(info, pasid, cpu); > >> -} > >> - > >> -/* > >> - * Generate a new ASID for the context. > >> - * > >> - * @pasid: Pointer to the current ASID batch allocated. It will be updated > >> - * with the new ASID batch. > >> - * @cpu: current CPU ID. Must have been acquired through get_cpu() > >> - */ > >> -static void asid_new_context(struct asid_info *info, atomic64_t *pasid, > >> - unsigned int cpu) > >> -{ > >> - unsigned long flags; > >> - u64 asid; > >> - > >> - raw_spin_lock_irqsave(&info->lock, flags); > >> - /* Check that our ASID belongs to the current generation. */ > >> - asid = atomic64_read(pasid); > >> - if ((asid ^ atomic64_read(&info->generation)) >> info->bits) { > >> - asid = new_context(info, pasid); > >> - atomic64_set(pasid, asid); > >> - } > >> - > >> - if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending)) > >> - info->flush_cpu_ctxt_cb(); > >> - > >> - atomic64_set(&active_asid(info, cpu), asid); > >> - raw_spin_unlock_irqrestore(&info->lock, flags); > >> -} > >> - > >> void check_and_switch_context(struct mm_struct *mm, unsigned int cpu) > >> { > >> if (system_supports_cnp()) > >> @@ -305,38 +108,6 @@ static void asid_flush_cpu_ctxt(void) > >> local_flush_tlb_all(); > >> } > >> > >> -/* > >> - * Initialize the ASID allocator > >> - * > >> - * @info: Pointer to the asid allocator structure > >> - * @bits: Number of ASIDs available > >> - * @asid_per_ctxt: Number of ASIDs to allocate per-context. ASIDs are > >> - * allocated contiguously for a given context. This value should be a power > of > >> - * 2. > >> - */ > >> -static int asid_allocator_init(struct asid_info *info, > >> - u32 bits, unsigned int asid_per_ctxt, > >> - void (*flush_cpu_ctxt_cb)(void)) > >> -{ > >> - info->bits = bits; > >> - info->ctxt_shift = ilog2(asid_per_ctxt); > >> - info->flush_cpu_ctxt_cb = flush_cpu_ctxt_cb; > >> - /* > >> - * Expect allocation after rollover to fail if we don't have at least > >> - * one more ASID than CPUs. ASID #0 is always reserved. 
> >> - */ > >> - WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus()); > >> - atomic64_set(&info->generation, ASID_FIRST_VERSION(info)); > >> - info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)), > >> - sizeof(*info->map), GFP_KERNEL); > >> - if (!info->map) > >> - return -ENOMEM; > >> - > >> - raw_spin_lock_init(&info->lock); > >> - > >> - return 0; > >> -} > >> - > >> static int asids_init(void) > >> { > >> u32 bits = get_cpu_asid_bits(); > >> @@ -344,7 +115,7 @@ static int asids_init(void) > >> if (!asid_allocator_init(&asid_info, bits, ASID_PER_CONTEXT, > >> asid_flush_cpu_ctxt)) > >> panic("Unable to initialize ASID allocator for %lu ASIDs\n", > >> - 1UL << bits); > >> + NUM_ASIDS(&asid_info)); > >> > >> asid_info.active = &active_asids; > >> asid_info.reserved = &reserved_asids; > >> _______________________________________________ kvmarm mailing list kvmarm@lists.cs.columbia.edu https://lists.cs.columbia.edu/mailman/listinfo/kvmarm From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=DKIM_SIGNED,DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,T_DKIMWL_WL_HIGH autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 10B70C43218 for ; Tue, 11 Jun 2019 01:56:37 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id D7BEF2086D for ; Tue, 11 Jun 2019 01:56:36 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="E9L0Rng2"; dkim=fail reason="signature verification failed" (1024-bit key) header.d=garyguo.net header.i=@garyguo.net header.b="AbpUSdv2" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D7BEF2086D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=garyguo.net Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+infradead-linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:In-Reply-To:References: Message-ID:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=JV3r++HKrtKsVzTFEXheywSoW4IVcp7DYYlODsIBQE0=; b=E9L0Rng2K20Wrp Dacnc9GpIDoQykM0TWjRn68kuJDQMsZLrwm9kkFHrNutU3D3wiAtA/+bdOlXj7Z8tFbm0FLxE89M4 F0Fg+GrtNX4IrROgFjd+D+H0KnTgZ2PrKzGIGcQqsB7m18bMcF4wDTqMtbGYRMJTkLZU9YTGI6yJ0 Yi8i7VpJjjtsZM6qWFVivDF+PiuKXiJWNy058tPcRWRQgDfsZPRzP9UyGTvm663fxFuDOf7hn0RAJ gmxCUxxFXT17QLP8+IwV8oUI3FqhV9lB65oD2npSOBsTjsJ1a7kEWWF9B+TtlHqAp+UKD7FrWkeD0 sLRqc8I5zBoplXdw8RNw==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1haW1c-0003v7-92; Tue, 11 Jun 2019 01:56:36 +0000 Received: from mail-eopbgr100102.outbound.protection.outlook.com ([40.107.10.102] 
helo=GBR01-LO2-obe.outbound.protection.outlook.com) by bombadil.infradead.org with esmtps (Exim 4.92 #3 (Red Hat Linux)) id 1haW1Y-0003tw-7W; Tue, 11 Jun 2019 01:56:35 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=garyguo.net; s=selector1; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=VPCibjSLFJ8QPugcKS3wvBhWTo2k1dG0NzaEWarL6Bg=; b=AbpUSdv2Vc6x2t0TVkfgwMKUAa7ljFJU5fC5OG3+ldeABsga6+9OJjxE9IoAG+tmbFUT3ZBLMtCgsfiPBrE1MZeilVnmggqrJtgSA2INvNZhO2yayPeQao+ea6prXzplnDlDAB27qKFGwa34pWf4fLQjEa88KFcrcUGkUkf43jQ= Received: from LO2P265MB0847.GBRP265.PROD.OUTLOOK.COM (20.176.139.20) by LO2P265MB0464.GBRP265.PROD.OUTLOOK.COM (10.166.98.138) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.1965.12; Tue, 11 Jun 2019 01:56:26 +0000 Received: from LO2P265MB0847.GBRP265.PROD.OUTLOOK.COM ([fe80::c4af:389:2951:fdd1]) by LO2P265MB0847.GBRP265.PROD.OUTLOOK.COM ([fe80::c4af:389:2951:fdd1%7]) with mapi id 15.20.1965.017; Tue, 11 Jun 2019 01:56:26 +0000 From: Gary Guo To: Palmer Dabbelt , "julien.grall@arm.com" Subject: RE: [PATCH RFC 11/14] arm64: Move the ASID allocator code in a separate file Thread-Topic: [PATCH RFC 11/14] arm64: Move the ASID allocator code in a separate file Thread-Index: AQHVG7+VzWoWPgdUJU+rwDAjKiBFD6aNhr6AgAgynaA= Date: Tue, 11 Jun 2019 01:56:26 +0000 Message-ID: References: <0dfe120b-066a-2ac8-13bc-3f5a29e2caa3@arm.com> In-Reply-To: Accept-Language: en-GB, en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: authentication-results: spf=none (sender IP is ) smtp.mailfrom=gary@garyguo.net; x-originating-ip: [2001:470:6972:501:2013:f57c:b021:47b0] x-ms-publictraffictype: Email x-ms-office365-filtering-correlation-id: 306b3dd3-aa4f-4043-5dc4-08d6ee100371 x-microsoft-antispam: BCL:0; PCL:0; RULEID:(2390118)(7020095)(4652040)(7021145)(8989299)(4534185)(7022145)(4603075)(4627221)(201702281549075)(8990200)(7048125)(7024125)(7027125)(7023125)(5600148)(711020)(4605104)(1401327)(2017052603328)(7193020); SRVR:LO2P265MB0464; x-ms-traffictypediagnostic: LO2P265MB0464: x-microsoft-antispam-prvs: x-ms-oob-tlc-oobclassifiers: OLM:3173; x-forefront-prvs: 006546F32A x-forefront-antispam-report: SFV:NSPM; SFS:(10019020)(346002)(39830400003)(376002)(366004)(396003)(136003)(13464003)(189003)(199004)(52536014)(8936002)(6436002)(229853002)(7416002)(25786009)(6116002)(76116006)(66556008)(55016002)(64756008)(81166006)(81156014)(8676002)(316002)(66476007)(4326008)(66946007)(73956011)(6246003)(5660300002)(86362001)(71200400001)(102836004)(71190400001)(2906002)(53936002)(54906003)(110136005)(74316002)(14454004)(476003)(66446008)(305945005)(99286004)(14444005)(11346002)(53946003)(9686003)(256004)(7696005)(2501003)(446003)(7736002)(33656002)(186003)(30864003)(508600001)(486006)(46003)(68736007)(53546011)(6506007)(76176011)(87944003)(579004); DIR:OUT; SFP:1102; SCL:1; SRVR:LO2P265MB0464; H:LO2P265MB0847.GBRP265.PROD.OUTLOOK.COM; FPR:; SPF:None; LANG:en; PTR:InfoNoRecords; MX:1; A:1; received-spf: None (protection.outlook.com: garyguo.net does not designate permitted sender hosts) x-ms-exchange-senderadcheck: 1 x-microsoft-antispam-message-info: 
K7LNFpzKy/fXGiteJSxdXanaSdvXFzCqXFX33QCp8PWzJGaNT/gxozwKVasYw4vV9ZJWiWJP87YqlUjzvdhhp3aG7CQIT4yPYlD9Q6IK8TnGJfm1X0aIhsvEnZB0H7Dn7+y86jslPonxeTAjuhiGmUfgj5uOuLqW7dCH+IRFBiB6xw88lccczXawh5s2oOGP/j1U40fHHMJtRKnS6PM0+M5d0kvspixzsyNMmDXuX3cUU5mWXoPg7/EwupyOv5wcsnsBcpr2D/zTpboQVmnEHWV5f3oqHc0WRRoYg9psAMqzBPWBOs2iaVYQhGjcNlcWLQw81nIXtaFmpicr6NxuE2rm0NYaaH6AUcWI/6RjFpGlP9CPhjMen+I8KcwAH9wTJc0MK6W7u2ak57Qxoa+7Hj0gXDpg7AZmQ24j8R7SFS0= MIME-Version: 1.0 X-OriginatorOrg: garyguo.net X-MS-Exchange-CrossTenant-Network-Message-Id: 306b3dd3-aa4f-4043-5dc4-08d6ee100371 X-MS-Exchange-CrossTenant-originalarrivaltime: 11 Jun 2019 01:56:26.3223 (UTC) X-MS-Exchange-CrossTenant-fromentityheader: Hosted X-MS-Exchange-CrossTenant-id: bbc898ad-b10f-4e10-8552-d9377b823d45 X-MS-Exchange-CrossTenant-mailboxtype: HOSTED X-MS-Exchange-CrossTenant-userprincipalname: gary@garyguo.net X-MS-Exchange-Transport-CrossTenantHeadersStamped: LO2P265MB0464 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190610_185632_481312_05B2C392 X-CRM114-Status: GOOD ( 30.04 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: "julien.thierry@arm.com" , "aou@eecs.berkeley.edu" , "christoffer.dall@arm.com" , "marc.zyngier@arm.com" , "catalin.marinas@arm.com" , Anup Patel , Will Deacon , "linux-kernel@vger.kernel.org" , "rppt@linux.ibm.com" , Christoph Hellwig , Atish Patra , "james.morse@arm.com" , Paul Walmsley , "linux-riscv@lists.infradead.org" , "suzuki.poulose@arm.com" , "kvmarm@lists.cs.columbia.edu" , "linux-arm-kernel@lists.infradead.org" Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+infradead-linux-arm-kernel=archiver.kernel.org@lists.infradead.org Hi, On RISC-V, we can only use ASID if there are more ASIDs than CPUs. If there aren't enough ASIDs (or if there is only 1), then ASID feature is disabled and 0 is used everywhere. Best, Gary > -----Original Message----- > From: Palmer Dabbelt > Sent: Wednesday, June 5, 2019 21:42 > To: julien.grall@arm.com > Cc: linux-kernel@vger.kernel.org; linux-arm-kernel@lists.infradead.org; > kvmarm@lists.cs.columbia.edu; aou@eecs.berkeley.edu; Gary Guo > ; Atish Patra ; Christoph Hellwig > ; Paul Walmsley ; > rppt@linux.ibm.com; linux-riscv@lists.infradead.org; Anup Patel > ; christoffer.dall@arm.com; james.morse@arm.com; > marc.zyngier@arm.com; julien.thierry@arm.com; suzuki.poulose@arm.com; > catalin.marinas@arm.com; Will Deacon > Subject: Re: [PATCH RFC 11/14] arm64: Move the ASID allocator code in a > separate file > > On Wed, 05 Jun 2019 09:56:03 PDT (-0700), julien.grall@arm.com wrote: > > Hi, > > > > I am CCing RISC-V folks to see if there are an interest to share the code. > > > > @RISC-V: I noticed you are discussing about importing a version of ASID > > allocator in RISC-V. At a first look, the code looks quite similar. Would the > > library below helps you? > > Thanks! I didn't look that closely at the original patches because the > argument against them was just "we don't have any way to test this". > Unfortunately, we don't have the constraint that there are more ASIDs than > CPUs > in the system. As a result I don't think we can use this ASID allocation > strategy. 
> > > > > Cheers, > > > > On 21/03/2019 16:36, Julien Grall wrote: > >> We will want to re-use the ASID allocator in a separate context (e.g > >> allocating VMID). So move the code in a new file. > >> > >> The function asid_check_context has been moved in the header as a static > >> inline function because we want to avoid add a branch when checking if the > >> ASID is still valid. > >> > >> Signed-off-by: Julien Grall > >> > >> --- > >> > >> This code will be used in the virt code for allocating VMID. I am not > >> entirely sure where to place it. Lib could potentially be a good place but I > >> am not entirely convinced the algo as it is could be used by other > >> architecture. > >> > >> Looking at x86, it seems that it will not be possible to re-use because > >> the number of PCID (aka ASID) could be smaller than the number of CPUs. > >> See commit message 10af6235e0d327d42e1bad974385197817923dc1 > "x86/mm: > >> Implement PCID based optimization: try to preserve old TLB entries using > >> PCI". > >> --- > >> arch/arm64/include/asm/asid.h | 77 ++++++++++++++ > >> arch/arm64/lib/Makefile | 2 + > >> arch/arm64/lib/asid.c | 185 +++++++++++++++++++++++++++++++++ > >> arch/arm64/mm/context.c | 235 +----------------------------------------- > >> 4 files changed, 267 insertions(+), 232 deletions(-) > >> create mode 100644 arch/arm64/include/asm/asid.h > >> create mode 100644 arch/arm64/lib/asid.c > >> > >> diff --git a/arch/arm64/include/asm/asid.h b/arch/arm64/include/asm/asid.h > >> new file mode 100644 > >> index 000000000000..bb62b587f37f > >> --- /dev/null > >> +++ b/arch/arm64/include/asm/asid.h > >> @@ -0,0 +1,77 @@ > >> +/* SPDX-License-Identifier: GPL-2.0 */ > >> +#ifndef __ASM_ASM_ASID_H > >> +#define __ASM_ASM_ASID_H > >> + > >> +#include > >> +#include > >> +#include > >> +#include > >> +#include > >> + > >> +struct asid_info > >> +{ > >> + atomic64_t generation; > >> + unsigned long *map; > >> + atomic64_t __percpu *active; > >> + u64 __percpu *reserved; > >> + u32 bits; > >> + /* Lock protecting the structure */ > >> + raw_spinlock_t lock; > >> + /* Which CPU requires context flush on next call */ > >> + cpumask_t flush_pending; > >> + /* Number of ASID allocated by context (shift value) */ > >> + unsigned int ctxt_shift; > >> + /* Callback to locally flush the context. */ > >> + void (*flush_cpu_ctxt_cb)(void); > >> +}; > >> + > >> +#define NUM_ASIDS(info) (1UL << ((info)->bits)) > >> +#define NUM_CTXT_ASIDS(info) (NUM_ASIDS(info) >> (info)- > >ctxt_shift) > >> + > >> +#define active_asid(info, cpu) *per_cpu_ptr((info)->active, cpu) > >> + > >> +void asid_new_context(struct asid_info *info, atomic64_t *pasid, > >> + unsigned int cpu); > >> + > >> +/* > >> + * Check the ASID is still valid for the context. If not generate a new ASID. > >> + * > >> + * @pasid: Pointer to the current ASID batch > >> + * @cpu: current CPU ID. Must have been acquired throught get_cpu() > >> + */ > >> +static inline void asid_check_context(struct asid_info *info, > >> + atomic64_t *pasid, unsigned int cpu) > >> +{ > >> + u64 asid, old_active_asid; > >> + > >> + asid = atomic64_read(pasid); > >> + > >> + /* > >> + * The memory ordering here is subtle. > >> + * If our active_asid is non-zero and the ASID matches the current > >> + * generation, then we update the active_asid entry with a relaxed > >> + * cmpxchg. Racing with a concurrent rollover means that either: > >> + * > >> + * - We get a zero back from the cmpxchg and end up waiting on the > >> + * lock. 
Taking the lock synchronises with the rollover and so > >> + * we are forced to see the updated generation. > >> + * > >> + * - We get a valid ASID back from the cmpxchg, which means the > >> + * relaxed xchg in flush_context will treat us as reserved > >> + * because atomic RmWs are totally ordered for a given location. > >> + */ > >> + old_active_asid = atomic64_read(&active_asid(info, cpu)); > >> + if (old_active_asid && > >> + !((asid ^ atomic64_read(&info->generation)) >> info->bits) && > >> + atomic64_cmpxchg_relaxed(&active_asid(info, cpu), > >> + old_active_asid, asid)) > >> + return; > >> + > >> + asid_new_context(info, pasid, cpu); > >> +} > >> + > >> +int asid_allocator_init(struct asid_info *info, > >> + u32 bits, unsigned int asid_per_ctxt, > >> + void (*flush_cpu_ctxt_cb)(void)); > >> + > >> +#endif > >> diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile > >> index 5540a1638baf..720df5ee2aa2 100644 > >> --- a/arch/arm64/lib/Makefile > >> +++ b/arch/arm64/lib/Makefile > >> @@ -5,6 +5,8 @@ lib-y := clear_user.o delay.o > copy_from_user.o \ > >> memcmp.o strcmp.o strncmp.o strlen.o strnlen.o \ > >> strchr.o strrchr.o tishift.o > >> > >> +lib-y += asid.o > >> + > >> ifeq ($(CONFIG_KERNEL_MODE_NEON), y) > >> obj-$(CONFIG_XOR_BLOCKS) += xor-neon.o > >> CFLAGS_REMOVE_xor-neon.o += -mgeneral-regs-only > >> diff --git a/arch/arm64/lib/asid.c b/arch/arm64/lib/asid.c > >> new file mode 100644 > >> index 000000000000..72b71bfb32be > >> --- /dev/null > >> +++ b/arch/arm64/lib/asid.c > >> @@ -0,0 +1,185 @@ > >> +// SPDX-License-Identifier: GPL-2.0 > >> +/* > >> + * Generic ASID allocator. > >> + * > >> + * Based on arch/arm/mm/context.c > >> + * > >> + * Copyright (C) 2002-2003 Deep Blue Solutions Ltd, all rights reserved. > >> + * Copyright (C) 2012 ARM Ltd. > >> + */ > >> + > >> +#include > >> + > >> +#include > >> + > >> +#define reserved_asid(info, cpu) *per_cpu_ptr((info)->reserved, cpu) > >> + > >> +#define ASID_MASK(info) (~GENMASK((info)->bits - 1, 0)) > >> +#define ASID_FIRST_VERSION(info) (1UL << ((info)->bits)) > >> + > >> +#define asid2idx(info, asid) (((asid) & ~ASID_MASK(info)) >> (info)- > >ctxt_shift) > >> +#define idx2asid(info, idx) (((idx) << (info)->ctxt_shift) & > ~ASID_MASK(info)) > >> + > >> +static void flush_context(struct asid_info *info) > >> +{ > >> + int i; > >> + u64 asid; > >> + > >> + /* Update the list of reserved ASIDs and the ASID bitmap. */ > >> + bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info)); > >> + > >> + for_each_possible_cpu(i) { > >> + asid = atomic64_xchg_relaxed(&active_asid(info, i), 0); > >> + /* > >> + * If this CPU has already been through a > >> + * rollover, but hasn't run another task in > >> + * the meantime, we must preserve its reserved > >> + * ASID, as this is the only trace we have of > >> + * the process it is still running. > >> + */ > >> + if (asid == 0) > >> + asid = reserved_asid(info, i); > >> + __set_bit(asid2idx(info, asid), info->map); > >> + reserved_asid(info, i) = asid; > >> + } > >> + > >> + /* > >> + * Queue a TLB invalidation for each CPU to perform on next > >> + * context-switch > >> + */ > >> + cpumask_setall(&info->flush_pending); > >> +} > >> + > >> +static bool check_update_reserved_asid(struct asid_info *info, u64 asid, > >> + u64 newasid) > >> +{ > >> + int cpu; > >> + bool hit = false; > >> + > >> + /* > >> + * Iterate over the set of reserved ASIDs looking for a match. > >> + * If we find one, then we can update our mm to use newasid > >> + * (i.e. 
the same ASID in the current generation) but we can't > >> + * exit the loop early, since we need to ensure that all copies > >> + * of the old ASID are updated to reflect the mm. Failure to do > >> + * so could result in us missing the reserved ASID in a future > >> + * generation. > >> + */ > >> + for_each_possible_cpu(cpu) { > >> + if (reserved_asid(info, cpu) == asid) { > >> + hit = true; > >> + reserved_asid(info, cpu) = newasid; > >> + } > >> + } > >> + > >> + return hit; > >> +} > >> + > >> +static u64 new_context(struct asid_info *info, atomic64_t *pasid) > >> +{ > >> + static u32 cur_idx = 1; > >> + u64 asid = atomic64_read(pasid); > >> + u64 generation = atomic64_read(&info->generation); > >> + > >> + if (asid != 0) { > >> + u64 newasid = generation | (asid & ~ASID_MASK(info)); > >> + > >> + /* > >> + * If our current ASID was active during a rollover, we > >> + * can continue to use it and this was just a false alarm. > >> + */ > >> + if (check_update_reserved_asid(info, asid, newasid)) > >> + return newasid; > >> + > >> + /* > >> + * We had a valid ASID in a previous life, so try to re-use > >> + * it if possible. > >> + */ > >> + if (!__test_and_set_bit(asid2idx(info, asid), info->map)) > >> + return newasid; > >> + } > >> + > >> + /* > >> + * Allocate a free ASID. If we can't find one, take a note of the > >> + * currently active ASIDs and mark the TLBs as requiring flushes. We > >> + * always count from ASID #2 (index 1), as we use ASID #0 when setting > >> + * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd > >> + * pairs. > >> + */ > >> + asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), cur_idx); > >> + if (asid != NUM_CTXT_ASIDS(info)) > >> + goto set_asid; > >> + > >> + /* We're out of ASIDs, so increment the global generation count */ > >> + generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info), > >> + &info->generation); > >> + flush_context(info); > >> + > >> + /* We have more ASIDs than CPUs, so this will always succeed */ > >> + asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1); > >> + > >> +set_asid: > >> + __set_bit(asid, info->map); > >> + cur_idx = asid; > >> + return idx2asid(info, asid) | generation; > >> +} > >> + > >> +/* > >> + * Generate a new ASID for the context. > >> + * > >> + * @pasid: Pointer to the current ASID batch allocated. It will be updated > >> + * with the new ASID batch. > >> + * @cpu: current CPU ID. Must have been acquired through get_cpu() > >> + */ > >> +void asid_new_context(struct asid_info *info, atomic64_t *pasid, > >> + unsigned int cpu) > >> +{ > >> + unsigned long flags; > >> + u64 asid; > >> + > >> + raw_spin_lock_irqsave(&info->lock, flags); > >> + /* Check that our ASID belongs to the current generation. */ > >> + asid = atomic64_read(pasid); > >> + if ((asid ^ atomic64_read(&info->generation)) >> info->bits) { > >> + asid = new_context(info, pasid); > >> + atomic64_set(pasid, asid); > >> + } > >> + > >> + if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending)) > >> + info->flush_cpu_ctxt_cb(); > >> + > >> + atomic64_set(&active_asid(info, cpu), asid); > >> + raw_spin_unlock_irqrestore(&info->lock, flags); > >> +} > >> + > >> +/* > >> + * Initialize the ASID allocator > >> + * > >> + * @info: Pointer to the asid allocator structure > >> + * @bits: Number of ASIDs available > >> + * @asid_per_ctxt: Number of ASIDs to allocate per-context. ASIDs are > >> + * allocated contiguously for a given context. This value should be a power > of > >> + * 2. 
> >> + */
> >> +int asid_allocator_init(struct asid_info *info,
> >> + u32 bits, unsigned int asid_per_ctxt,
> >> + void (*flush_cpu_ctxt_cb)(void))
> >> +{
> >> + info->bits = bits;
> >> + info->ctxt_shift = ilog2(asid_per_ctxt);
> >> + info->flush_cpu_ctxt_cb = flush_cpu_ctxt_cb;
> >> + /*
> >> + * Expect allocation after rollover to fail if we don't have at least
> >> + * one more ASID than CPUs. ASID #0 is always reserved.
> >> + */
> >> + WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus());
> >> + atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
> >> + info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
> >> + sizeof(*info->map), GFP_KERNEL);
> >> + if (!info->map)
> >> + return -ENOMEM;
> >> +
> >> + raw_spin_lock_init(&info->lock);
> >> +
> >> + return 0;
> >> +}
> >> diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
> >> index 678a57b77c91..95ee7711a2ef 100644
> >> --- a/arch/arm64/mm/context.c
> >> +++ b/arch/arm64/mm/context.c
> >> @@ -22,47 +22,22 @@
> >> #include
> >> #include
> >>
> >> +#include
> >> #include
> >> #include
> >> #include
> >> #include
> >>
> >> -struct asid_info
> >> -{
> >> - atomic64_t generation;
> >> - unsigned long *map;
> >> - atomic64_t __percpu *active;
> >> - u64 __percpu *reserved;
> >> - u32 bits;
> >> - raw_spinlock_t lock;
> >> - /* Which CPU requires context flush on next call */
> >> - cpumask_t flush_pending;
> >> - /* Number of ASID allocated by context (shift value) */
> >> - unsigned int ctxt_shift;
> >> - /* Callback to locally flush the context. */
> >> - void (*flush_cpu_ctxt_cb)(void);
> >> -} asid_info;
> >> -
> >> -#define active_asid(info, cpu) *per_cpu_ptr((info)->active, cpu)
> >> -#define reserved_asid(info, cpu) *per_cpu_ptr((info)->reserved, cpu)
> >> -
> >> static DEFINE_PER_CPU(atomic64_t, active_asids);
> >> static DEFINE_PER_CPU(u64, reserved_asids);
> >>
> >> -#define ASID_MASK(info) (~GENMASK((info)->bits - 1, 0))
> >> -#define NUM_ASIDS(info) (1UL << ((info)->bits))
> >> -
> >> -#define ASID_FIRST_VERSION(info) NUM_ASIDS(info)
> >> -
> >> #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
> >> #define ASID_PER_CONTEXT 2
> >> #else
> >> #define ASID_PER_CONTEXT 1
> >> #endif
> >>
> >> -#define NUM_CTXT_ASIDS(info) (NUM_ASIDS(info) >> (info)->ctxt_shift)
> >> -#define asid2idx(info, asid) (((asid) & ~ASID_MASK(info)) >> (info)->ctxt_shift)
> >> -#define idx2asid(info, idx) (((idx) << (info)->ctxt_shift) & ~ASID_MASK(info))
> >> +struct asid_info asid_info;
> >>
> >> /* Get the ASIDBits supported by the current CPU */
> >> static u32 get_cpu_asid_bits(void)
> >> @@ -102,178 +77,6 @@ void verify_cpu_asid_bits(void)
> >> }
> >> }
> >>
> >> -static void flush_context(struct asid_info *info)
> >> -{
> >> - int i;
> >> - u64 asid;
> >> -
> >> - /* Update the list of reserved ASIDs and the ASID bitmap. */
> >> - bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info));
> >> -
> >> - for_each_possible_cpu(i) {
> >> - asid = atomic64_xchg_relaxed(&active_asid(info, i), 0);
> >> - /*
> >> - * If this CPU has already been through a
> >> - * rollover, but hasn't run another task in
> >> - * the meantime, we must preserve its reserved
> >> - * ASID, as this is the only trace we have of
> >> - * the process it is still running.
> >> - */
> >> - if (asid == 0)
> >> - asid = reserved_asid(info, i);
> >> - __set_bit(asid2idx(info, asid), info->map);
> >> - reserved_asid(info, i) = asid;
> >> - }
> >> -
> >> - /*
> >> - * Queue a TLB invalidation for each CPU to perform on next
> >> - * context-switch
> >> - */
> >> - cpumask_setall(&info->flush_pending);
> >> -}
> >> -
> >> -static bool check_update_reserved_asid(struct asid_info *info, u64 asid,
> >> - u64 newasid)
> >> -{
> >> - int cpu;
> >> - bool hit = false;
> >> -
> >> - /*
> >> - * Iterate over the set of reserved ASIDs looking for a match.
> >> - * If we find one, then we can update our mm to use newasid
> >> - * (i.e. the same ASID in the current generation) but we can't
> >> - * exit the loop early, since we need to ensure that all copies
> >> - * of the old ASID are updated to reflect the mm. Failure to do
> >> - * so could result in us missing the reserved ASID in a future
> >> - * generation.
> >> - */
> >> - for_each_possible_cpu(cpu) {
> >> - if (reserved_asid(info, cpu) == asid) {
> >> - hit = true;
> >> - reserved_asid(info, cpu) = newasid;
> >> - }
> >> - }
> >> -
> >> - return hit;
> >> -}
> >> -
> >> -static u64 new_context(struct asid_info *info, atomic64_t *pasid)
> >> -{
> >> - static u32 cur_idx = 1;
> >> - u64 asid = atomic64_read(pasid);
> >> - u64 generation = atomic64_read(&info->generation);
> >> -
> >> - if (asid != 0) {
> >> - u64 newasid = generation | (asid & ~ASID_MASK(info));
> >> -
> >> - /*
> >> - * If our current ASID was active during a rollover, we
> >> - * can continue to use it and this was just a false alarm.
> >> - */
> >> - if (check_update_reserved_asid(info, asid, newasid))
> >> - return newasid;
> >> -
> >> - /*
> >> - * We had a valid ASID in a previous life, so try to re-use
> >> - * it if possible.
> >> - */
> >> - if (!__test_and_set_bit(asid2idx(info, asid), info->map))
> >> - return newasid;
> >> - }
> >> -
> >> - /*
> >> - * Allocate a free ASID. If we can't find one, take a note of the
> >> - * currently active ASIDs and mark the TLBs as requiring flushes. We
> >> - * always count from ASID #2 (index 1), as we use ASID #0 when setting
> >> - * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
> >> - * pairs.
> >> - */
> >> - asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), cur_idx);
> >> - if (asid != NUM_CTXT_ASIDS(info))
> >> - goto set_asid;
> >> -
> >> - /* We're out of ASIDs, so increment the global generation count */
> >> - generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info),
> >> - &info->generation);
> >> - flush_context(info);
> >> -
> >> - /* We have more ASIDs than CPUs, so this will always succeed */
> >> - asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1);
> >> -
> >> -set_asid:
> >> - __set_bit(asid, info->map);
> >> - cur_idx = asid;
> >> - return idx2asid(info, asid) | generation;
> >> -}
> >> -
> >> -static void asid_new_context(struct asid_info *info, atomic64_t *pasid,
> >> - unsigned int cpu);
> >> -
> >> -/*
> >> - * Check the ASID is still valid for the context. If not generate a new ASID.
> >> - *
> >> - * @pasid: Pointer to the current ASID batch
> >> - * @cpu: current CPU ID. Must have been acquired throught get_cpu()
> >> - */
> >> -static void asid_check_context(struct asid_info *info,
> >> - atomic64_t *pasid, unsigned int cpu)
> >> -{
> >> - u64 asid, old_active_asid;
> >> -
> >> - asid = atomic64_read(pasid);
> >> -
> >> - /*
> >> - * The memory ordering here is subtle.
> >> - * If our active_asid is non-zero and the ASID matches the current
> >> - * generation, then we update the active_asid entry with a relaxed
> >> - * cmpxchg. Racing with a concurrent rollover means that either:
> >> - *
> >> - * - We get a zero back from the cmpxchg and end up waiting on the
> >> - * lock. Taking the lock synchronises with the rollover and so
> >> - * we are forced to see the updated generation.
> >> - *
> >> - * - We get a valid ASID back from the cmpxchg, which means the
> >> - * relaxed xchg in flush_context will treat us as reserved
> >> - * because atomic RmWs are totally ordered for a given location.
> >> - */
> >> - old_active_asid = atomic64_read(&active_asid(info, cpu));
> >> - if (old_active_asid &&
> >> - !((asid ^ atomic64_read(&info->generation)) >> info->bits) &&
> >> - atomic64_cmpxchg_relaxed(&active_asid(info, cpu),
> >> - old_active_asid, asid))
> >> - return;
> >> -
> >> - asid_new_context(info, pasid, cpu);
> >> -}
> >> -
> >> -/*
> >> - * Generate a new ASID for the context.
> >> - *
> >> - * @pasid: Pointer to the current ASID batch allocated. It will be updated
> >> - * with the new ASID batch.
> >> - * @cpu: current CPU ID. Must have been acquired through get_cpu()
> >> - */
> >> -static void asid_new_context(struct asid_info *info, atomic64_t *pasid,
> >> - unsigned int cpu)
> >> -{
> >> - unsigned long flags;
> >> - u64 asid;
> >> -
> >> - raw_spin_lock_irqsave(&info->lock, flags);
> >> - /* Check that our ASID belongs to the current generation. */
> >> - asid = atomic64_read(pasid);
> >> - if ((asid ^ atomic64_read(&info->generation)) >> info->bits) {
> >> - asid = new_context(info, pasid);
> >> - atomic64_set(pasid, asid);
> >> - }
> >> -
> >> - if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending))
> >> - info->flush_cpu_ctxt_cb();
> >> -
> >> - atomic64_set(&active_asid(info, cpu), asid);
> >> - raw_spin_unlock_irqrestore(&info->lock, flags);
> >> -}
> >> -
> >> void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
> >> {
> >> if (system_supports_cnp())
> >> @@ -305,38 +108,6 @@ static void asid_flush_cpu_ctxt(void)
> >> local_flush_tlb_all();
> >> }
> >>
> >> -/*
> >> - * Initialize the ASID allocator
> >> - *
> >> - * @info: Pointer to the asid allocator structure
> >> - * @bits: Number of ASIDs available
> >> - * @asid_per_ctxt: Number of ASIDs to allocate per-context. ASIDs are
> >> - * allocated contiguously for a given context. This value should be a power of
> >> - * 2.
> >> - */
> >> -static int asid_allocator_init(struct asid_info *info,
> >> - u32 bits, unsigned int asid_per_ctxt,
> >> - void (*flush_cpu_ctxt_cb)(void))
> >> -{
> >> - info->bits = bits;
> >> - info->ctxt_shift = ilog2(asid_per_ctxt);
> >> - info->flush_cpu_ctxt_cb = flush_cpu_ctxt_cb;
> >> - /*
> >> - * Expect allocation after rollover to fail if we don't have at least
> >> - * one more ASID than CPUs. ASID #0 is always reserved.
> >> - */
> >> - WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus());
> >> - atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
> >> - info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
> >> - sizeof(*info->map), GFP_KERNEL);
> >> - if (!info->map)
> >> - return -ENOMEM;
> >> -
> >> - raw_spin_lock_init(&info->lock);
> >> -
> >> - return 0;
> >> -}
> >> -
> >> static int asids_init(void)
> >> {
> >> u32 bits = get_cpu_asid_bits();
> >> @@ -344,7 +115,7 @@ static int asids_init(void)
> >> if (!asid_allocator_init(&asid_info, bits, ASID_PER_CONTEXT,
> >> asid_flush_cpu_ctxt))
> >> panic("Unable to initialize ASID allocator for %lu ASIDs\n",
> >> - 1UL << bits);
> >> + NUM_ASIDS(&asid_info));
> >>
> >> asid_info.active = &active_asids;
> >> asid_info.reserved = &reserved_asids;
> >> _______________________________________________
> >> linux-arm-kernel mailing list
> >> linux-arm-kernel@lists.infradead.org
> >> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
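
For anyone wiring this up on another architecture, here is a minimal sketch of
the expected call sequence, mirroring the arm64 usage quoted above. Everything
prefixed with my_ (and the hard-coded bits value, the header path, and the
mm->context.id field) is illustrative only, not part of the patch:

#include <linux/kernel.h>
#include <linux/mm_types.h>
#include <linux/percpu.h>

#include <asm/asid.h>		/* assumed location of the new asid_info API */
#include <asm/tlbflush.h>

static DEFINE_PER_CPU(atomic64_t, my_active_asids);
static DEFINE_PER_CPU(u64, my_reserved_asids);
static struct asid_info my_asid_info;

/* Local TLB invalidation, run lazily on the first context switch
 * after a rollover has been observed on this CPU. */
static void my_asid_flush_cpu_ctxt(void)
{
	local_flush_tlb_all();
}

static int __init my_asids_init(void)
{
	u32 bits = 16;	/* assumed: probed from hardware in practice */

	if (asid_allocator_init(&my_asid_info, bits, 1,
				my_asid_flush_cpu_ctxt))
		panic("Unable to initialize ASID allocator\n");

	my_asid_info.active = &my_active_asids;
	my_asid_info.reserved = &my_reserved_asids;
	return 0;
}
early_initcall(my_asids_init);

/* Called on every context switch with preemption disabled;
 * mm->context.id is assumed to be an atomic64_t holding the
 * generation+ASID batch for that mm. */
void my_check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
{
	asid_check_context(&my_asid_info, &mm->context.id, cpu);
	/* ...then program the ASID / page-table base into hardware. */
}

The fast path here is the lock-free asid_check_context(); info->lock is only
taken in asid_new_context() when the generation has rolled over or the cmpxchg
loses a race, which is what keeps the allocator cheap enough to call on every
context switch.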