From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH 1/1] psi: stop relying on timer_pending for poll_work rescheduling
From: YT Chang <yt.chang@mediatek.com>
To: Suren Baghdasaryan <surenb@google.com>
Date: Mon, 21 Jun 2021 20:30:57 +0800
In-Reply-To: <20210617212654.1529125-1-surenb@google.com>
References: <20210617212654.1529125-1-surenb@google.com>

On Thu, 2021-06-17 at 14:26 -0700, Suren Baghdasaryan wrote:
> Psi polling mechanism is trying to minimize the number of wakeups to
> run psi_poll_work and is currently relying on timer_pending() to detect
> when this work is already scheduled. This provides a window of
> opportunity for psi_group_change to schedule an immediate psi_poll_work
> after poll_timer_fn got called but before psi_poll_work could
> reschedule itself. Below is the depiction of this entire window:
>
> poll_timer_fn
>   wake_up_interruptible(&group->poll_wait);
>
> psi_poll_worker
>   wait_event_interruptible(group->poll_wait, ...)
>   psi_poll_work
>     psi_schedule_poll_work
>       if (timer_pending(&group->poll_timer)) return;
>       ...
>       mod_timer(&group->poll_timer, jiffies + delay);
>
> Prior to 461daba06bdc we used to rely on poll_scheduled atomic which
> was reset and set back inside psi_poll_work and therefore this race
> window was much smaller.
> The larger window causes increased number of wakeups and our partners
> report visible power regression of ~10mA after applying 461daba06bdc.
> Bring back the poll_scheduled atomic and make this race window even
> narrower by resetting poll_scheduled only when we reach polling
> expiration time. This does not completely eliminate the possibility of
> extra wakeups caused by a race with psi_group_change however it will
> limit it to the worst case scenario of one extra wakeup per every
> tracking window (0.5s in the worst case).
> By tracing the number of immediate rescheduling attempts performed by
> psi_group_change and the number of these attempts being blocked due to
> psi monitor being already active, we can assess the effects of this
> change:
>
> Before the patch:
>                                              Run#1     Run#2     Run#3
> Immediate reschedules attempted:            684365   1385156   1261240
> Immediate reschedules blocked:              682846   1381654   1258682
> Immediate reschedules (delta):                1519      3502      2558
> Immediate reschedules (% of attempted):      0.22%     0.25%     0.20%
>
> After the patch:
>                                              Run#1     Run#2     Run#3
> Immediate reschedules attempted:            882244    770298    426218
> Immediate reschedules blocked:              881996    769796    426074
> Immediate reschedules (delta):                 248       502       144
> Immediate reschedules (% of attempted):      0.03%     0.07%     0.03%
>
> The number of non-blocked immediate reschedules dropped from 0.22-0.25%
> to 0.03-0.07%. The drop is attributed to the decrease in the race
> window size and the fact that we allow this race only when psi monitors
> reach polling window expiration time.
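
For anyone skimming the diff below, the guard being restored is a plain
test-and-set flag rather than a query of timer state: the caller that flips
the flag from 0 to 1 owns the right to arm the timer, and the flag is only
cleared at a well-defined point. The following is a rough, illustrative
sketch of that pattern only, not the actual psi code; the names
"scheduled", "schedule_once" and "polling_window_expired" are made up for
the example, while the atomic/timer helpers are the usual kernel ones.

#include <linux/atomic.h>
#include <linux/jiffies.h>
#include <linux/timer.h>
#include <linux/types.h>

static atomic_t scheduled = ATOMIC_INIT(0);

/*
 * Illustrative sketch: "scheduled" plays the role of
 * group->poll_scheduled. The cmpxchg is performed even when force is
 * set, so the flag always ends up 1 once a timer is armed and later
 * non-forced callers back off instead of arming it again.
 */
static void schedule_once(struct timer_list *timer, unsigned long delay,
			  bool force)
{
	if (atomic_cmpxchg(&scheduled, 0, 1) != 0 && !force)
		return;		/* someone else already armed the timer */

	mod_timer(timer, jiffies + delay);
}

/* Cleared only when the polling window expires, keeping the race window small. */
static void polling_window_expired(void)
{
	atomic_set(&scheduled, 0);
}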

The regression power data points of an Android phone idle on the home
screen:

  Original          : baseline
  Original + patch  : -21.5% (-11.5 mA)

Tested-by: SH Chen <show-hong.chen@mediatek.com>

> Fixes: 461daba06bdc ("psi: eliminate kthread_worker from psi trigger
> scheduling mechanism")
> Reported-by: Kathleen Chang <yt.chang@mediatek.com>
> Reported-by: Wenju Xu <wenju.xu@mediatek.com>
> Reported-by: Jonathan Chen <jonathan.jmchen@mediatek.com>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> ---
>  include/linux/psi_types.h |  1 +
>  kernel/sched/psi.c        | 41 ++++++++++++++++++++++++++++-----------
>  2 files changed, 31 insertions(+), 11 deletions(-)
>
> diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h
> index 0a23300d49af..ef8bd89d065e 100644
> --- a/include/linux/psi_types.h
> +++ b/include/linux/psi_types.h
> @@ -158,6 +158,7 @@ struct psi_group {
>  	struct timer_list poll_timer;
>  	wait_queue_head_t poll_wait;
>  	atomic_t poll_wakeup;
> +	atomic_t poll_scheduled;
>  
>  	/* Protects data used by the monitor */
>  	struct mutex trigger_lock;
> diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
> index cc25a3cff41f..fed7c9c2b276 100644
> --- a/kernel/sched/psi.c
> +++ b/kernel/sched/psi.c
> @@ -193,6 +193,7 @@ static void group_init(struct psi_group *group)
>  	INIT_DELAYED_WORK(&group->avgs_work, psi_avgs_work);
>  	mutex_init(&group->avgs_lock);
>  	/* Init trigger-related members */
> +	atomic_set(&group->poll_scheduled, 0);
>  	mutex_init(&group->trigger_lock);
>  	INIT_LIST_HEAD(&group->triggers);
>  	memset(group->nr_triggers, 0, sizeof(group->nr_triggers));
> @@ -551,18 +552,14 @@ static u64 update_triggers(struct psi_group *group, u64 now)
>  	return now + group->poll_min_period;
>  }
>  
> -/* Schedule polling if it's not already scheduled. */
> -static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay)
> +/* Schedule polling if it's not already scheduled or forced. */
> +static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay,
> +				   bool force)
>  {
>  	struct task_struct *task;
>  
> -	/*
> -	 * Do not reschedule if already scheduled.
> -	 * Possible race with a timer scheduled after this check but before
> -	 * mod_timer below can be tolerated because group->polling_next_update
> -	 * will keep updates on schedule.
> -	 */
> -	if (timer_pending(&group->poll_timer))
> +	/* cmpxchg should be called even when !force to set poll_scheduled */
> +	if (atomic_cmpxchg(&group->poll_scheduled, 0, 1) != 0 && !force)
>  		return;
>  
>  	rcu_read_lock();
> @@ -574,12 +571,15 @@ static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay)
>  	 */
>  	if (likely(task))
>  		mod_timer(&group->poll_timer, jiffies + delay);
> +	else
> +		atomic_set(&group->poll_scheduled, 0);
>  
>  	rcu_read_unlock();
>  }
>  
>  static void psi_poll_work(struct psi_group *group)
>  {
> +	bool force_reschedule = false;
>  	u32 changed_states;
>  	u64 now;
>  
> @@ -587,6 +587,23 @@ static void psi_poll_work(struct psi_group *group)
>  
>  	now = sched_clock();
>  
> +	if (now > group->polling_until) {
> +		/*
> +		 * We are either about to start or might stop polling if no
> +		 * state change was recorded. Resetting poll_scheduled leaves
> +		 * a small window for psi_group_change to sneak in and schedule
> +		 * an immediate poll_work before we get to rescheduling. One
> +		 * potential extra wakeup at the end of the polling window
> +		 * should be negligible and polling_next_update still keeps
> +		 * updates correctly on schedule.
> +		 */
> +		atomic_set(&group->poll_scheduled, 0);
> +	} else {
> +		/* Polling window is not over, keep rescheduling */
> +		force_reschedule = true;
> +	}
> +
> +
>  	collect_percpu_times(group, PSI_POLL, &changed_states);
>  
>  	if (changed_states & group->poll_states) {
> @@ -612,7 +629,8 @@ static void psi_poll_work(struct psi_group *group)
>  		group->polling_next_update = update_triggers(group, now);
>  
>  	psi_schedule_poll_work(group,
> -		nsecs_to_jiffies(group->polling_next_update - now) + 1);
> +		nsecs_to_jiffies(group->polling_next_update - now) + 1,
> +		force_reschedule);
>  
>  out:
>  	mutex_unlock(&group->trigger_lock);
> @@ -736,7 +754,7 @@ static void psi_group_change(struct psi_group *group, int cpu,
>  	write_seqcount_end(&groupc->seq);
>  
>  	if (state_mask & group->poll_states)
> -		psi_schedule_poll_work(group, 1);
> +		psi_schedule_poll_work(group, 1, false);
>  
>  	if (wake_clock && !delayed_work_pending(&group->avgs_work))
>  		schedule_delayed_work(&group->avgs_work, PSI_FREQ);
> @@ -1235,6 +1253,7 @@ static void psi_trigger_destroy(struct kref *ref)
>  		 */
>  		del_timer_sync(&group->poll_timer);
>  		kthread_stop(task_to_destroy);
> +		atomic_set(&group->poll_scheduled, 0);
>  	}
>  	kfree(t);
>  }
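
As additional context, the polling path touched above only runs while
userspace has a trigger registered against one of the pressure files. A
minimal monitor, modelled on the example in Documentation/accounting/psi.rst,
looks roughly like the sketch below; the pressure file and the thresholds
("some 150000 1000000", i.e. 150 ms of stall per 1 s window) are just
illustrative choices, not values used in the patch.

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Wake up when "some" memory stall time exceeds 150 ms in a 1 s window. */
	const char trig[] = "some 150000 1000000";
	struct pollfd fds;
	int fd;

	fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
	if (fd < 0) {
		perror("open /proc/pressure/memory");
		return 1;
	}

	/* Registering the trigger is what starts the kernel-side polling
	 * worker whose rescheduling behaviour the patch above adjusts. */
	if (write(fd, trig, strlen(trig) + 1) < 0) {
		perror("write trigger");
		return 1;
	}

	fds.fd = fd;
	fds.events = POLLPRI;

	while (1) {
		if (poll(&fds, 1, -1) < 0) {
			perror("poll");
			return 1;
		}
		if (fds.revents & POLLERR) {
			fprintf(stderr, "event source went away\n");
			return 1;
		}
		if (fds.revents & POLLPRI)
			printf("memory pressure event\n");
	}
	return 0;
}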