From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH 1/2] mm/memcg: try harder to decrease [memory,memsw].limit_in_bytes
From: Andrey Ryabinin <aryabinin@virtuozzo.com>
To: Shakeel Butt, Michal Hocko
Cc: Andrew Morton, Johannes Weiner, Vladimir Davydov, Cgroups, Linux MM, LKML
Date: Thu, 21 Dec 2017 13:00:46 +0300
Message-ID: <5db8aef5-2d5e-1e3b-d121-778fc4bd6875@virtuozzo.com>
References: <20171220102429.31601-1-aryabinin@virtuozzo.com>
 <20171220103337.GL4831@dhcp22.suse.cz>
 <6e9ee949-c203-621d-890f-25a432bd4bb3@virtuozzo.com>
 <20171220113404.GN4831@dhcp22.suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

On 12/20/2017 09:15 PM, Shakeel Butt wrote:
> On Wed, Dec 20, 2017 at 3:34 AM, Michal Hocko wrote:
>> On Wed 20-12-17 14:32:19, Andrey Ryabinin wrote:
>>> On 12/20/2017 01:33 PM, Michal Hocko wrote:
>>>> On Wed 20-12-17 13:24:28, Andrey Ryabinin wrote:
>>>>> mem_cgroup_resize_[memsw]_limit() tries to free only 32
>>>>> (SWAP_CLUSTER_MAX) pages on each iteration. This makes it
>>>>> practically impossible to decrease the limit of a memory cgroup.
>>>>> Tasks can easily allocate 32 pages back, so we can't reduce memory
>>>>> usage, and once retry_count reaches zero we return -EBUSY.
>>>>>
>>>>> It's easy to reproduce the problem by running the following commands:
>>>>>
>>>>>   mkdir /sys/fs/cgroup/memory/test
>>>>>   echo $$ >> /sys/fs/cgroup/memory/test/tasks
>>>>>   cat big_file > /dev/null &
>>>>>   sleep 1 && echo $((100*1024*1024)) > /sys/fs/cgroup/memory/test/memory.limit_in_bytes
>>>>>   -bash: echo: write error: Device or resource busy
>>>>>
>>>>> Instead of trying to free a small number of pages, it's much more
>>>>> reasonable to free 'usage - limit' pages.
>>>>
>>>> But that only makes the issue less probable. It doesn't fix it,
>>>> because
>>>>
>>>>   if (curusage >= oldusage)
>>>>           retry_count--;
>>>>
>>>> can still be true because the allocator might be faster than the
>>>> reclaimer. Wouldn't it be more reasonable to simply remove the
>>>> retry count and keep trying until interrupted or until we manage to
>>>> update the limit?
>>>
>>> But does it make sense to continue reclaiming even if the reclaimer
>>> can't make any progress? I'd say no. "Allocator is faster than
>>> reclaimer" may not be the only reason for failed reclaim. E.g. we
>>> could try to set the limit lower than the amount of mlock()ed memory
>>> in the cgroup; retrying reclaim would just be a waste of the
>>> machine's resources. Or we simply don't have any swap, and
>>> anon > new_limit. Should we burn the CPU in that case?
>>
>> We can check the number of reclaimed pages and return EBUSY if it is 0.
>>
>>>> Another option would be to commit the new limit and allow temporary
>>>> overcommit of the hard limit. New allocations and the limit update
>>>> paths would reclaim to the hard limit.
>>>
>>> It sounds a bit fragile and tricky to me. I wouldn't go that way
>>> unless we have a very good reason for it.
>>
>> I haven't explored this, to be honest, so there may be dragons that
>> way. I've just mentioned that option for completeness.
>>
>
> We already do this for cgroup-v2's memory.max. So I don't think it is
> fragile or tricky.
>

It has the potential to break userspace expectations. Userspace might
expect that lowering limit_in_bytes too much fails with EBUSY and does
not trigger the OOM killer.