Subject: Re: [PATCH v5 2/2] mm/memcontrol.c: Reduce reclaim retries in mem_cgroup_resize_limit()
From: Andrey Ryabinin
Date: Thu, 22 Feb 2018 18:38:11 +0300
Message-ID: <0927bcab-7e2c-c6f9-d16a-315ac436ba98@virtuozzo.com>
In-Reply-To: <20180222153343.GN30681@dhcp22.suse.cz>
References: <20171220102429.31601-1-aryabinin@virtuozzo.com> <20180119132544.19569-1-aryabinin@virtuozzo.com> <20180119132544.19569-2-aryabinin@virtuozzo.com> <20180119133510.GD6584@dhcp22.suse.cz> <20180119151118.GE6584@dhcp22.suse.cz> <20180221121715.0233d34dda330c56e1a9db5f@linux-foundation.org> <20180222140932.GL30681@dhcp22.suse.cz> <20180222153343.GN30681@dhcp22.suse.cz>
To: Michal Hocko
Cc: Andrew Morton, Shakeel Butt, Cgroups, LKML, Linux MM, Johannes Weiner, Vladimir Davydov

On 02/22/2018 06:33 PM, Michal Hocko wrote:
> On Thu 22-02-18 18:13:11, Andrey Ryabinin wrote:
>> On 02/22/2018 05:09 PM, Michal Hocko wrote:
>>> On Thu 22-02-18 16:50:33, Andrey Ryabinin wrote:
>>>> On 02/21/2018 11:17 PM, Andrew Morton wrote:
>>>>> On Fri, 19 Jan 2018 16:11:18 +0100 Michal Hocko wrote:
>>>>>
>>>>>> And to be honest, I do not really see why retrying from
>>>>>> mem_cgroup_resize_limit() should be so much faster than retrying from
>>>>>> the direct reclaim path. We are doing SWAP_CLUSTER_MAX batches anyway.
>>>>>> The mem_cgroup_resize_limit() loop adds _some_ overhead, but I am not really
>>>>>> sure why it should be that large.
>>>>>
>>>>> Maybe restarting the scan lots of times results in rescanning lots of
>>>>> ineligible pages at the start of the list before doing useful work?
>>>>>
>>>>> Andrey, are you able to determine where all that CPU time is being spent?
>>>>
>>>> I should have been more specific about the test I did. The full script looks like this:
>>>>
>>>> mkdir -p /sys/fs/cgroup/memory/test
>>>> echo $$ > /sys/fs/cgroup/memory/test/tasks
>>>> cat 4G_file > /dev/null
>>>> while true; do cat 4G_file > /dev/null; done &
>>>> loop_pid=$!
>>>> perf stat echo 50M > /sys/fs/cgroup/memory/test/memory.limit_in_bytes
>>>> echo -1 > /sys/fs/cgroup/memory/test/memory.limit_in_bytes
>>>> kill $loop_pid
>>>>
>>>> I think the additional loops add some overhead. It is not that big by itself, but
>>>> this small overhead lets the task refill slightly more pages, increasing
>>>> the total number of pages that mem_cgroup_resize_limit() needs to reclaim.
>>>>
>>>> Using the following commands to show the amount of reclaimed pages:
>>>>
>>>> perf record -e vmscan:mm_vmscan_memcg_reclaim_end echo 50M > /sys/fs/cgroup/memory/test/memory.limit_in_bytes
>>>> perf script | cut -d '=' -f 2 | paste -sd+ | bc
>>>>
>>>> I got 1259841 pages (4.9G) reclaimed with the patch vs. 1394312 pages (5.4G) without it.
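
For reference, the loop being debated here is the limit-shrinking retry loop in
mm/memcontrol.c. A minimal sketch of its shape (assuming a v4.15-era kernel;
locking, the memsw counter, the retry cap and OOM recovery are left out, and
resize_limit_sketch() is a made-up name, not the upstream function):

/*
 * Sketch only: shrink memcg's memory limit to 'limit', reclaiming
 * until current usage fits under it.
 */
static int resize_limit_sketch(struct mem_cgroup *memcg, unsigned long limit)
{
	int ret;

	do {
		if (signal_pending(current))
			return -EINTR;

		/* Succeeds only once usage already fits under the new limit. */
		ret = page_counter_limit(&memcg->memory, limit);
		if (!ret)
			break;

		/*
		 * Usage is still above the new limit: reclaim a little and
		 * retry.  vmscan works through the LRUs in SWAP_CLUSTER_MAX
		 * batches, so while a producer keeps refilling the LRUs,
		 * every extra trip around this outer loop gives the workload
		 * a chance to win back some of the pages just reclaimed.
		 */
		if (!try_to_free_mem_cgroup_pages(memcg, 1, GFP_KERNEL, true))
			return -EBUSY;
	} while (true);

	return ret;
}

Each pass around the outer loop is cheap on its own; the numbers above are about
how many extra passes (and how much re-reclaimed memory) a concurrent reader can
force before the limit finally sticks.
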
>>>
>>> So how does the picture change if you have multiple producers?
>>>
>>
>> Drastically, in favor of the patch, but the numbers are *very* fickle from run to run.
>>
>> Inside a 5G VM with 4 CPUs (qemu -m 5G -smp 4) and 4 processes in the cgroup reading 1G files:
>> "while true; do cat /1g_f$i > /dev/null; done &"
>>
>> with the patch:
>>   best:  1.04 sec, 9.7G reclaimed
>>   worst: 2.2 sec, 16G reclaimed
>>
>> without:
>>   best:  5.4 sec, 35G reclaimed
>>   worst: 22.2 sec, 136G reclaimed
>
> Could you also compare how much memory we reclaim with/without the
> patch?

I did, and I wrote the results. Please look again.
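
The four-producer run above can be driven with a script along these lines (a sketch
only: the /1g_f1 .. /1g_f4 file names, the cgroup path and the 50M target are pieced
together from the snippets in this thread, not the exact script behind the reported
numbers):

#!/bin/sh
# Sketch of the multi-producer test: four concurrent readers inside one
# memory cgroup, then measure how long shrinking the limit takes.
# Assumes 1G files /1g_f1 .. /1g_f4 already exist.
mkdir -p /sys/fs/cgroup/memory/test
echo $$ > /sys/fs/cgroup/memory/test/tasks

pids=""
for i in 1 2 3 4; do
	while true; do cat /1g_f$i > /dev/null; done &
	pids="$pids $!"
done

# The write blocks until mem_cgroup_resize_limit() gets usage below the
# new limit; perf stat reports how long that took.
perf stat echo 50M > /sys/fs/cgroup/memory/test/memory.limit_in_bytes

echo -1 > /sys/fs/cgroup/memory/test/memory.limit_in_bytes
kill $pids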