From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1758238Ab2FZHPH (ORCPT ); Tue, 26 Jun 2012 03:15:07 -0400
Received: from mx2.parallels.com ([64.131.90.16]:42459 "EHLO mx2.parallels.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1755824Ab2FZHPF (ORCPT ); Tue, 26 Jun 2012 03:15:05 -0400
Message-ID: <4FE960D6.4040409@parallels.com>
Date: Tue, 26 Jun 2012 11:12:22 +0400
From: Glauber Costa
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:13.0) Gecko/20120605 Thunderbird/13.0
MIME-Version: 1.0
To: David Rientjes
CC: cgroups@vger.kernel.org, linux-mm@kvack.org, Andrew Morton,
	linux-kernel@vger.kernel.org, Frederic Weisbecker, Pekka Enberg,
	Michal Hocko, Johannes Weiner, Christoph Lameter, devel@openvz.org,
	kamezawa.hiroyu@jp.fujitsu.com, Tejun Heo, Suleiman Souhlal
Subject: Re: [PATCH 02/11] memcg: Reclaim when more than one page needed.
References: <1340633728-12785-1-git-send-email-glommer@parallels.com> <1340633728-12785-3-git-send-email-glommer@parallels.com>
In-Reply-To: 
Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
Content-Transfer-Encoding: 7bit
X-Originating-IP: [109.173.9.3]
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

>
>> + * retries
>> + */
>> +#define NR_PAGES_TO_RETRY 2
>> +
>
> Should be 1 << PAGE_ALLOC_COSTLY_ORDER? Where does this number come from?
> The changelog doesn't specify.

Hocko complained about that, and I changed it. Where the number comes from
is stated in the comment: it is a number small enough to have a high chance
of having been freed by the previous reclaim, and yet around the number of
pages of a typical kernel allocation.

Of course there are allocations with nr_pages > 2, but 2 will already
service the stack most of the time, and most of the slab caches.
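The reasoning above (retry only when the request is small enough to have a real chance of fitting in what reclaim just freed) can be modeled in user space. This is a hypothetical simplification for illustration only, not the kernel code; the names NR_PAGES_TO_RETRY, CHARGE_RETRY and CHARGE_NOMEM mirror the patch, and `reclaimed` stands in for the reclaim-progress value the real function checks:

```c
/* Hypothetical user-space model of the post-reclaim retry decision
 * discussed above; a sketch, not the kernel implementation. */
#define NR_PAGES_TO_RETRY 2

enum charge_result { CHARGE_RETRY, CHARGE_NOMEM };

/* After a reclaim pass, only small requests are worth retrying: one or
 * two pages have a high chance of having been freed by the reclaim we
 * just did, while bigger requests are unlikely to succeed so close to
 * the limit. */
static enum charge_result after_reclaim(unsigned int nr_pages, int reclaimed)
{
	if (nr_pages <= NR_PAGES_TO_RETRY && reclaimed)
		return CHARGE_RETRY;
	return CHARGE_NOMEM;
}
```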
>> static int mem_cgroup_do_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
>> -				unsigned int nr_pages, bool oom_check)
>> +				unsigned int nr_pages, unsigned int min_pages,
>> +				bool oom_check)
>> {
>>  	unsigned long csize = nr_pages * PAGE_SIZE;
>>  	struct mem_cgroup *mem_over_limit;
>> @@ -2182,18 +2190,18 @@ static int mem_cgroup_do_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
>>  	} else
>>  		mem_over_limit = mem_cgroup_from_res_counter(fail_res, res);
>>  	/*
>> -	 * nr_pages can be either a huge page (HPAGE_PMD_NR), a batch
>> -	 * of regular pages (CHARGE_BATCH), or a single regular page (1).
>> -	 *
>>  	 * Never reclaim on behalf of optional batching, retry with a
>>  	 * single page instead.
>>  	 */
>> -	if (nr_pages == CHARGE_BATCH)
>> +	if (nr_pages > min_pages)
>>  		return CHARGE_RETRY;
>> 
>>  	if (!(gfp_mask & __GFP_WAIT))
>>  		return CHARGE_WOULDBLOCK;
>> 
>> +	if (gfp_mask & __GFP_NORETRY)
>> +		return CHARGE_NOMEM;
>> +
>>  	ret = mem_cgroup_reclaim(mem_over_limit, gfp_mask, flags);
>>  	if (mem_cgroup_margin(mem_over_limit) >= nr_pages)
>>  		return CHARGE_RETRY;
>> @@ -2206,7 +2214,7 @@ static int mem_cgroup_do_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
>>  	 * unlikely to succeed so close to the limit, and we fall back
>>  	 * to regular pages anyway in case of failure.
>>  	 */
>> -	if (nr_pages == 1 && ret)
>> +	if (nr_pages <= NR_PAGES_TO_RETRY && ret)
>>  		return CHARGE_RETRY;
>> 
>>  	/*
>> @@ -2341,7 +2349,8 @@ again:
>>  		nr_oom_retries = MEM_CGROUP_RECLAIM_RETRIES;
>>  	}
>> 
>> -	ret = mem_cgroup_do_charge(memcg, gfp_mask, batch, oom_check);
>> +	ret = mem_cgroup_do_charge(memcg, gfp_mask, batch, nr_pages,
>> +				   oom_check);
>>  	switch (ret) {
>>  	case CHARGE_OK:
>>  		break;
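The min_pages change in the hunks above separates "what we tried to charge" (an inflated batch) from "what the caller actually needs". A minimal sketch of that one decision, as a hypothetical user-space model rather than the kernel code:

```c
#include <stdbool.h>

/* Hypothetical model of the batching fallback in the patch: if the
 * charge was inflated for batching (nr_pages > min_pages), never
 * reclaim on its behalf -- signal a retry so the caller charges only
 * the pages it actually needs. */
static bool retry_with_min_pages(unsigned int nr_pages,
				 unsigned int min_pages)
{
	return nr_pages > min_pages;
}
```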