From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754832Ab1FMVcU (ORCPT ); Mon, 13 Jun 2011 17:32:20 -0400
Received: from smtp1.linux-foundation.org ([140.211.169.13]:57033 "EHLO
	smtp1.linux-foundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753009Ab1FMVcT (ORCPT ); Mon, 13 Jun 2011 17:32:19 -0400
Date: Mon, 13 Jun 2011 14:31:44 -0700
From: Andrew Morton
To: aquini@linux.com
Cc: Russ Anderson, Andrea Arcangeli, linux-mm, linux-kernel,
	Christoph Lameter, rja@americas.sgi.com
Subject: Re: [PATCH] mm: fix negative commitlimit when gigantic hugepages are allocated
Message-Id: <20110613143144.c457ff42.akpm@linux-foundation.org>
In-Reply-To: <20110613211153.GA23597@optiplex.tchesoft.com>
References: <20110519045630.GA22533@sgi.com>
	<20110519221101.GC19648@sgi.com>
	<20110520130411.d1e0baef.akpm@linux-foundation.org>
	<20110520223032.GA15192@x61.tchesoft.com>
	<20110526210751.GA14819@optiplex.tchesoft.com>
	<20110602040821.GA7934@sgi.com>
	<20110603025555.GA10530@optiplex.tchesoft.com>
	<20110609164408.8370746e.akpm@linux-foundation.org>
	<20110613211153.GA23597@optiplex.tchesoft.com>
X-Mailer: Sylpheed 3.0.2 (GTK+ 2.20.1; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, 13 Jun 2011 18:11:55 -0300 Rafael Aquini wrote:

> Howdy Andrew,
>
> Sorry for the late reply.
>
> On Thu, Jun 09, 2011 at 04:44:08PM -0700, Andrew Morton wrote:
> > On Thu, 2 Jun 2011 23:55:57 -0300
> > Rafael Aquini wrote:
> >
> > > When 1GB hugepages are allocated on a system, free(1) reports
> > > less available memory than what is really installed in the box.
> > > Also, if the total size of hugepages allocated on a system is
> > > over half of the total memory size, CommitLimit becomes
> > > a negative number.
> > >
> > > The problem is that gigantic hugepages (order > MAX_ORDER)
> > > can only be allocated at boot with bootmem, thus their frames
> > > are not accounted to 'totalram_pages'. However, they are
> > > accounted to hugetlb_total_pages().
> > >
> > > What turns CommitLimit into a negative number is this
> > > calculation, in fs/proc/meminfo.c:
> > >
> > >         allowed = ((totalram_pages - hugetlb_total_pages())
> > >                 * sysctl_overcommit_ratio / 100) + total_swap_pages;
> > >
> > > A similar calculation occurs in __vm_enough_memory() in mm/mmap.c.
> > >
> > > Also, every VM statistic which depends on 'totalram_pages' will render
> > > confusing values, as if the system were 'missing' some part of its memory.
> >
> > Is this bug serious enough to justify backporting the fix into -stable
> > kernels?
>
> Despite not having tested it, I can think of the following scenario as
> troublesome: gigantic hugepages are allocated and
> sysctl_overcommit_memory == OVERCOMMIT_NEVER. In such a situation,
> __vm_enough_memory() goes through the mentioned 'allowed' calculation and
> might end up mistakenly returning -ENOMEM, thus forcing the system to start
> reclaiming pages earlier than usual, which could have a detrimental impact
> on overall system performance, depending on the workload.
>
> Besides the aforementioned scenario, I can only think of this causing
> annoyances with memory reports from /proc/meminfo and free(1).
>

hm, OK, thanks.  That sounds a bit thin, but the patch is really simple
so I stuck the cc:stable onto its changelog.
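
For reference, a minimal userspace sketch of the underflow discussed above.
This is not the kernel code itself: the box size, hugepage count, and
overcommit ratio below are made-up example numbers, and the local variable
names merely echo the kernel symbols (totalram_pages, hugetlb_total_pages(),
sysctl_overcommit_ratio, total_swap_pages).

#include <stdio.h>

int main(void)
{
	/*
	 * Made-up example: a 4GB box with 4KB pages, 3GB of which is
	 * reserved at boot as 1GB gigantic hugepages.
	 */
	const unsigned long ram_pages      = 1048576;  /* 4GB / 4KB            */
	const unsigned long gigantic_pages = 786432;   /* 3GB of 1GB hugepages */
	const unsigned long swap_pages     = 0;
	const int overcommit_ratio         = 50;       /* sysctl_overcommit_ratio */

	/*
	 * Before the fix: bootmem-allocated gigantic hugepages are not
	 * added to totalram_pages, but hugetlb_total_pages() still counts
	 * them, so the subtraction wraps around (unsigned underflow)
	 * whenever hugepages cover more than half of RAM.
	 */
	unsigned long totalram_buggy = ram_pages - gigantic_pages;
	unsigned long allowed_buggy  = (totalram_buggy - gigantic_pages)
					* overcommit_ratio / 100 + swap_pages;

	/*
	 * After the fix: the gigantic hugepage frames are accounted to
	 * totalram_pages as well, so the subtraction stays meaningful.
	 */
	unsigned long totalram_fixed = ram_pages;
	unsigned long allowed_fixed  = (totalram_fixed - gigantic_pages)
					* overcommit_ratio / 100 + swap_pages;

	printf("allowed before fix: %lu pages\n", allowed_buggy);
	printf("allowed after fix:  %lu pages\n", allowed_fixed);
	return 0;
}

Compiled and run, the "before" value is an absurdly large wrapped number,
which is what surfaces as a broken CommitLimit in /proc/meminfo and as
missing memory in free(1); the "after" value is the expected half of the
non-hugepage RAM.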