From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Rafael J. Wysocki"
To: KOSAKI Motohiro
Cc: "M. Vefa Bicakci", Linux Kernel Mailing List,
	linux-pm@lists.linux-foundation.org, Minchan Kim
Subject: Re: [Bisected Regression in 2.6.35] A full tmpfs filesystem causes hibernation to hang
Date: Thu, 2 Sep 2010 22:24:59 +0200
Message-Id: <201009022224.59313.rjw@sisk.pl>
In-Reply-To: <201009022157.18561.rjw@sisk.pl>
References: <20100901093219.9744.A69D9226@jp.fujitsu.com>
	<20100902091010.D050.A69D9226@jp.fujitsu.com>
	<201009022157.18561.rjw@sisk.pl>

On Thursday, September 02, 2010, Rafael J. Wysocki wrote:
> On Thursday, September 02, 2010, KOSAKI Motohiro wrote:
> > > On Wednesday, September 01, 2010, KOSAKI Motohiro wrote:
> > > > > === 8< ===
> > > > > PM: Marking nosave pages: ...0009f000 - ...000100000
> > > > > PM: basic memory bitmaps created
> > > > > PM: Syncing filesystems ... done
> > > > > Freezing user space processes ... (elapsed 0.01 seconds) done.
> > > > > Freezing remaining freezable tasks ... (elapsed 0.01 seconds) done.
> > > > > PM: Preallocating image memory...
> > > > > shrink_all_memory start
> > > > > PM: shrink memory: pass=1, req:310171 reclaimed:15492 free:360936
> > > > > PM: shrink memory: pass=2, req:294679 reclaimed:28864 free:373981
> > > > > PM: shrink memory: pass=3, req:265815 reclaimed:60311 free:405374
> > > > > PM: shrink memory: pass=4, req:205504 reclaimed:97870 free:443024
> > > > > PM: shrink memory: pass=5, req:107634 reclaimed:146948 free:492141
> > > > > shrink_all_memory: req:107634 reclaimed:146948 free:492141
> > > > > PM: preallocate_image_highmem 556658 278329
> > > > > PM: preallocate_image_memory 103139 103139
> > > > > PM: preallocate_highmem_fraction 183908 556658 760831 -> 183908
> > > > > === >8 ===
> > > >
> > > > Rafael, this log means hibernate_preallocate_memory() has a bug.
> > >
> > > Well, it works as designed ...
> > >
> > > > It allocates memory in the following order:
> > > > 1. preallocate_image_highmem() (i.e. __GFP_HIGHMEM)
> > > > 2. preallocate_image_memory() (i.e. GFP_KERNEL)
> > > > 3. preallocate_highmem_fraction() (i.e. __GFP_HIGHMEM)
> > > > 4. preallocate_image_memory() (i.e. GFP_KERNEL)
> > > >
> > > > But please imagine the following scenario (Vefa's scenario):
> > > > - the system has 3GB of memory: 1GB is normal, 2GB is highmem
> > > > - all normal memory is free
> > > > - 1.5GB of highmem is used for tmpfs; the remaining 500MB is free
> > >
> > > Indeed, that's a memory allocation pattern I didn't anticipate.
> > >
> > > > In that case, hibernate_preallocate_memory() works as follows:
> > > >
> > > > 1. call preallocate_image_highmem(1GB)
> > > > 2. call preallocate_image_memory(500M) - total 1.5GB allocated
> > > > 3. call preallocate_highmem_fraction(660M) - total 2.2GB allocated
> > > >
> > > > Then all of the normal zone's memory is exhausted, so the next
> > > > preallocate_image_memory() hits OOM, and oom_killer_disabled turns
> > > > that into an infinite loop.
> > > > (The careless handling of oom_killer_disabled is a vmscan bug;
> > > > I'll fix it soon.)
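To make the failure mode above concrete, here is a small userspace model of
the four preallocation steps. It is a sketch under the scenario's
assumptions, not the actual kernel/power/snapshot.c code: __GFP_HIGHMEM
requests are modeled as falling back to the normal zone (as the kernel's
zonelists allow), GFP_KERNEL requests as normal-zone only, and the step
sizes quoted in the thread are approximate.

#include <stdio.h>

static long normal_free  = 1024;        /* 1 GB normal zone, all free (MB)  */
static long highmem_free = 2048 - 1536; /* 2 GB highmem, 1.5 GB tmpfs (MB)  */

/* Model of a __GFP_HIGHMEM request: take highmem first, then fall back
 * to the normal zone. */
static long alloc_highmem(long want)
{
	long from_high = want < highmem_free ? want : highmem_free;
	long from_norm = want - from_high;

	if (from_norm > normal_free)
		from_norm = normal_free;
	highmem_free -= from_high;
	normal_free -= from_norm;
	return from_high + from_norm;
}

/* Model of a GFP_KERNEL request: the normal zone only. */
static long alloc_normal(long want)
{
	long got = want < normal_free ? want : normal_free;

	normal_free -= got;
	return got;
}

int main(void)
{
	printf("1. preallocate_image_highmem(1024)   -> %4ld MB\n",
	       alloc_highmem(1024));
	printf("2. preallocate_image_memory(500)     -> %4ld MB\n",
	       alloc_normal(500));
	printf("3. preallocate_highmem_fraction(660) -> %4ld MB\n",
	       alloc_highmem(660));
	printf("   normal zone now has %ld MB free\n", normal_free);
	/*
	 * Step 4 is another GFP_KERNEL request.  In the kernel it can
	 * neither be satisfied nor helped by reclaim (what remains is
	 * tmpfs-pinned highmem), and with oom_killer_disabled set it
	 * cannot invoke the OOM killer either, so the allocator loops
	 * forever.
	 */
	return 0;
}

Step 1 drains the 512 MB of free highmem and takes a further 512 MB from
the normal zone; by the end of step 3 the normal zone is empty, which is
exactly the state in which step 4 can make no progress.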
> > > So, it looks like the problem will go away if we check whether there
> > > are any normal pages to allocate from before calling the last
> > > preallocate_image_memory()?
> > >
> > > Like in the patch below, perhaps?
> >
> > Looks fine, but I have one question: hibernate_preallocate_memory() calls
> > preallocate_image_memory() twice. Why do you only care about the latter
> > call? The former one seems to carry a similar risk.
>
> The first one is mandatory, i.e. if we can't allocate the requested number
> of pages at this point, we fail the entire hibernation. In that case the
> performance hit doesn't matter.

IOW, your patch at http://lkml.org/lkml/2010/9/2/262 is still necessary to
protect against the infinite loop in that case.

Thanks,
Rafael
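Rafael's proposed check, expressed against the toy model above, amounts to
bounding the final normal-zone request by what that zone can actually
supply. This is a hedged sketch only: count_normal_free() is a hypothetical
helper standing in for the kernel's per-zone free-page accounting, and the
actual patch against kernel/power/snapshot.c differs in detail.

/* Hypothetical guard, reusing the toy model's state from the sketch
 * above.  count_normal_free() is not a real kernel API. */
static long count_normal_free(void)
{
	return normal_free;
}

static long preallocate_normal_guarded(long want)
{
	long avail = count_normal_free();

	/*
	 * Never ask the normal zone for more than it can supply: with
	 * oom_killer_disabled set, an unsatisfiable GFP_KERNEL request
	 * would otherwise spin in the allocator indefinitely.
	 */
	return alloc_normal(want < avail ? want : avail);
}

KOSAKI's vmscan patch referenced above addresses the complementary half of
the problem: even with the preallocation bounded, the page allocator should
not be able to loop forever while the OOM killer is disabled.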