From: "Huang, Ying" <ying.huang@intel.com>
To: Hugh Dickins <hughd@google.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>, "Huang, Ying" <ying.huang@intel.com>, Minchan Kim <minchan@kernel.org>, Andrew Morton <akpm@linux-foundation.org>, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: swap_cluster_info lockdep splat
Date: Fri, 17 Feb 2017 10:07:15 +0800
Message-ID: <87efyx8t9o.fsf@yhuang-dev.intel.com>
In-Reply-To: <alpine.LSU.2.11.1702161702490.24224@eggly.anvils> (Hugh Dickins's message of "Thu, 16 Feb 2017 17:46:44 -0800")

Hi, Hugh,

Hugh Dickins <hughd@google.com> writes:

> On Thu, 16 Feb 2017, Tim Chen wrote:
>>
>> > I do not understand your zest for putting wrappers around every little
>> > thing, making it all harder to follow than it need be.  Here's the patch
>> > I've been running with (but you have a leak somewhere, and I don't have
>> > time to search out and fix it: please try sustained swapping and swapoff).
>> >
>>
>> Hugh, trying to duplicate your test case.  So you were doing swapping,
>> then swap off, swap on the swap device and restart swapping?
>
> Repeated pair of make -j20 kernel builds in 700M RAM, 1.5G swap on SSD,
> 8 cpus; one of the builds in tmpfs, other in ext4 on loop on tmpfs file;
> sizes tuned for plenty of swapping but no OOMing (it's an ancient 2.6.24
> kernel I build, modern one needing a lot more space with a lot less in use).
>
> How much of that is relevant I don't know: hopefully none of it, it's
> hard to get the tunings right from scratch.  To answer your specific
> question: yes, I'm not doing concurrent swapoffs in this test showing
> the leak, just waiting for each of the pair of builds to complete,
> then tearing down the trees, doing swapoff followed by swapon, and
> starting a new pair of builds.
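[Editor's note: for anyone trying to reproduce this, the cycle Hugh describes could be sketched roughly as the shell function below. The swap device, build-tree paths, and make targets are illustrative placeholders, not Hugh's actual configuration.]

```shell
#!/bin/sh
# Rough sketch of one iteration of the cycle described above.
# SWAPDEV and the two build-tree paths are illustrative guesses.
SWAPDEV=${SWAPDEV:-/dev/sda2}

one_cycle() {
    make -j20 -C /tmp/tmpfs-tree/linux-2.6.24 &   # build #1: tree in tmpfs
    make -j20 -C /mnt/loop-ext4/linux-2.6.24 &    # build #2: ext4 on loop on tmpfs
    wait                                          # let both builds complete
    rm -rf /tmp/tmpfs-tree /mnt/loop-ext4         # tear down the trees
    swapoff "$SWAPDEV" || return 1                # ENOMEM sometimes hits here
    swapon "$SWAPDEV"                             # bring swap back for the next pair
}
```

Per the report, repeating this cycle (re-extracting the source trees each time) leads after 6 or 7 hours to an ENOMEM, either from swapoff itself or from a fork during the builds.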
>
> Sometimes it's the swapoff that fails with ENOMEM, more often it's a
> fork during build that fails with ENOMEM: after 6 or 7 hours of load
> (but timings show it getting slower leading up to that).  /proc/meminfo
> did not give me an immediate clue, Slab didn't look surprising but
> I may not have studied close enough.

Thanks for your information!  Memory newly allocated in the mm-swap
series is allocated via vmalloc; could you find anything special for
vmalloc in /proc/meminfo?

Best Regards,
Huang, Ying

> I quilt-bisected it as far as the mm-swap series, good before, bad
> after, but didn't manage to narrow it down further because of hitting
> a presumably different issue inside the series, where swapoff ENOMEMed
> much sooner (after 25 mins one time, during first iteration the next).
>
> Hugh
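[Editor's note: the vmalloc check being asked for could look like the snippet below; a VmallocUsed figure that keeps growing across swapoff/swapon cycles would point at the vmalloc'ed swap structures leaking. Note that on some kernel versions VmallocUsed is reported as 0 in /proc/meminfo, in which case the root-only /proc/vmallocinfo is the more reliable source.]

```shell
# Snapshot the vmalloc counters from /proc/meminfo; comparing these
# across swapoff/swapon cycles shows whether vmalloc space is leaking.
grep -E '^Vmalloc(Total|Used|Chunk):' /proc/meminfo

# With root, /proc/vmallocinfo lists individual allocations with their
# callers, which can identify who is holding the leaked memory, e.g.:
#   sudo sort -k3 /proc/vmallocinfo | less
```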