From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"hannes@cmpxchg.org" <hannes@cmpxchg.org>,
"balbir@linux.vnet.ibm.com" <balbir@linux.vnet.ibm.com>
Subject: Re: [BUGFIX][PATCH 4/4] memcg: fix khugepaged should skip busy memcg
Date: Fri, 28 Jan 2011 17:30:36 +0900 [thread overview]
Message-ID: <20110128173036.9719292c.kamezawa.hiroyu@jp.fujitsu.com> (raw)
In-Reply-To: <20110128172022.8f16e862.nishimura@mxp.nes.nec.co.jp>
On Fri, 28 Jan 2011 17:20:22 +0900
Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp> wrote:
> > + /*
> > + * At collapsing, khugepaged charges HPAGE_SIZE. When it unmap
> > + * used ptes, the charge will be decreased.
> > + *
> > + * This requirement of 'extra charge' at collapsing seems redundant
> > + * it's safe way for now. For example, at replacing a chunk of page
> > + * to be hugepage, khuepaged skips pte_none() entry, which is not
> > + * which is not charged. But we should do charge under spinlocks as
> > + * pte_lock, we need precharge. Check status before doing heavy
> > + * jobs and give khugepaged chance to retire early.
> > + */
> > + if (mem_cgroup_check_margin(mem) >= HPAGE_SIZE)
> I'm sorry if I misunderstand, shouldn't it be "<" ?
Yes. This bug makes khugepaged never work against a memcg, so the
system never hangs ;(
Thank you.
==
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
When khugepaged runs against a small memory cgroup, we see khugepaged
cause soft lockups, or processes running under the memcg hang.
This happens because khugepaged scans all pmds of a process under a
busy/small memory cgroup and tries to allocate an HPAGE_SIZE resource
for each collapse.
This work is done under mmap_sem and can trigger memory reclaim
repeatedly, which easily raises khugepaged's cpu usage and increases
the latency of the scanned process. Moreover, TransHuge pages that are
already working may be split by the reclaim that khugepaged itself
causes.
This patch adds a hint telling khugepaged whether a process is
under a memory cgroup that has sufficient memory. If the memcg
seems busy, the process is skipped.
How to test:
# mount -t cgroup -o memory cgroup /cgroup/memory
# mkdir /cgroup/memory/A
# echo 200M (or some other small limit) > /cgroup/memory/A/memory.limit_in_bytes
# echo 0 > /cgroup/memory/A/tasks
# make -j 8 kernel
Changelog:
- fixed condition check bug.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
include/linux/memcontrol.h | 7 +++++
mm/huge_memory.c | 10 +++++++-
mm/memcontrol.c | 53 +++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 69 insertions(+), 1 deletion(-)
Index: mmotm-0125/mm/memcontrol.c
===================================================================
--- mmotm-0125.orig/mm/memcontrol.c
+++ mmotm-0125/mm/memcontrol.c
@@ -255,6 +255,9 @@ struct mem_cgroup {
/* For oom notifier event fd */
struct list_head oom_notify;
+ /* For transparent hugepage daemon */
+ unsigned long long recent_failcnt;
+
/*
* Should we move charges of a task when a task is moved into this
* mem_cgroup ? And what type of charges should we move ?
@@ -2211,6 +2214,56 @@ void mem_cgroup_split_huge_fixup(struct
tail_pc->flags = head_pc->flags & ~PCGF_NOCOPY_AT_SPLIT;
move_unlock_page_cgroup(head_pc, &flags);
}
+
+bool mem_cgroup_worth_try_hugepage_scan(struct mm_struct *mm)
+{
+ struct mem_cgroup *mem;
+ bool ret = true;
+ u64 recent_charge_fail;
+
+ if (mem_cgroup_disabled())
+ return true;
+
+ mem = try_get_mem_cgroup_from_mm(mm);
+
+ if (!mem)
+ return true;
+
+ if (mem_cgroup_is_root(mem))
+ goto out;
+
+ /*
+ * At collapse time, khugepaged charges HPAGE_SIZE up front. When it
+ * unmaps the used ptes, the charge is decreased accordingly.
+ *
+ * This 'extra charge' at collapse may seem redundant, but it is the
+ * safe way for now. For example, when replacing a chunk of pages with
+ * a hugepage, khugepaged skips pte_none() entries, which are not
+ * charged. But since charging cannot be done under spinlocks such as
+ * pte_lock, we need the precharge. Check the status before doing the
+ * heavy work and give khugepaged a chance to retire early.
+ */
+ if (mem_cgroup_check_margin(mem) < HPAGE_SIZE)
+ ret = false;
+
+ /*
+ * This is an easy check. If someone other than khugepaged does
+ * hit limit, khugepaged should avoid more pressure.
+ */
+ recent_charge_fail = res_counter_read_u64(&mem->res, RES_FAILCNT);
+ if (ret
+ && mem->recent_failcnt
+ && recent_charge_fail > mem->recent_failcnt) {
+ ret = false;
+ }
+ /* +1 because this thread itself may fail a charge. */
+ if (recent_charge_fail)
+ mem->recent_failcnt = recent_charge_fail + 1;
+out:
+ css_put(&mem->css);
+ return ret;
+}
+
#endif
/**
Index: mmotm-0125/mm/huge_memory.c
===================================================================
--- mmotm-0125.orig/mm/huge_memory.c
+++ mmotm-0125/mm/huge_memory.c
@@ -2011,8 +2011,10 @@ static unsigned int khugepaged_scan_mm_s
down_read(&mm->mmap_sem);
if (unlikely(khugepaged_test_exit(mm)))
vma = NULL;
- else
+ else if (mem_cgroup_worth_try_hugepage_scan(mm))
vma = find_vma(mm, khugepaged_scan.address);
+ else
+ vma = NULL;
progress++;
for (; vma; vma = vma->vm_next) {
@@ -2024,6 +2026,12 @@ static unsigned int khugepaged_scan_mm_s
break;
}
+ if (unlikely(!mem_cgroup_worth_try_hugepage_scan(mm))) {
+ progress++;
+ vma = NULL; /* try next mm */
+ break;
+ }
+
if ((!(vma->vm_flags & VM_HUGEPAGE) &&
!khugepaged_always()) ||
(vma->vm_flags & VM_NOHUGEPAGE)) {
Index: mmotm-0125/include/linux/memcontrol.h
===================================================================
--- mmotm-0125.orig/include/linux/memcontrol.h
+++ mmotm-0125/include/linux/memcontrol.h
@@ -148,6 +148,7 @@ u64 mem_cgroup_get_limit(struct mem_cgro
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
void mem_cgroup_split_huge_fixup(struct page *head, struct page *tail);
+bool mem_cgroup_worth_try_hugepage_scan(struct mm_struct *mm);
#endif
#else /* CONFIG_CGROUP_MEM_RES_CTLR */
@@ -342,6 +343,12 @@ u64 mem_cgroup_get_limit(struct mem_cgro
static inline void mem_cgroup_split_huge_fixup(struct page *head,
struct page *tail)
{
+
+}
+
+static inline bool mem_cgroup_worth_try_hugepage_scan(struct mm_struct *mm)
+{
+ return true;
}
#endif /* CONFIG_CGROUP_MEM_CONT */