From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>,
	Hugh Dickins <hughd@google.com>,
	"Kirill A. Shutemov" <kirill@shutemov.name>,
	Jerome Marchand <jmarchan@redhat.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Naoya Horiguchi <nao.horiguchi@gmail.com>
Subject: [PATCH -mm v6 09/13] memcg: cleanup preparation for page table walk
Date: Fri,  1 Aug 2014 15:20:45 -0400	[thread overview]
Message-ID: <1406920849-25908-10-git-send-email-n-horiguchi@ah.jp.nec.com> (raw)
In-Reply-To: <1406920849-25908-1-git-send-email-n-horiguchi@ah.jp.nec.com>

pagewalk.c can now handle the vma by itself, so we no longer need to pass
the vma via walk->private. Also, both mem_cgroup_count_precharge() and
mem_cgroup_move_charge() currently run their own for-each-vma loops, but
that iteration is now done in pagewalk.c, so let's clean them up.
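
For reference, the pattern this series moves to looks roughly like the
sketch below. It is condensed from the hunks that follow; the function
names and the callback body are illustrative, not code from memcontrol.c.

/* Walk callback: the walker now supplies the vma via walk->vma. */
static int example_pmd_entry(pmd_t *pmd, unsigned long addr,
			     unsigned long end, struct mm_walk *walk)
{
	struct vm_area_struct *vma = walk->vma;

	pr_debug("pmd range %#lx-%#lx in vma %#lx-%#lx\n",
		 addr, end, vma->vm_start, vma->vm_end);
	return 0;
}

static void example_walk(struct mm_struct *mm)
{
	struct mm_walk walk = {
		.pmd_entry	= example_pmd_entry,
		.mm		= mm,
	};

	down_read(&mm->mmap_sem);
	/* One call covers every vma; no per-vma for loop is needed. */
	walk_page_range(0, ~0UL, &walk);
	up_read(&mm->mmap_sem);
}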

ChangeLog v4:
- use walk_page_range() instead of walk_page_vma() with a for loop.

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
---
 mm/memcontrol.c | 49 ++++++++++++++++---------------------------------
 1 file changed, 16 insertions(+), 33 deletions(-)

diff --git mmotm-2014-07-30-15-57.orig/mm/memcontrol.c mmotm-2014-07-30-15-57/mm/memcontrol.c
index dc35886a1c89..e8b44a50ef1a 100644
--- mmotm-2014-07-30-15-57.orig/mm/memcontrol.c
+++ mmotm-2014-07-30-15-57/mm/memcontrol.c
@@ -5876,7 +5876,7 @@ static int mem_cgroup_count_precharge_pte_range(pmd_t *pmd,
 					unsigned long addr, unsigned long end,
 					struct mm_walk *walk)
 {
-	struct vm_area_struct *vma = walk->private;
+	struct vm_area_struct *vma = walk->vma;
 	pte_t *pte;
 	spinlock_t *ptl;
 
@@ -5902,20 +5902,13 @@ static int mem_cgroup_count_precharge_pte_range(pmd_t *pmd,
 static unsigned long mem_cgroup_count_precharge(struct mm_struct *mm)
 {
 	unsigned long precharge;
-	struct vm_area_struct *vma;
 
+	struct mm_walk mem_cgroup_count_precharge_walk = {
+		.pmd_entry = mem_cgroup_count_precharge_pte_range,
+		.mm = mm,
+	};
 	down_read(&mm->mmap_sem);
-	for (vma = mm->mmap; vma; vma = vma->vm_next) {
-		struct mm_walk mem_cgroup_count_precharge_walk = {
-			.pmd_entry = mem_cgroup_count_precharge_pte_range,
-			.mm = mm,
-			.private = vma,
-		};
-		if (is_vm_hugetlb_page(vma))
-			continue;
-		walk_page_range(vma->vm_start, vma->vm_end,
-					&mem_cgroup_count_precharge_walk);
-	}
+	walk_page_range(0, ~0UL, &mem_cgroup_count_precharge_walk);
 	up_read(&mm->mmap_sem);
 
 	precharge = mc.precharge;
@@ -6051,7 +6044,7 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
 				struct mm_walk *walk)
 {
 	int ret = 0;
-	struct vm_area_struct *vma = walk->private;
+	struct vm_area_struct *vma = walk->vma;
 	pte_t *pte;
 	spinlock_t *ptl;
 	enum mc_target_type target_type;
@@ -6151,7 +6144,10 @@ put:			/* get_mctgt_type() gets the page */
 
 static void mem_cgroup_move_charge(struct mm_struct *mm)
 {
-	struct vm_area_struct *vma;
+	struct mm_walk mem_cgroup_move_charge_walk = {
+		.pmd_entry = mem_cgroup_move_charge_pte_range,
+		.mm = mm,
+	};
 
 	lru_add_drain_all();
 retry:
@@ -6167,24 +6163,11 @@ static void mem_cgroup_move_charge(struct mm_struct *mm)
 		cond_resched();
 		goto retry;
 	}
-	for (vma = mm->mmap; vma; vma = vma->vm_next) {
-		int ret;
-		struct mm_walk mem_cgroup_move_charge_walk = {
-			.pmd_entry = mem_cgroup_move_charge_pte_range,
-			.mm = mm,
-			.private = vma,
-		};
-		if (is_vm_hugetlb_page(vma))
-			continue;
-		ret = walk_page_range(vma->vm_start, vma->vm_end,
-						&mem_cgroup_move_charge_walk);
-		if (ret)
-			/*
-			 * means we have consumed all precharges and failed in
-			 * doing additional charge. Just abandon here.
-			 */
-			break;
-	}
+	/*
+	 * When we have consumed all precharges and failed in doing
+	 * additional charge, the page walk just aborts.
+	 */
+	walk_page_range(0, ~0UL, &mem_cgroup_move_charge_walk);
 	up_read(&mm->mmap_sem);
 }
 
-- 
1.9.3

