From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1756280Ab2GEAq0 (ORCPT ); Wed, 4 Jul 2012 20:46:26 -0400
Received: from zene.cmpxchg.org ([85.214.230.12]:53421 "EHLO zene.cmpxchg.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1756153Ab2GEApr (ORCPT ); Wed, 4 Jul 2012 20:45:47 -0400
From: Johannes Weiner 
To: Andrew Morton 
Cc: KAMEZAWA Hiroyuki , Michal Hocko , Hugh Dickins , David Rientjes , linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [patch 10/11] mm: memcg: only check swap cache pages for repeated charging
Date: Thu, 5 Jul 2012 02:45:02 +0200
Message-Id: <1341449103-1986-11-git-send-email-hannes@cmpxchg.org>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1341449103-1986-1-git-send-email-hannes@cmpxchg.org>
References: <1341449103-1986-1-git-send-email-hannes@cmpxchg.org>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Only anon and shmem pages in the swap cache are attempted to be charged
multiple times, from every swap pte fault or from shmem_unuse().  No
other pages require checking PageCgroupUsed().

Charging pages in the swap cache is also serialized by the page lock,
and since both the try_charge and commit_charge are called under the
same page lock section, the PageCgroupUsed() check might as well happen
before the counter charging, let alone reclaim.

Signed-off-by: Johannes Weiner 
---
 mm/memcontrol.c | 17 ++++++++++++-----
 1 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index a8bf86a..d3701cd 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2471,11 +2471,7 @@ static void __mem_cgroup_commit_charge(struct mem_cgroup *memcg,
 	bool anon;
 
 	lock_page_cgroup(pc);
-	if (unlikely(PageCgroupUsed(pc))) {
-		unlock_page_cgroup(pc);
-		__mem_cgroup_cancel_charge(memcg, nr_pages);
-		return;
-	}
+	VM_BUG_ON(PageCgroupUsed(pc));
 	/*
 	 * we don't need page_cgroup_lock about tail pages, becase they are not
 	 * accessed by any other context at this point.
@@ -2740,8 +2736,19 @@ static int __mem_cgroup_try_charge_swapin(struct mm_struct *mm,
 					  struct mem_cgroup **memcgp)
 {
 	struct mem_cgroup *memcg;
+	struct page_cgroup *pc;
 	int ret;
 
+	pc = lookup_page_cgroup(page);
+	/*
+	 * Every swap fault against a single page tries to charge the
+	 * page, bail as early as possible.  shmem_unuse() encounters
+	 * already charged pages, too.  The USED bit is protected by
+	 * the page lock, which serializes swap cache removal, which
+	 * in turn serializes uncharging.
+	 */
+	if (PageCgroupUsed(pc))
+		return 0;
 	if (!do_swap_account)
 		goto charge_cur_mm;
 	/*
-- 
1.7.7.6
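
To make the changelog's locking argument concrete, here is a minimal
stand-alone C sketch.  It is not kernel code: struct page_model,
charge_swapin_old()/charge_swapin_new() and their fields are invented
names, the pthread mutex merely stands in for the page lock, and the
plain counter stands in for the memcg res_counter.  What it models is
the claim above: try_charge and commit_charge run inside one page lock
section, and uncharging clears the USED bit under that same lock, so
checking the bit up front is enough to catch repeated charging.

/*
 * Toy model, NOT kernel code: the names below are made up to
 * illustrate the changelog's locking argument.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct page_model {
	pthread_mutex_t page_lock;	/* stands in for lock_page()/unlock_page() */
	bool used;			/* stands in for PageCgroupUsed(pc) */
	long charged;			/* stands in for the memcg counter */
};

/* Old scheme: charge the counter first, then detect the duplicate and roll back. */
static void charge_swapin_old(struct page_model *p)
{
	p->charged++;			/* counter charge, possibly with reclaim */
	if (p->used) {			/* commit finds the page already used */
		p->charged--;		/* cancel the charge again */
		return;
	}
	p->used = true;
}

/* New scheme: the page lock already serializes chargers, so check early. */
static void charge_swapin_new(struct page_model *p)
{
	if (p->used)			/* early USED check */
		return;			/* bail before charging or reclaiming */
	p->charged++;
	p->used = true;
}

int main(void)
{
	struct page_model a = { .page_lock = PTHREAD_MUTEX_INITIALIZER };
	struct page_model b = { .page_lock = PTHREAD_MUTEX_INITIALIZER };

	/* Two faults against the same swapped-out page, each under the page lock. */
	for (int i = 0; i < 2; i++) {
		pthread_mutex_lock(&a.page_lock);
		charge_swapin_old(&a);
		pthread_mutex_unlock(&a.page_lock);

		pthread_mutex_lock(&b.page_lock);
		charge_swapin_new(&b);
		pthread_mutex_unlock(&b.page_lock);
	}
	printf("old: charged=%ld used=%d\n", a.charged, a.used);
	printf("new: charged=%ld used=%d\n", b.charged, b.used);
	return 0;
}

Built with something like cc -Wall -pthread sketch.c, both models end up
charged exactly once after two faults; the new scheme just skips the
pointless charge-and-cancel round trip (and any reclaim it might
trigger) on the second fault, which is what the early PageCgroupUsed()
check in __mem_cgroup_try_charge_swapin() buys.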