From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Johannes Weiner,
	"Kirill A. Shutemov", Michal Hocko, Vladimir Davydov,
	Andrew Morton, Linus Torvalds
Subject: [PATCH 4.9 61/72] mm: rmap: fix huge file mmap accounting in the memcg stats
Date: Thu, 6 Apr 2017 10:38:48 +0200
Message-Id: <20170406083622.621915514@linuxfoundation.org>
In-Reply-To: <20170406083619.775985942@linuxfoundation.org>
References: <20170406083619.775985942@linuxfoundation.org>
X-Mailer: git-send-email 2.12.2
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

4.9-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Johannes Weiner

commit 553af430e7c981e6e8fa5007c5b7b5773acc63dd upstream.

Huge pages are accounted as single units in the memcg's "file_mapped"
counter.  Account the correct number of base pages, like we do in the
corresponding node counter.

Link: http://lkml.kernel.org/r/20170322005111.3156-1-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner
Reviewed-by: Kirill A. Shutemov
Acked-by: Michal Hocko
Cc: Vladimir Davydov
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman

---
 include/linux/memcontrol.h |    6 ++++++
 mm/rmap.c                  |    4 ++--
 2 files changed, 8 insertions(+), 2 deletions(-)

--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -739,6 +739,12 @@ static inline bool mem_cgroup_oom_synchr
 	return false;
 }
 
+static inline void mem_cgroup_update_page_stat(struct page *page,
+					       enum mem_cgroup_stat_index idx,
+					       int nr)
+{
+}
+
 static inline void mem_cgroup_inc_page_stat(struct page *page,
 					    enum mem_cgroup_stat_index idx)
 {
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1295,7 +1295,7 @@ void page_add_file_rmap(struct page *pag
 		goto out;
 	}
 	__mod_node_page_state(page_pgdat(page), NR_FILE_MAPPED, nr);
-	mem_cgroup_inc_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED);
+	mem_cgroup_update_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED, nr);
 out:
 	unlock_page_memcg(page);
 }
@@ -1335,7 +1335,7 @@ static void page_remove_file_rmap(struct
 	 * pte lock(a spinlock) is held, which implies preemption disabled.
 	 */
 	__mod_node_page_state(page_pgdat(page), NR_FILE_MAPPED, -nr);
-	mem_cgroup_dec_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED);
+	mem_cgroup_update_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED, -nr);
 
 	if (unlikely(PageMlocked(page)))
 		clear_page_mlock(page);
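
As a quick illustration of what the one-argument change buys, here is a
minimal userspace sketch (not kernel code) modeling the two counters the
commit message talks about.  The HPAGE_PMD_NR value of 512 assumes x86-64
geometry (2MB huge pages over 4KB base pages), and the two globals are
illustrative stand-ins for the node's NR_FILE_MAPPED and the memcg's
MEM_CGROUP_STAT_FILE_MAPPED; everything else here is made up for the
example.

#include <stdio.h>

#define HPAGE_PMD_NR 512	/* base pages per PMD-mapped huge page (assumed) */

static long node_file_mapped;	/* stand-in for node NR_FILE_MAPPED, in base pages */
static long memcg_file_mapped;	/* stand-in for memcg MEM_CGROUP_STAT_FILE_MAPPED */

/* Before the fix: rmap bumped the memcg stat once per mapping event. */
static void map_thp_before(void)
{
	node_file_mapped  += HPAGE_PMD_NR;	/* __mod_node_page_state(..., nr) */
	memcg_file_mapped += 1;			/* mem_cgroup_inc_page_stat() */
}

/* After the fix: both stats account the same number of base pages. */
static void map_thp_after(void)
{
	node_file_mapped  += HPAGE_PMD_NR;
	memcg_file_mapped += HPAGE_PMD_NR;	/* mem_cgroup_update_page_stat(..., nr) */
}

int main(void)
{
	map_thp_before();
	printf("before fix: node=%ld memcg=%ld (off by %ld base pages)\n",
	       node_file_mapped, memcg_file_mapped,
	       node_file_mapped - memcg_file_mapped);

	node_file_mapped = memcg_file_mapped = 0;
	map_thp_after();
	printf("after fix:  node=%ld memcg=%ld (consistent)\n",
	       node_file_mapped, memcg_file_mapped);
	return 0;
}

Compiled and run, the "before" pair diverges by 511 base pages per huge
page mapped; that is the drift an observer would have seen between the
node-level NR_FILE_MAPPED and the file-mapped figure memcg exports through
memory.stat whenever huge file/shmem mappings were added or torn down.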