Date: Fri, 8 Feb 2013 21:25:51 +0100
From: Andrea Arcangeli
To: Michel Lespinasse
Cc: Rik van Riel, Mel Gorman, Hugh Dickins, Andrew Morton,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 3/3] mm: accelerate munlock() treatment of THP pages
Message-ID: <20130208202550.GB9817@redhat.com>
References: <1359962232-20811-1-git-send-email-walken@google.com>
 <1359962232-20811-4-git-send-email-walken@google.com>
In-Reply-To: <1359962232-20811-4-git-send-email-walken@google.com>

Hi Michel,

On Sun, Feb 03, 2013 at 11:17:12PM -0800, Michel Lespinasse wrote:
> munlock_vma_pages_range() was always incrementing addresses by PAGE_SIZE
> at a time. When munlocking THP pages (or the huge zero page), this resulted
> in taking the mm->page_table_lock 512 times in a row.
>
> We can do better by making use of the page_mask returned by follow_page_mask
> (for the huge zero page case), or the size of the page munlock_vma_page()
> operated on (for the true THP page case).
>
> Note - I am sending this as RFC only for now as I can't currently put
> my finger on what if anything prevents split_huge_page() from operating
> concurrently on the same page as munlock_vma_page(), which would mess
> up our NR_MLOCK statistics. Is this a latent bug or is there a subtle
> point I missed here ?

I agree something looks fishy: neither the mmap_sem held for writing nor
the page lock can stop split_huge_page_refcount.

Now the mlock side was intended to be safe because mlock_vma_page is
called within follow_page while holding the PT lock or the
page_table_lock (so split_huge_page_refcount has to wait for it to be
released before it can run). See the assert_spin_locked in
follow_trans_huge_pmd and the pte_unmap_unlock after mlock_vma_page
returns.

The problem is that the mlock side depended on the TestSetPageMlocked
below always being repeated on the head page (follow_trans_huge_pmd will
always pass the head page to mlock_vma_page).

void mlock_vma_page(struct page *page)
{
	BUG_ON(!PageLocked(page));

	if (!TestSetPageMlocked(page)) {

But what if the head page was split in between two different follow_page
calls? The second call wouldn't take the pmd_trans_huge path anymore and
the stats would be increased too much. The problem on the munlock side is
even more apparent, as you pointed out above, but I now think the mlock
side was problematic too.

The good thing is that your acceleration code for the mlock side should
have fixed the mlock race already: never risking to call mlock_vma_page
twice on the head page is not an "acceleration" only, it should also be a
natural fix for the race.

To fix the munlock side, which is still present, I think one way would be
to do the mlock and munlock within get_user_pages, so they run in the
same place protected by the PT lock or the page_table_lock. There are a
few things that stop split_huge_page_refcount: the page_table_lock, the
lru_lock, the compound_lock and the anon_vma lock.
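To make that first option a bit more concrete, something along these
lines next to the existing FOLL_MLOCK handling in follow_page_mask could
work; FOLL_MUNLOCK is made up here purely for illustration (it is not an
existing gup flag) and the snippet is untested:

	if ((flags & FOLL_MUNLOCK) && (vma->vm_flags & VM_LOCKED)) {
		/*
		 * The pte lock / page_table_lock is still held here, so
		 * split_huge_page_refcount cannot run and
		 * hpage_nr_pages() cannot change under us.  The heavier
		 * part of munlock_vma_page (isolate_lru_page /
		 * try_to_munlock) would still have to run after the PT
		 * lock is dropped, because try_to_munlock takes
		 * sleeping locks.
		 */
		if (TestClearPageMlocked(page))
			mod_zone_page_state(page_zone(page), NR_MLOCK,
					    -hpage_nr_pages(page));
	}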
So if we keep calling munlock_vma_page outside of get_user_pages (and so
outside of the page_table_lock), the other way would be to use the
compound_lock (a rough, untested sketch is appended below).

NOTE: this is a purely aesthetic issue in /proc/meminfo, there's nothing
functional (at least in the kernel) connected to it, so no panic :).

Thanks,
Andrea
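A minimal sketch of the compound_lock idea in munlock_vma_page, assuming
the page passed in is the THP head page and using the existing
compound_lock_irqsave/compound_unlock_irqrestore helpers; this is only an
illustration, not a tested patch:

void munlock_vma_page(struct page *page)
{
	unsigned int nr_pages;
	unsigned long flags;

	BUG_ON(!PageLocked(page));

	/*
	 * The compound lock is one of the locks that stop
	 * split_huge_page_refcount, so hpage_nr_pages() cannot change
	 * between reading it and updating NR_MLOCK.  Take it irq-safe
	 * because the compound lock is also taken from contexts that
	 * run with irqs disabled (e.g. put_page on tail pages).
	 */
	flags = compound_lock_irqsave(page);
	nr_pages = hpage_nr_pages(page);
	if (TestClearPageMlocked(page))
		mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
	compound_unlock_irqrestore(page, flags);

	/*
	 * If the Mlocked bit was cleared, the existing
	 * isolate_lru_page / try_to_munlock work would follow here,
	 * outside the compound lock.
	 */
}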