From: Mel Gorman <mgorman@suse.de>
To: Simon Jeons
Cc: Peter Zijlstra, Andrea Arcangeli, Ingo Molnar, Rik van Riel,
    Johannes Weiner, Hugh Dickins, Thomas Gleixner, Paul Turner,
    Hillf Danton, David Rientjes, Lee Schermerhorn, Alex Shi,
    Srikar Dronamraju, Aneesh Kumar, Linus Torvalds, Andrew Morton,
    Linux-MM, LKML
Subject: Re: [PATCH 22/49] mm: mempolicy: Add MPOL_MF_LAZY
Date: Mon, 7 Jan 2013 15:14:30 +0000
Message-ID: <20130107151430.GL3885@suse.de>
References: <1354875832-9700-1-git-send-email-mgorman@suse.de>
    <1354875832-9700-23-git-send-email-mgorman@suse.de>
    <1357363097.5273.12.camel@kernel.cn.ibm.com>
In-Reply-To: <1357363097.5273.12.camel@kernel.cn.ibm.com>

On Fri, Jan 04, 2013 at 11:18:17PM -0600, Simon Jeons wrote:
> > +static int
> > +change_prot_numa_range(struct mm_struct *mm, struct vm_area_struct *vma,
> > +		unsigned long address)
> > +{
> > +	pgd_t *pgd;
> > +	pud_t *pud;
> > +	pmd_t *pmd;
> > +	pte_t *pte, *_pte;
> > +	struct page *page;
> > +	unsigned long _address, end;
> > +	spinlock_t *ptl;
> > +	int ret = 0;
> > +
> > +	VM_BUG_ON(address & ~PAGE_MASK);
> > +
> > +	pgd = pgd_offset(mm, address);
> > +	if (!pgd_present(*pgd))
> > +		goto out;
> > +
> > +	pud = pud_offset(pgd, address);
> > +	if (!pud_present(*pud))
> > +		goto out;
> > +
> > +	pmd = pmd_offset(pud, address);
> > +	if (pmd_none(*pmd))
> > +		goto out;
> > +
> > +	if (pmd_trans_huge_lock(pmd, vma) == 1) {
> > +		int page_nid;
> > +		ret = HPAGE_PMD_NR;
> > +
> > +		VM_BUG_ON(address & ~HPAGE_PMD_MASK);
> > +
> > +		if (pmd_numa(*pmd)) {
> > +			spin_unlock(&mm->page_table_lock);
> > +			goto out;
> > +		}
> > +
> > +		page = pmd_page(*pmd);
> > +
> > +		/* only check non-shared pages */
> > +		if (page_mapcount(page) != 1) {
> > +			spin_unlock(&mm->page_table_lock);
> > +			goto out;
> > +		}
> > +
> > +		page_nid = page_to_nid(page);
> > +
> > +		if (pmd_numa(*pmd)) {
> > +			spin_unlock(&mm->page_table_lock);
> > +			goto out;
> > +		}
> > +
> 
> Hi Mel,
> 
> Since pmd_trans_huge_lock() already holds &mm->page_table_lock, why
> check pmd_numa(*pmd) again?
> 

It looks like an oversight. I've added a TODO item to clean it up when I
revisit NUMA balancing some time soon.

Thanks.

-- 
Mel Gorman
SUSE Labs
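
For readers following the thread, a minimal sketch of what the cleanup Mel
describes might look like: the second pmd_numa(*pmd) check is simply
dropped, since pmd_trans_huge_lock() returning 1 means mm->page_table_lock
is held for the whole branch, so *pmd cannot change between the two checks.
This is an illustrative reconstruction against the quoted patch, not the
actual commit; the declarations and the rest of the function are assumed to
be as in the quote above.

	if (pmd_trans_huge_lock(pmd, vma) == 1) {
		int page_nid;
		ret = HPAGE_PMD_NR;

		VM_BUG_ON(address & ~HPAGE_PMD_MASK);

		/*
		 * page_table_lock is held for this entire branch, so
		 * *pmd is stable and one pmd_numa() check suffices.
		 */
		if (pmd_numa(*pmd)) {
			spin_unlock(&mm->page_table_lock);
			goto out;
		}

		page = pmd_page(*pmd);

		/* only check non-shared pages */
		if (page_mapcount(page) != 1) {
			spin_unlock(&mm->page_table_lock);
			goto out;
		}

		page_nid = page_to_nid(page);

		/*
		 * ... the remainder of the branch continues as in the
		 * quoted patch, with the duplicated pmd_numa(*pmd)
		 * check removed ...
		 */
	}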