From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S1753488AbdDKOHy (ORCPT ); Tue, 11 Apr 2017 10:07:54 -0400
Received: from mx2.suse.de ([195.135.220.15]:49209 "EHLO mx2.suse.de"
        rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
        id S1752010AbdDKOGR (ORCPT ); Tue, 11 Apr 2017 10:06:17 -0400
From: Vlastimil Babka 
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, Li Zefan ,
        Michal Hocko , Mel Gorman , David Rientjes , Christoph Lameter ,
        Hugh Dickins , Andrea Arcangeli , Anshuman Khandual ,
        "Kirill A. Shutemov" , Vlastimil Babka 
Subject: [RFC 1/6] mm, page_alloc: fix more premature OOM due to race with cpuset update
Date: Tue, 11 Apr 2017 16:06:04 +0200
Message-Id: <20170411140609.3787-2-vbabka@suse.cz>
X-Mailer: git-send-email 2.12.2
In-Reply-To: <20170411140609.3787-1-vbabka@suse.cz>
References: <20170411140609.3787-1-vbabka@suse.cz>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Commit e47483bca2cc ("mm, page_alloc: fix premature OOM when racing with
cpuset mems update") has fixed known recent regressions found by LTP's
cpuset01 testcase. I have however found that by modifying the testcase
to use per-vma mempolicies via mbind(2) instead of per-task mempolicies
via set_mempolicy(2), the premature OOM still happens and the issue is
much older.

The root of the problem is that the cpuset's mems_allowed and the
mempolicy's nodemask can temporarily have no intersection, so
get_page_from_freelist() cannot find any usable zone. The current
semantics for an empty intersection are to ignore the mempolicy's
nodemask and honour the cpuset restrictions. This is checked in
node_zonelist(), but the racy update can happen after we have already
passed the check.

Such races should be protected by the seqlock task->mems_allowed_seq,
but it doesn't work here, because 1) mpol_rebind_mm() does not happen
under the seqlock for write, and doing so would lead to a deadlock, as
it takes mmap_sem for write while the allocation can hold mmap_sem for
read when taking the seqlock for read, and 2) the seqlock cookie of the
callers of node_zonelist() (alloc_pages_vma() and alloc_pages_current())
is different from the one of __alloc_pages_slowpath(), so there is still
a potential race window.

This patch fixes the issue by having __alloc_pages_slowpath() check for
an empty intersection of the cpuset and ac->nodemask before OOM or
allocation failure. If it is indeed empty, the nodemask is ignored and
the allocation is retried, which mimics node_zonelist(). This works
fine, because almost all callers of __alloc_pages_nodemask obtain the
nodemask via node_zonelist(). The only exception is new_node_page()
from hotplug, where a potential violation of the nodemask isn't an
issue, as there is already a fallback allocation attempt without any
nodemask. If there's a future caller that needs to have its specific
nodemask honoured over the task's cpuset restrictions, we'll have to
add e.g. a gfp flag for that.
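
For illustration, a minimal userspace sketch of the kind of reproducer
described above (this is NOT the modified LTP cpuset01 testcase; the
mapping size, the node numbers and the cpuset path in the comments are
only assumptions for the example) could look like this:

/*
 * Illustrative reproducer sketch (not the actual LTP cpuset01 testcase).
 * Assumes a machine with NUMA nodes 0 and 1 and the task running in a
 * dedicated cpuset. Build with: gcc -O2 repro.c -lnuma
 */
#include <numaif.h>             /* mbind(), MPOL_BIND (libnuma) */
#include <sys/mman.h>
#include <stdio.h>
#include <string.h>

#define LEN     (64UL << 20)    /* 64MB anonymous mapping to keep faulting in */

int main(void)
{
        /* per-vma policy: allow only node 0 (assumed to exist) */
        unsigned long nodemask = 1UL << 0;
        char *buf = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* Unlike set_mempolicy(2), this attaches the nodemask to the vma */
        if (mbind(buf, LEN, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0)) {
                perror("mbind");
                return 1;
        }

        /*
         * While this loop keeps faulting pages in, an external loop moves
         * the task's cpuset back and forth, e.g.:
         *   echo 1 > /sys/fs/cgroup/cpuset/test/cpuset.mems
         *   echo 0 > /sys/fs/cgroup/cpuset/test/cpuset.mems
         * During the update, the cpuset's mems_allowed and the vma's
         * nodemask can momentarily have an empty intersection; without
         * this patch the allocator may then OOM prematurely instead of
         * retrying.
         */
        for (;;) {
                memset(buf, 0, LEN);
                madvise(buf, LEN, MADV_DONTNEED);       /* drop pages, fault again */
        }
}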
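
The mems_allowed_seq protection mentioned above is the usual seqcount
cookie pattern: sample a cookie before consulting mems_allowed and retry
if a concurrent writer bumped the sequence in between. The following
self-contained userspace model (plain C11 atomics standing in for the
kernel's seqcount; an illustration only, not kernel code) shows that
pattern, and also why an empty-but-stable intersection has to be handled
separately from a raced one, which is what check_retry_cpuset() below
does:

/*
 * Userspace model of the read_mems_allowed_begin()/_retry() cookie
 * pattern. Node numbers and iteration counts are arbitrary.
 * Build with: gcc -O2 -pthread model.c
 */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_uint  mems_seq;                   /* models task->mems_allowed_seq */
static atomic_ulong mems_allowed = 1UL << 0;    /* models cpuset: node 0 only */

static unsigned begin(void)                     /* like read_mems_allowed_begin() */
{
        unsigned s;

        while ((s = atomic_load(&mems_seq)) & 1)        /* writer in progress */
                ;
        return s;
}

static int retry(unsigned cookie)               /* like read_mems_allowed_retry() */
{
        return atomic_load(&mems_seq) != cookie;
}

static void *cpuset_writer(void *arg)
{
        unsigned long masks[2] = { 1UL << 0, 1UL << 1 };        /* node 0 / node 1 */

        for (unsigned long i = 0; ; i++) {
                atomic_fetch_add(&mems_seq, 1);                 /* seq odd: write begins */
                atomic_store(&mems_allowed, masks[i & 1]);
                atomic_fetch_add(&mems_seq, 1);                 /* seq even: write done */
        }
        return arg;
}

int main(void)
{
        unsigned long mempolicy_nodes = 1UL << 0;       /* MPOL_BIND to node 0 */
        unsigned long retried = 0, empty_stable = 0;
        pthread_t t;

        pthread_create(&t, NULL, cpuset_writer, NULL);

        for (int i = 0; i < 10 * 1000 * 1000; i++) {
                unsigned cookie = begin();

                if (!(atomic_load(&mems_allowed) & mempolicy_nodes)) {
                        /* Empty intersection, as in the premature OOM case. */
                        if (retry(cookie))
                                retried++;      /* raced with the writer: retry */
                        else
                                empty_stable++; /* genuinely disjoint: ignore nodemask */
                }
        }
        printf("raced-and-retried: %lu, empty-but-stable: %lu\n",
               retried, empty_stable);
        return 0;
}

In the kernel, the cookie is taken by read_mems_allowed_begin() in the
allocator slowpath, and check_retry_cpuset() performs both checks right
before we would OOM or fail the allocation.
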
Signed-off-by: Vlastimil Babka 
---
 mm/page_alloc.c | 51 ++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 38 insertions(+), 13 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 32b31d661c9c..502d82f0e004 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3668,6 +3668,39 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 	return false;
 }
 
+static inline bool
+check_retry_cpuset(int cpuset_mems_cookie, struct alloc_context *ac)
+{
+	/*
+	 * It's possible that cpuset's mems_allowed and the nodemask from
+	 * mempolicy don't intersect. This should be normally dealt with by
+	 * policy_nodemask(), but it's possible to race with cpuset update in
+	 * such a way the check therein was true, and then it became false
+	 * before we got our cpuset_mems_cookie here.
+	 * This assumes that for all allocations, ac->nodemask can come only
+	 * from MPOL_BIND mempolicy (whose documented semantics is to be ignored
+	 * when it does not intersect with the cpuset restrictions) or the
+	 * caller can deal with a violated nodemask.
+	 */
+	if (cpusets_enabled() && ac->nodemask &&
+			!cpuset_nodemask_valid_mems_allowed(ac->nodemask)) {
+		ac->nodemask = NULL;
+		return true;
+	}
+
+	/*
+	 * When updating a task's mems_allowed or mempolicy nodemask, it is
+	 * possible to race with parallel threads in such a way that our
+	 * allocation can fail while the mask is being updated. If we are about
+	 * to fail, check if the cpuset changed during allocation and if so,
+	 * retry.
+	 */
+	if (read_mems_allowed_retry(cpuset_mems_cookie))
+		return true;
+
+	return false;
+}
+
 static inline struct page *
 __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 						struct alloc_context *ac)
@@ -3863,11 +3896,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 				&compaction_retries))
 		goto retry;
 
-	/*
-	 * It's possible we raced with cpuset update so the OOM would be
-	 * premature (see below the nopage: label for full explanation).
-	 */
-	if (read_mems_allowed_retry(cpuset_mems_cookie))
+
+	/* Deal with possible cpuset update races before we start OOM killing */
+	if (check_retry_cpuset(cpuset_mems_cookie, ac))
 		goto retry_cpuset;
 
 	/* Reclaim has failed us, start killing things */
@@ -3886,14 +3917,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	}
 
 nopage:
-	/*
-	 * When updating a task's mems_allowed or mempolicy nodemask, it is
-	 * possible to race with parallel threads in such a way that our
-	 * allocation can fail while the mask is being updated. If we are about
-	 * to fail, check if the cpuset changed during allocation and if so,
-	 * retry.
-	 */
-	if (read_mems_allowed_retry(cpuset_mems_cookie))
+	/* Deal with possible cpuset update races before we fail */
+	if (check_retry_cpuset(cpuset_mems_cookie, ac))
 		goto retry_cpuset;
 
 	/*
-- 
2.12.2
Shutemov" , Vlastimil Babka Commit e47483bca2cc ("mm, page_alloc: fix premature OOM when racing with cpuset mems update") has fixed known recent regressions found by LTP's cpuset01 testcase. I have however found that by modifying the testcase to use per-vma mempolicies via bind(2) instead of per-task mempolicies via set_mempolicy(2), the premature OOM still happens and the issue is much older. The root of the problem is that the cpuset's mems_allowed and mempolicy's nodemask can temporarily have no intersection, thus get_page_from_freelist() cannot find any usable zone. The current semantic for empty intersection is to ignore mempolicy's nodemask and honour cpuset restrictions. This is checked in node_zonelist(), but the racy update can happen after we already passed the check. Such races should be protected by the seqlock task->mems_allowed_seq, but it doesn't work here, because 1) mpol_rebind_mm() does not happen under seqlock for write, and doing so would lead to deadlock, as it takes mmap_sem for write, while the allocation can have mmap_sem for read when it's taking the seqlock for read. And 2) the seqlock cookie of callers of node_zonelist() (alloc_pages_vma() and alloc_pages_current()) is different than the one of __alloc_pages_slowpath(), so there's still a potential race window. This patch fixes the issue by having __alloc_pages_slowpath() check for empty intersection of cpuset and ac->nodemask before OOM or allocation failure. If it's indeed empty, the nodemask is ignored and allocation retried, which mimics node_zonelist(). This works fine, because almost all callers of __alloc_pages_nodemask are obtaining the nodemask via node_zonelist(). The only exception is new_node_page() from hotplug, where the potential violation of nodemask isn't an issue, as there's already a fallback allocation attempt without any nodemask. If there's a future caller that needs to have its specific nodemask honoured over task's cpuset restrictions, we'll have to e.g. add a gfp flag for that. Signed-off-by: Vlastimil Babka --- mm/page_alloc.c | 51 ++++++++++++++++++++++++++++++++++++++------------- 1 file changed, 38 insertions(+), 13 deletions(-) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 32b31d661c9c..502d82f0e004 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -3668,6 +3668,39 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order, return false; } +static inline bool +check_retry_cpuset(int cpuset_mems_cookie, struct alloc_context *ac) +{ + /* + * It's possible that cpuset's mems_allowed and the nodemask from + * mempolicy don't intersect. This should be normally dealt with by + * policy_nodemask(), but it's possible to race with cpuset update in + * such a way the check therein was true, and then it became false + * before we got our cpuset_mems_cookie here. + * This assumes that for all allocations, ac->nodemask can come only + * from MPOL_BIND mempolicy (whose documented semantics is to be ignored + * when it does not intersect with the cpuset restrictions) or the + * caller can deal with a violated nodemask. + */ + if (cpusets_enabled() && ac->nodemask && + !cpuset_nodemask_valid_mems_allowed(ac->nodemask)) { + ac->nodemask = NULL; + return true; + } + + /* + * When updating a task's mems_allowed or mempolicy nodemask, it is + * possible to race with parallel threads in such a way that our + * allocation can fail while the mask is being updated. If we are about + * to fail, check if the cpuset changed during allocation and if so, + * retry. 
+ */ + if (read_mems_allowed_retry(cpuset_mems_cookie)) + return true; + + return false; +} + static inline struct page * __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order, struct alloc_context *ac) @@ -3863,11 +3896,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order, &compaction_retries)) goto retry; - /* - * It's possible we raced with cpuset update so the OOM would be - * premature (see below the nopage: label for full explanation). - */ - if (read_mems_allowed_retry(cpuset_mems_cookie)) + + /* Deal with possible cpuset update races before we start OOM killing */ + if (check_retry_cpuset(cpuset_mems_cookie, ac)) goto retry_cpuset; /* Reclaim has failed us, start killing things */ @@ -3886,14 +3917,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order, } nopage: - /* - * When updating a task's mems_allowed or mempolicy nodemask, it is - * possible to race with parallel threads in such a way that our - * allocation can fail while the mask is being updated. If we are about - * to fail, check if the cpuset changed during allocation and if so, - * retry. - */ - if (read_mems_allowed_retry(cpuset_mems_cookie)) + /* Deal with possible cpuset update races before we fail */ + if (check_retry_cpuset(cpuset_mems_cookie, ac)) goto retry_cpuset; /* -- 2.12.2 -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org