From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S932827AbbCPOTJ (ORCPT );
	Mon, 16 Mar 2015 10:19:09 -0400
Received: from mail.linuxfoundation.org ([140.211.169.12]:55296 "EHLO
	mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S932772AbbCPOS5 (ORCPT );
	Mon, 16 Mar 2015 10:18:57 -0400
From: Greg Kroah-Hartman 
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman ,
	stable@vger.kernel.org,
	Haggai Eran ,
	Shachar Raindel ,
	Roland Dreier 
Subject: [PATCH 3.19 125/177] IB/core: Properly handle registration of on-demand paging MRs after dereg
Date: Mon, 16 Mar 2015 15:08:52 +0100
Message-Id: <20150316140818.781296698@linuxfoundation.org>
X-Mailer: git-send-email 2.3.3
In-Reply-To: <20150316140813.085032723@linuxfoundation.org>
References: <20150316140813.085032723@linuxfoundation.org>
User-Agent: quilt/0.64
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-15
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

3.19-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Haggai Eran 

commit 4fc701ead77ede96df3e8b3de13fdf2b1326ee5b upstream.

When the last on-demand paging MR is released, the notifier count is
left non-zero so that concurrent page faults will have to abort. If a
new MR is then registered, the counter is reset. However, the decision
to put the new MR on the list of MRs waiting for the notifier count to
reach zero is made before the counter is reset. An invalidation or
another MR registration can release the MR to handle page faults, but
without such an event the MR can wait forever.

The patch fixes this issue by adding a check whether the MR is the
first on-demand paging MR when deciding whether it is ready to handle
page faults. If it is the first MR, we know that there are no mmu
notifiers running in parallel to the registration.

Fixes: 882214e2b128 ("IB/core: Implement support for MMU notifiers regarding on demand paging regions")
Signed-off-by: Haggai Eran 
Signed-off-by: Shachar Raindel 
Signed-off-by: Roland Dreier 
Signed-off-by: Greg Kroah-Hartman 

---
 drivers/infiniband/core/umem_odp.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -294,7 +294,8 @@ int ib_umem_odp_get(struct ib_ucontext *
 	if (likely(ib_umem_start(umem) != ib_umem_end(umem)))
 		rbt_ib_umem_insert(&umem->odp_data->interval_tree,
 				   &context->umem_tree);
-	if (likely(!atomic_read(&context->notifier_count)))
+	if (likely(!atomic_read(&context->notifier_count)) ||
+	    context->odp_mrs_count == 1)
 		umem->odp_data->mn_counters_active = true;
 	else
 		list_add(&umem->odp_data->no_private_counters,
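
For readers tracing the logic outside the kernel tree, the decision the hunk
changes can be modeled with a minimal userspace sketch. The names below
(struct odp_context, mr_counters_active) are illustrative stand-ins and not
kernel API; only the fields notifier_count and odp_mrs_count and the
mn_counters_active outcome mirror what the patch touches.

/*
 * Standalone sketch of the post-fix decision in ib_umem_odp_get():
 * may the newly registered MR start servicing page faults immediately,
 * or must it wait on the no_private_counters list?
 */
#include <stdbool.h>
#include <stdio.h>

struct odp_context {
	int notifier_count;	/* in-flight mmu-notifier invalidations */
	int odp_mrs_count;	/* ODP MRs registered, including the new one */
};

static bool mr_counters_active(const struct odp_context *ctx)
{
	/* Pre-fix: this was the only test.  A stale non-zero count left
	 * behind by the last deregistered MR parked the new MR forever. */
	if (ctx->notifier_count == 0)
		return true;

	/* The fix: the first ODP MR cannot race with any notifier, so a
	 * leftover count is harmless and the MR may become active now. */
	if (ctx->odp_mrs_count == 1)
		return true;

	return false;
}

int main(void)
{
	/* Stale count from a previous deregistration, first MR again. */
	struct odp_context ctx = { .notifier_count = 1, .odp_mrs_count = 1 };

	printf("first MR with stale count: active=%d\n",
	       mr_counters_active(&ctx));

	/* A later MR while an invalidation is genuinely in flight. */
	ctx.odp_mrs_count = 2;
	printf("second MR during invalidation: active=%d\n",
	       mr_counters_active(&ctx));

	return 0;
}

In the sketch, the first case returns active because no notifier can be
running against a context that had no ODP MRs, while the second still waits,
matching the behaviour the commit message describes.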