From mboxrd@z Thu Jan  1 00:00:00 1970
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Daniel Jurgens, Erez Shitrit,
	Jason Gunthorpe, Maor Gottlieb, Michael Guralnik, Moni Shoua,
	Parav Pandit, Sean Hefty, Valentine Fatiev, Yishai Hadas,
	Yonatan Cohen, Zhu Yanjun
Subject: [PATCH rdma-rc 6/9] IB/mlx5: Fix async events cleanup flows
Date: Wed, 12 Feb 2020 09:26:32 +0200
Message-Id: <20200212072635.682689-7-leon@kernel.org>
X-Mailer: git-send-email 2.24.1
In-Reply-To: <20200212072635.682689-1-leon@kernel.org>
References: <20200212072635.682689-1-leon@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-rdma@vger.kernel.org

From: Yishai Hadas

Fix the async events flows to prevent a race between the read event APIs
and their destroy uobj API. In both the async command and async event
flows, delete the event entry from the list before de-allocating its
memory, and fix the async command flow to check 'is_destroyed' properly
under the lock. This prevents an entry from being accessed after it has
been de-allocated.
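To illustrate the ordering this enforces, here is a minimal userspace
sketch of the same pattern (a hedged example, not the driver's code: a
pthread mutex stands in for the kernel spinlock, and the event_queue /
entry types are invented for illustration). The reader honours
is_destroyed only once it holds the lock, and the destroy path unlinks
every entry before freeing it, so no reader can reach freed memory
through the list:

/* Illustrative analogue of the patch's locking/teardown pattern.
 * NOT kernel code: a pthread mutex replaces the spinlock, and these
 * types are invented for the example.
 */
#include <pthread.h>
#include <stdlib.h>

struct entry {
	struct entry *next;
	int data;
};

struct event_queue {
	pthread_mutex_t lock;
	struct entry *head;
	int is_destroyed;
};

/* Read side: as in devx_async_cmd_event_read(), the destroy flag is
 * checked only under the same lock that protects the list. */
static struct entry *queue_pop(struct event_queue *q)
{
	struct entry *e;

	pthread_mutex_lock(&q->lock);
	if (q->is_destroyed) {			/* checked under the lock */
		pthread_mutex_unlock(&q->lock);
		return NULL;			/* the driver returns -EIO */
	}
	e = q->head;
	if (e)
		q->head = e->next;		/* unlink before the caller frees */
	pthread_mutex_unlock(&q->lock);
	return e;
}

/* Destroy side: as in devx_async_event_destroy_uobj(), mark the queue
 * destroyed and delete each entry from the list before freeing its
 * memory, all under the lock. */
static void queue_destroy(struct event_queue *q)
{
	struct entry *e, *next;

	pthread_mutex_lock(&q->lock);
	q->is_destroyed = 1;
	for (e = q->head; e; e = next) {
		next = e->next;
		e->next = NULL;			/* delete from the list first... */
		free(e);			/* ...then release the memory */
	}
	q->head = NULL;
	pthread_mutex_unlock(&q->lock);
}

int main(void)
{
	struct event_queue q = { PTHREAD_MUTEX_INITIALIZER, NULL, 0 };
	struct entry *e = calloc(1, sizeof(*e));

	e->next = q.head;
	q.head = e;			/* one pending event */

	queue_destroy(&q);		/* unlink + free, set is_destroyed */
	return queue_pop(&q) != NULL;	/* reads now fail cleanly */
}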
Fixes: f7c8416ccea5 ("RDMA/core: Simplify destruction of FD uobjects")
Signed-off-by: Yishai Hadas
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/devx.c | 51 +++++++++++++++++--------------
 1 file changed, 28 insertions(+), 23 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
index d7efc9f6daf0..46e1ab771f10 100644
--- a/drivers/infiniband/hw/mlx5/devx.c
+++ b/drivers/infiniband/hw/mlx5/devx.c
@@ -2319,14 +2319,12 @@ static int deliver_event(struct devx_event_subscription *event_sub,
 
 	if (ev_file->omit_data) {
 		spin_lock_irqsave(&ev_file->lock, flags);
-		if (!list_empty(&event_sub->event_list)) {
+		if (!list_empty(&event_sub->event_list) ||
+		    ev_file->is_destroyed) {
 			spin_unlock_irqrestore(&ev_file->lock, flags);
 			return 0;
 		}
 
-		/* is_destroyed is ignored here because we don't have any memory
-		 * allocation to clean up for the omit_data case
-		 */
 		list_add_tail(&event_sub->event_list, &ev_file->event_list);
 		spin_unlock_irqrestore(&ev_file->lock, flags);
 		wake_up_interruptible(&ev_file->poll_wait);
@@ -2473,11 +2471,11 @@ static ssize_t devx_async_cmd_event_read(struct file *filp, char __user *buf,
 			return -ERESTARTSYS;
 		}
 
-		if (list_empty(&ev_queue->event_list) &&
-		    ev_queue->is_destroyed)
-			return -EIO;
-
 		spin_lock_irq(&ev_queue->lock);
+		if (ev_queue->is_destroyed) {
+			spin_unlock_irq(&ev_queue->lock);
+			return -EIO;
+		}
 	}
 
 	event = list_entry(ev_queue->event_list.next,
@@ -2551,10 +2549,6 @@ static ssize_t devx_async_event_read(struct file *filp, char __user *buf,
 		return -EOVERFLOW;
 	}
 
-	if (ev_file->is_destroyed) {
-		spin_unlock_irq(&ev_file->lock);
-		return -EIO;
-	}
 	while (list_empty(&ev_file->event_list)) {
 		spin_unlock_irq(&ev_file->lock);
 
@@ -2667,8 +2661,10 @@ static int devx_async_cmd_event_destroy_uobj(struct ib_uobject *uobj,
 
 	spin_lock_irq(&comp_ev_file->ev_queue.lock);
 	list_for_each_entry_safe(entry, tmp,
-				 &comp_ev_file->ev_queue.event_list, list)
+				 &comp_ev_file->ev_queue.event_list, list) {
+		list_del(&entry->list);
 		kvfree(entry);
+	}
 	spin_unlock_irq(&comp_ev_file->ev_queue.lock);
 	return 0;
 };
@@ -2680,11 +2676,29 @@ static int devx_async_event_destroy_uobj(struct ib_uobject *uobj,
 		container_of(uobj, struct devx_async_event_file,
 			     uobj);
 	struct devx_event_subscription *event_sub, *event_sub_tmp;
-	struct devx_async_event_data *entry, *tmp;
 	struct mlx5_ib_dev *dev = ev_file->dev;
 
 	spin_lock_irq(&ev_file->lock);
 	ev_file->is_destroyed = 1;
+
+	/* free the pending events allocation */
+	if (ev_file->omit_data) {
+		struct devx_event_subscription *event_sub, *tmp;
+
+		list_for_each_entry_safe(event_sub, tmp, &ev_file->event_list,
+					 event_list)
+			list_del_init(&event_sub->event_list);
+
+	} else {
+		struct devx_async_event_data *entry, *tmp;
+
+		list_for_each_entry_safe(entry, tmp, &ev_file->event_list,
+					 list) {
+			list_del(&entry->list);
+			kfree(entry);
+		}
+	}
+
 	spin_unlock_irq(&ev_file->lock);
 	wake_up_interruptible(&ev_file->poll_wait);
 
@@ -2699,15 +2713,6 @@ static int devx_async_event_destroy_uobj(struct ib_uobject *uobj,
 	}
 	mutex_unlock(&dev->devx_event_table.event_xa_lock);
 
-	/* free the pending events allocation */
-	if (!ev_file->omit_data) {
-		spin_lock_irq(&ev_file->lock);
-		list_for_each_entry_safe(entry, tmp,
-					 &ev_file->event_list, list)
-			kfree(entry); /* read can't come any more */
-		spin_unlock_irq(&ev_file->lock);
-	}
-
 	put_device(&dev->ib_dev.dev);
 	return 0;
 };
-- 
2.24.1