From: Greg Kurz <groug@kaod.org>
To: qemu-devel@nongnu.org
Subject: [PATCH v2 0/7] virtiofsd: Avoid potential deadlocks
Date: Fri, 12 Mar 2021 10:22:05 +0100
Message-Id: <20210312092212.782255-1-groug@kaod.org>
Cc: Daniel P. Berrangé, "Michael S. Tsirkin", "Dr. David Alan Gilbert",
    Greg Kurz, virtio-fs@redhat.com, Stefan Hajnoczi, Vivek Goyal

A deadlock condition potentially exists if a vhost-user process needs
to send a request to QEMU on the slave channel while it is processing
a vhost-user message.

This doesn't seem to affect any vhost-user implementation so far, but
it is currently biting the upcoming enablement of DAX with virtio-fs.
The issue is observed when the guest does an emergency reboot while a
mapping still exists in the DAX window, which is very easy to trigger
with a busy enough workload (e.g. as simulated by Blogbench [4]):

- QEMU sends VHOST_USER_GET_VRING_BASE to virtiofsd.

- In order to complete the request, virtiofsd then asks QEMU to remove
  the mapping on the slave channel.

All these dialogs are synchronous, hence the deadlock.
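To make the circular wait concrete, here is a toy two-thread program
that reproduces the blocking pattern described above. It is purely
illustrative: the socketpairs, message bytes and thread bodies are all
made up for the example, and none of this is QEMU or virtiofsd code.

    #include <pthread.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int master[2];   /* QEMU <-> virtiofsd vhost-user channel */
    static int slave[2];    /* virtiofsd <-> QEMU slave channel      */

    /* "QEMU": sends GET_VRING_BASE, then blocks on the reply. */
    static void *qemu_thread(void *arg)
    {
        char reply;
        (void)arg;
        write(master[0], "G", 1);     /* VHOST_USER_GET_VRING_BASE */
        read(master[0], &reply, 1);   /* blocks forever */
        return NULL;
    }

    /* "virtiofsd": must first ask QEMU over the slave channel to
     * remove a DAX mapping, and blocks on that answer.  QEMU never
     * services the slave channel because it is blocked above, so
     * neither side can make progress. */
    static void *virtiofsd_thread(void *arg)
    {
        char req, ack;
        (void)arg;
        read(master[1], &req, 1);     /* got GET_VRING_BASE */
        write(slave[0], "U", 1);      /* "please remove the mapping" */
        read(slave[0], &ack, 1);      /* blocks forever */
        write(master[1], "R", 1);     /* the reply, never reached */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        socketpair(AF_UNIX, SOCK_STREAM, 0, master);
        socketpair(AF_UNIX, SOCK_STREAM, 0, slave);
        pthread_create(&a, NULL, qemu_thread, NULL);
        pthread_create(&b, NULL, virtiofsd_thread, NULL);
        pthread_join(a, NULL);        /* hangs: the deadlock */
        return 0;
    }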
Another deadlock condition exists in the virtiofsd code itself, related
to the use of the vu_dispatch rwlock; the last patch addresses it by
releasing the lock while stopping a queue.

This series supersedes Vivek's previous attempt [1] to fix the
deadlocks. Basically, instead of introducing a new vhost-user protocol
message to specifically shut down the slave channel, this series
introduces a nested event loop in vhost_user_read(), as suggested by
Stefan Hajnoczi. This makes it possible to monitor and service requests
on the slave channel while waiting for virtiofsd to answer a vhost-user
request.
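As a minimal sketch of that idea, using plain GLib rather than QEMU's
actual QIOChannel plumbing (the helpers reply_ready(), slave_request()
and read_reply_nested() are invented for the example, not QEMU APIs):
attach watches for both the reply and the slave channel to a private
GMainContext and spin a nested GMainLoop until the reply arrives,
dispatching slave requests in the meantime.

    #include <glib.h>

    typedef struct {
        GMainLoop *loop;
        gboolean   got_reply;
    } WaitState;

    /* The awaited reply arrived on the master channel: leave the
     * nested loop. */
    static gboolean reply_ready(GIOChannel *ch, GIOCondition c,
                                gpointer data)
    {
        WaitState *w = data;
        w->got_reply = TRUE;
        g_main_loop_quit(w->loop);
        return G_SOURCE_REMOVE;
    }

    /* A request showed up on the slave channel: service it now
     * instead of leaving its sender blocked. */
    static gboolean slave_request(GIOChannel *ch, GIOCondition c,
                                  gpointer data)
    {
        /* e.g. remove a DAX mapping, then ack the request */
        return G_SOURCE_CONTINUE;
    }

    /* Wait for a reply on 'master' while still dispatching 'slave'. */
    static gboolean read_reply_nested(GIOChannel *master,
                                      GIOChannel *slave)
    {
        GMainContext *ctx = g_main_context_new();
        GMainLoop *loop = g_main_loop_new(ctx, FALSE);
        WaitState w = { .loop = loop, .got_reply = FALSE };

        GSource *ms = g_io_create_watch(master, G_IO_IN);
        g_source_set_callback(ms, (GSourceFunc)reply_ready, &w, NULL);
        g_source_attach(ms, ctx);

        GSource *ss = g_io_create_watch(slave, G_IO_IN);
        g_source_set_callback(ss, (GSourceFunc)slave_request, NULL, NULL);
        g_source_attach(ss, ctx);

        g_main_loop_run(loop);   /* nested: only these two sources run */

        g_source_destroy(ms);
        g_source_destroy(ss);
        g_source_unref(ms);
        g_source_unref(ss);
        g_main_loop_unref(loop);
        g_main_context_unref(ctx);
        return w.got_reply;
    }

Using a private GMainContext keeps the nested loop from dispatching
unrelated sources of the default context while the read is pending.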
This was tested with the latest DAX patchset posted to the virtio-fs
list [2], rebased on top of the present series [3]. The testing
scenario is:

1) start VM with a DAX capable vhost-user-fs device
2) mount -o dax in the guest
3) run Blogbench [4] in the virtiofs mount
4) wait 10s, which is enough to have active mappings
5) echo b > /proc/sysrq-trigger
6) wait for the guest to reboot and start over from step 2

Without this series, only a couple of reboots (< 5) are needed to hit
a deadlock. With this series applied, an overnight test could reboot
the guest 1500+ times without any issues.

[1] https://listman.redhat.com/archives/virtio-fs/2021-January/msg00073.html

[2] https://listman.redhat.com/archives/virtio-fs/2021-February/msg00058.html

[3] Tree with this series + DAX available at:

    https://gitlab.com/gkurz/qemu/-/commits/virtio-fs-dax-no-deadlock

[4] https://github.com/jedisct1/Blogbench

Changes since v1:
- added some preliminary fixes and cleanups for the slave channel code
- refactored some code and addressed comments (see individual patches)
- more testing

Greg Kurz (7):
  vhost-user: Drop misleading EAGAIN checks in slave_read()
  vhost-user: Fix double-close on slave_read() error path
  vhost-user: Factor out duplicated slave_fd teardown code
  vhost-user: Convert slave channel to QIOChannelSocket
  vhost-user: Introduce nested event loop in vhost_user_read()
  vhost-user: Monitor slave channel in vhost_user_read()
  virtiofsd: Release vu_dispatch_lock when stopping queue

 hw/virtio/vhost-user.c        | 217 +++++++++++++++++++++-------------
 tools/virtiofsd/fuse_virtio.c |   6 +
 2 files changed, 144 insertions(+), 79 deletions(-)

-- 
2.26.2