From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from mail-oi0-f68.google.com ([209.85.218.68]:40945 "EHLO
        mail-oi0-f68.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
        with ESMTP id S1727951AbeGZK2z (ORCPT );
        Thu, 26 Jul 2018 06:28:55 -0400
Received: by mail-oi0-f68.google.com with SMTP id w126-v6so1710916oie.7
        for ; Thu, 26 Jul 2018 02:13:00 -0700 (PDT)
MIME-Version: 1.0
In-Reply-To: 
References: <000000000000bc17b60571a60434@google.com>
From: Miklos Szeredi 
Date: Thu, 26 Jul 2018 11:12:59 +0200
Message-ID: 
Subject: Re: INFO: task hung in fuse_reverse_inval_entry
To: Dmitry Vyukov 
Cc: linux-fsdevel , LKML , syzkaller-bugs , syzbot 
Content-Type: text/plain; charset="UTF-8"
Sender: linux-fsdevel-owner@vger.kernel.org
List-ID: 

On Thu, Jul 26, 2018 at 10:44 AM, Miklos Szeredi wrote:
> On Wed, Jul 25, 2018 at 11:12 AM, Dmitry Vyukov wrote:
>> On Tue, Jul 24, 2018 at 5:17 PM, Miklos Szeredi wrote:
>>
>> Maybe more waits in fuse need to be interruptible? E.g.
>> request_wait_answer?
>
> That's an interesting aspect. Making request_wait_answer always be
> killable would help with the issue you raise (killing the set of
> processes taking part in the deadlock should resolve the deadlock),
> but it breaks another aspect of the interface.
>
> Namely that userspace filesystems expect some serialization from the
> kernel when performing operations. If we allow killing a process in
> the middle of an fs operation, then that serialization is no longer
> there, which can break the server.
>
> One solution to that is to duplicate all the locking in the server
> (libfuse normally), but that would not solve the issue for legacy
> libfuse or legacy non-libfuse servers. It would also be difficult to
> test. Also it doesn't solve the problem of killing the server, as
> that alone doesn't resolve the deadlock.

Umm, we can actually do better.

Duplicate all VFS locking in the fuse kernel implementation: when
killing a task that has an outstanding request, return immediately
(which releases the VFS-level lock and hence resolves the deadlock),
but hold onto our own lock until the reply from the userspace server
comes back.

Need to think about the details; it might not be easy to do this
properly. Notably the memory management locks (page->lock, mmap_sem,
etc.) are notoriously tricky.

Thanks,
Miklos
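
(To make the "always killable" idea a bit more concrete, here is a
rough, hypothetical sketch of what the final wait for the server's
reply could look like. This is not the actual request_wait_answer()
from fs/fuse/dev.c; req->waitq, FR_FINISHED and req->out.h.error are
assumed/simplified names for the fuse_req fields.)

/*
 * Simplified sketch, not real kernel code: the wait for the reply is
 * killable, so a fatal signal lets the calling task return -EINTR and
 * unwind (dropping its VFS locks) instead of sleeping uninterruptibly
 * until the userspace server answers.
 */
static int request_wait_answer_killable(struct fuse_req *req)
{
	if (wait_event_killable(req->waitq,
				test_bit(FR_FINISHED, &req->flags)))
		return -EINTR;	/* fatal signal: bail out, caller unwinds */

	return req->out.h.error;	/* server's reply status */
}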
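
(And a similarly rough sketch of the "duplicate the VFS locking inside
fuse" idea. fi->op_sem and both helpers are made-up names, not
existing kernel interfaces; a semaphore is used instead of a mutex
because the lock may end up being released from the reply path rather
than by the task that acquired it.)

/*
 * Illustrative only. The killed task returns early, so the VFS-level
 * locks are released and the deadlock goes away, while op_sem keeps
 * the operation serialized from the server's point of view until the
 * reply (or the connection abort) finally arrives.
 */
static int fuse_serialized_op(struct fuse_inode *fi, struct fuse_req *req)
{
	int err;

	down(&fi->op_sem);
	err = send_request_and_wait_killable(fi, req);
	if (err == -EINTR)
		return err;	/* op_sem stays held; released by
				 * fuse_serialized_op_done() when the
				 * answer or abort comes in */
	up(&fi->op_sem);
	return err;
}

/* Called from the reply/abort path for a request whose task was killed. */
static void fuse_serialized_op_done(struct fuse_inode *fi)
{
	up(&fi->op_sem);
}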