From: Oleg Drokin <green@linuxhacker.ru>
To: Al Viro <viro@ZenIV.linux.org.uk>
Cc: Mailing List <linux-kernel@vger.kernel.org>,
	"<linux-fsdevel@vger.kernel.org>" <linux-fsdevel@vger.kernel.org>
Subject: Re: More parallel atomic_open/d_splice_alias fun with NFS and possibly more FSes.
Date: Sun, 3 Jul 2016 20:37:22 -0400	[thread overview]
Message-ID: <0145470E-667E-4A8D-AB79-F897322DA441@linuxhacker.ru> (raw)
In-Reply-To: <20160704000802.GH14480@ZenIV.linux.org.uk>


On Jul 3, 2016, at 8:08 PM, Al Viro wrote:

> On Sun, Jul 03, 2016 at 07:29:46AM +0100, Al Viro wrote:
> 
>> Tentative NFS patch follows; I don't understand Lustre well enough, but it
>> looks like a plausible strategy there as well.
> 
> Speaking of Lustre: WTF is
>                        /* Open dentry. */
>                        if (S_ISFIFO(d_inode(dentry)->i_mode)) {
>                                /* We cannot call open here as it would
>                                 * deadlock.
>                                 */
>                                if (it_disposition(it, DISP_ENQ_OPEN_REF))
>                                        ptlrpc_req_finished(
>                                                       (struct ptlrpc_request *)
>                                                          it->d.lustre.it_data);
>                                rc = finish_no_open(file, de);
>                        } else {
> about and why do we only do that to FIFOs?  What about symlinks or device
> nodes?  Directories, for that matter...  Shouldn't that be if (!S_ISREG(...))
> instead?

Hm… This dates to sometime in 2006 and my memory is a bit hazy here.

I think when we called into the open, it went into the FIFO open path and blocked there
waiting for the other end to be opened. Something like that. And we cannot afford to block
here, because we are holding locks that need to be released in predictable time.

This code is actually unreachable now, because the server never returns an open handle
for special device nodes anymore (there's a comment about it in the current staging tree,
but I guess you are looking at some prior version).

I imagine device nodes might have posed a similar risk, but it did not occur to me
to test them separately, and the test suite does not cover them either.

Directories do not block when you open them, so they are OK and we can
atomically open them too, I guess.
Symlinks are handled specially on the server, and the open never returns
an actual open handle for them, so this path is also unreachable for symlinks.

