From: Maxim Patlasov <mpatlasov@parallels.com>
To: John Muir <john@jmuir.com>
Cc: Miklos Szeredi <miklos@szeredi.hu>,
	fuse-devel <fuse-devel@lists.sourceforge.net>,
	Linux List <linux-kernel@vger.kernel.org>
Subject: Re: [fuse-devel] [PATCH 0/5] fuse: close file synchronously (v2)
Date: Mon, 9 Jun 2014 16:00:11 +0400	[thread overview]
Message-ID: <5395A1CB.3070504@parallels.com> (raw)
In-Reply-To: <1AC1913F-47DF-4707-8E27-F2E7334CE2D6@jmuir.com>

On 06/09/2014 03:11 PM, John Muir wrote:
> On 2014.06.09, at 12:46 , Maxim Patlasov <mpatlasov@parallels.com> wrote:
>
>> On 06/09/2014 01:26 PM, John Muir wrote:
>>> On 2014.06.09, at 9:50 , Maxim Patlasov <mpatlasov@parallels.com> wrote:
>>>
>>>> On 06/06/2014 05:51 PM, John Muir wrote:
>>>>> On 2014.06.06, at 15:27 , Maxim Patlasov <mpatlasov@parallels.com> wrote:
>>>>>
>>>>>> The patch-set resolves the problem by making fuse_release synchronous:
>>>>>> wait for ACK from userspace for FUSE_RELEASE if the feature is ON.
>>>>> Why not make this feature per-file with a new flag bit in struct fuse_file_info rather than as a file-system global?
>>>> I don't expect a great demand for such granularity. A file-system global "close_wait" conveys a general user expectation about filesystem behaviour in a distributed environment: if you stopped using a file on a given node, whether the file becomes immediately accessible from another node.
>>>>
>>> By user do you mean the end-user, or the implementor of the file-system? It seems to me that the end-user doesn't care, and just wants the file-system to work as expected. I don't think we're really talking about the end-user.
>> No, this is exactly about end-user expectations. Imagine a complicated, heavily loaded shared storage where handling FUSE_RELEASE in userspace may take a few minutes. In the close_wait=0 case, an end-user who has just called close(2) has no idea when it's safe to access the file from another node, or even when it's OK to umount the filesystem.
> I think we're saying the same thing here from different perspectives. The end-user wants the file-system to operate with the semantics you describe, but I don't think it makes sense to give the end-user control over those semantics. The file-system itself should be implemented that way, or not, or per-file.
>
> If it's a read-only file, then doesn't this add the overhead of having the kernel wait for the user-space file-system process to respond before closing it? In my experience, there is significant cost to kernel/user-space messaging in FUSE when manipulating thousands of files.
>
>>> The implementor of a file-system, on the other hand, might want the semantics for close_wait on some files, but not on others. Won't there be a performance impact? Some distributed file-systems might want this on specific files only. Implementing it as a flag on the struct fuse_file_info gives the flexibility to the file-system implementor.
>> fuse_file_info is a userspace structure; in-kernel fuse knows nothing about it. In the close_wait=1 case, nothing prevents a file-system implementation from ACK-ing the FUSE_RELEASE request immediately (for specific files) and scheduling the actual handling for future processing.
> Of course you know I meant that you'd add another flag to both fuse_file_info and to its kernel equivalent for those flags, struct fuse_open_out -> open_flags. This is where other such per-file options are specified, such as whether or not to keep the in-kernel cache for a file, whether or not to allow direct-io, and whether or not to allow seek.
>
> Anyway, I guess you're the one doing all the work on this and if you have a particular implementation that doesn't require such fine-grained control, and no one else does then it's up to you. I'm just trying to show an alternative implementation that gives the file-system implementor more control while keeping the ability to meet user expectations.

Thank you, John. That really depends on whether someone else wants 
fine-grained control or not. I'm generally OK with re-working the 
patch-set if more requesters emerge.

Thanks,
Maxim

Thread overview: 14+ messages
2014-06-06 13:27 [PATCH 0/5] fuse: close file synchronously (v2) Maxim Patlasov
2014-06-06 13:27 ` [PATCH 1/5] fuse: add close_wait flag to fuse_conn Maxim Patlasov
2014-06-06 13:28 ` [PATCH 2/5] fuse: cosmetic rework of fuse_send_readpages Maxim Patlasov
2014-06-06 13:29 ` [PATCH 3/5] fuse: wait for end of IO on release Maxim Patlasov
2014-06-06 13:30 ` [PATCH 4/5] fuse: enable close_wait feature Maxim Patlasov
2014-06-06 13:31 ` [PATCH 5/5] fuse: fix synchronous case of fuse_file_put() Maxim Patlasov
2014-06-06 13:51 ` [fuse-devel] [PATCH 0/5] fuse: close file synchronously (v2) John Muir
2014-06-09  7:50   ` Maxim Patlasov
2014-06-09  9:26     ` John Muir
2014-06-09 10:46       ` Maxim Patlasov
2014-06-09 11:11         ` John Muir
2014-06-09 12:00           ` Maxim Patlasov [this message]
2014-08-13 12:44 ` Miklos Szeredi
2014-08-14 12:14   ` Maxim Patlasov
