* per-inode locks in FUSE (kernel vs userspace)
From: Eric Wong @ 2021-12-03  0:05 UTC
  To: fuse-devel, linux-fsdevel

Hi all, I'm working on a new multi-threaded FS using the
libfuse3 fuse_lowlevel.h API.  It looks to me like the kernel
already performs the necessary locking on a per-inode basis to
save me some work in userspace.

In particular, I originally thought I'd need pthreads mutexes on
a per-inode (fuse_ino_t) basis to protect userspace data
structures between the .setattr (truncate), .fsync, and
.write_buf userspace callbacks.
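
Concretely, what I had in mind was something like this minimal
sketch (all names here are mine, purely for illustration):

	#define FUSE_USE_VERSION 31
	#include <fuse_lowlevel.h>
	#include <pthread.h>

	/*
	 * A small fixed pool of mutexes hashed by inode number, to
	 * avoid allocating one mutex per inode.
	 */
	#define INO_LOCKS 64
	static pthread_mutex_t ino_locks[INO_LOCKS];

	static void ino_locks_init(void)
	{
		for (int i = 0; i < INO_LOCKS; i++)
			pthread_mutex_init(&ino_locks[i], NULL);
	}

	static pthread_mutex_t *ino_lock(fuse_ino_t ino)
	{
		return &ino_locks[ino % INO_LOCKS];
	}

	/* e.g. in the .fsync callback: */
	static void my_fsync(fuse_req_t req, fuse_ino_t ino, int datasync,
			     struct fuse_file_info *fi)
	{
		pthread_mutex_t *m = ino_lock(ino);

		(void)datasync; (void)fi; /* unused in this sketch */
		pthread_mutex_lock(m);
		/* ... flush this inode's userspace state ... */
		pthread_mutex_unlock(m);
		fuse_reply_err(req, 0);
	}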

However, upon reading the kernel source, I can see that fuse_fsync
and fuse_{cache,direct}_write_iter in fs/fuse/file.c all use
inode_lock.  do_truncate in fs/open.c also uses inode_lock.

So it looks like implementing extra locking in userspace would
do nothing useful in my case, right?

Thanks.


* Re: per-inode locks in FUSE (kernel vs userspace)
From: Vivek Goyal @ 2021-12-06 22:28 UTC
  To: Eric Wong; +Cc: fuse-devel, linux-fsdevel

On Fri, Dec 03, 2021 at 12:05:34AM +0000, Eric Wong wrote:
> Hi all, I'm working on a new multi-threaded FS using the
> libfuse3 fuse_lowlevel.h API.  It looks to me like the kernel
> already performs the necessary locking on a per-inode basis to
> save me some work in userspace.
> 
> In particular, I originally thought I'd need pthreads mutexes on
> a per-inode (fuse_ino_t) basis to protect userspace data
> structures between the .setattr (truncate), .fsync, and
> .write_buf userspace callbacks.
> 
> However, upon reading the kernel source, I can see that fuse_fsync
> and fuse_{cache,direct}_write_iter in fs/fuse/file.c all use
> inode_lock.  do_truncate in fs/open.c also uses inode_lock.
> 
> So it looks like implementing extra locking in userspace would
> do nothing useful in my case, right?

I guess it is probably a good idea to implement proper locking
in a multi-threaded fs and not rely on whatever locking the
kernel is doing. If the kernel's locking changes down the line,
your implementation will break.

Vivek



* Re: per-inode locks in FUSE (kernel vs userspace)
From: Miklos Szeredi @ 2021-12-07  8:38 UTC
  To: Vivek Goyal; +Cc: Eric Wong, fuse-devel, linux-fsdevel

On Mon, 6 Dec 2021 at 23:29, Vivek Goyal <vgoyal@redhat.com> wrote:
>
> On Fri, Dec 03, 2021 at 12:05:34AM +0000, Eric Wong wrote:
> > [...]
> >
> > So it looks like implementing extra locking in userspace would
> > do nothing useful in my case, right?
>
> I guess it is probably a good idea to implement proper locking
> in a multi-threaded fs and not rely on whatever locking the
> kernel is doing. If the kernel's locking changes down the line,
> your implementation will break.

The thing is, some fuse filesystem implementations already rely on
kernel locking.  So while an extra layer of locking shouldn't hurt
(beyond the added complexity and performance cost), it's not
necessary.

See, for example, FUSE_PARALLEL_DIROPS, which was added when kernel
locking changed, to avoid breaking backward compatibility.
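
For illustration, a minimal libfuse3 sketch (hypothetical names, not
taken from any real filesystem) of a server that depends on serialized
directory operations opting back out of the capability in its init
handler:

	#define FUSE_USE_VERSION 31
	#include <fuse_lowlevel.h>

	static void my_init(void *userdata, struct fuse_conn_info *conn)
	{
		(void)userdata;
		/*
		 * This server relies on lookup/readdir on a given
		 * directory being serialized, so refuse
		 * FUSE_CAP_PARALLEL_DIROPS even though libfuse
		 * enables it by default when the kernel offers it.
		 */
		if (conn->capable & FUSE_CAP_PARALLEL_DIROPS)
			conn->want &= ~FUSE_CAP_PARALLEL_DIROPS;
	}

	static const struct fuse_lowlevel_ops my_ops = {
		.init = my_init,
		/* ... other callbacks ... */
	};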


* Re: per-inode locks in FUSE (kernel vs userspace)
From: Vivek Goyal @ 2021-12-07 13:48 UTC
  To: Miklos Szeredi; +Cc: Eric Wong, fuse-devel, linux-fsdevel

On Tue, Dec 07, 2021 at 09:38:10AM +0100, Miklos Szeredi wrote:
> On Mon, 6 Dec 2021 at 23:29, Vivek Goyal <vgoyal@redhat.com> wrote:
> >
> > On Fri, Dec 03, 2021 at 12:05:34AM +0000, Eric Wong wrote:
> > > [...]
> >
> > I guess it is probably a good idea to implement proper locking
> > in a multi-threaded fs and not rely on whatever locking the
> > kernel is doing. If the kernel's locking changes down the line,
> > your implementation will break.
> 
> The thing is, some fuse filesystem implementations already rely on
> kernel locking.  So while an extra layer of locking shouldn't hurt
> (beyond the added complexity and performance cost), it's not
> necessary.

I am wondering if the same applies to virtiofs. In that case the
guest kernel is an untrusted entity, so we don't want to run into
a situation where the guest kernel can somehow corrupt virtiofsd's
shared data structures and thereby open the door to some other bad
outcome. Maybe in that case it is safer not to rely on guest
kernel locking.
> 
> See, for example, FUSE_PARALLEL_DIROPS, which was added when kernel
> locking changed, to avoid breaking backward compatibility.

Good to know about this option. I checked that fuse_lowlevel.c
enables it by default, so I should be fine from the virtiofsd
point of view.

Thanks
Vivek



* Re: per-inode locks in FUSE (kernel vs userspace)
From: Miklos Szeredi @ 2021-12-07 14:07 UTC
  To: Vivek Goyal; +Cc: Eric Wong, fuse-devel, linux-fsdevel

On Tue, 7 Dec 2021 at 14:48, Vivek Goyal <vgoyal@redhat.com> wrote:
>
> On Tue, Dec 07, 2021 at 09:38:10AM +0100, Miklos Szeredi wrote:
> > [...]
>
> I am wondering if the same applies to virtiofs. In that case the
> guest kernel is an untrusted entity, so we don't want to run into
> a situation where the guest kernel can somehow corrupt virtiofsd's
> shared data structures and thereby open the door to some other bad
> outcome. Maybe in that case it is safer not to rely on guest
> kernel locking.

That's true: virtiofs has an inverted trust model, so the server
must not assume anything about the client.

Thanks,
Miklos

