From: Andrew Walker <awalker@ixsystems.com>
To: Tom Talpey <tom@talpey.com>
Cc: Steve French <smfrench@gmail.com>,
	Paulo Alcantara <pc@manguebit.com>,
	linux-cifs@vger.kernel.org
Subject: Re: Nested NTFS volumes within Windows SMB share may result in inode collisions in linux client
Date: Thu, 2 Mar 2023 14:38:24 -0500	[thread overview]
Message-ID: <CAB5c7xoTahmT97te+vtsT4+E5nC_GmujJ4S_xOP04CT04boMkA@mail.gmail.com> (raw)
In-Reply-To: <0a9990d0-84bf-d5b3-db11-8eafad22f618@talpey.com>

On Thu, Mar 2, 2023 at 1:32 PM Tom Talpey <tom@talpey.com> wrote:
>
> On 3/2/2023 2:23 PM, Steve French wrote:
> >> Why isn't this behavior simply the default?
> >
> > Without persisted inode numbers (UniqueId) it would cause problems
> > with hardlinks (ie mounting with noserverino).  We could try a trick
> > of hashing them with the volume id if we could detect the transition
> > to a different volume (as original thread was discussing) -
> > fortunately in Linux you have to walk a path component by component so
> > might be possible to spot these more easily.
>
> Well yeah, it can't be a random assignment, and the fileid is only
> unique within the scope of a volumeid. Blindly using the server's
> fileid as a client inode without checking for a volume crossing is
> a client protocol violation, right?
>
>
> > On Thu, Mar 2, 2023 at 1:19 PM Tom Talpey <tom@talpey.com> wrote:
> >>
> >> On 3/1/2023 8:49 PM, Steve French wrote:
> >>> I would expect when the inode collision is noted that
> >>> "cifs_autodisable_serverino()" will get called in the Linux client and
> >>> you should see: "Autodisabling the use of server inode numbers on
> >>> ..."
> >>> "Consider mounting with noserverino to silence this message"
> >>
> >> Why isn't this behavior simply the default? It's going to be
> >> data corruption (sev 1 issue) if the inode number is the same
> >> for two different fileids, so this seems entirely backwards.
> >>
> >> Also, the words "to silence this message" really don't convey
> >> the severity of the situation.
> >>
> >> Tom.
> >
> >
> >

Glancing at the kernel NFS client, it appears there is dynamic
handling for crossing mountpoints:
```
/*
 * nfs_d_automount - Handle crossing a mountpoint on the server
 * @path - The mountpoint
 *
 * When we encounter a mountpoint on the server, we want to set up
 * a mountpoint on the client too, to prevent inode numbers from
 * colliding, and to allow "df" to work properly.
 * On NFSv4, we also want to allow for the fact that different
 * filesystems may be migrated to different servers in a failover
 * situation, and that different filesystems may want to use
 * different security flavours.
 */
struct vfsmount *nfs_d_automount(struct path *path)
{
```
cf. fs/nfs/namespace.c

and

```
		} else if (S_ISDIR(inode->i_mode)) {
			inode->i_op = NFS_SB(sb)->nfs_client->rpc_ops->dir_inode_ops;
			inode->i_fop = &nfs_dir_operations;
			inode->i_data.a_ops = &nfs_dir_aops;
			nfs_inode_init_dir(nfsi);
			/* Deal with crossing mountpoints */
			if (fattr->valid & NFS_ATTR_FATTR_MOUNTPOINT ||
			    fattr->valid & NFS_ATTR_FATTR_V4_REFERRAL) {
				if (fattr->valid & NFS_ATTR_FATTR_V4_REFERRAL)
					inode->i_op = &nfs_referral_inode_operations;
				else
					inode->i_op = &nfs_mountpoint_inode_operations;
				inode->i_fop = NULL;
				inode->i_flags |= S_AUTOMOUNT;
			}
```

in fs/nfs/inode.c

Andrew

