From: Chuck Lever III <chuck.lever@oracle.com>
To: Neil Brown <neilb@suse.de>
Cc: Linux NFS Mailing List <linux-nfs@vger.kernel.org>
Subject: Re: [PATCH] Documentation: Add an explanation of NFSv4 client identifiers
Date: Tue, 12 Apr 2022 15:13:39 +0000	[thread overview]
Message-ID: <4918188E-9271-47F2-8F5A-D6D5BEB85F36@oracle.com> (raw)
In-Reply-To: <164974719723.11576.583440068909686735@noble.neil.brown.name>



> On Apr 12, 2022, at 3:06 AM, NeilBrown <neilb@suse.de> wrote:
> 
> On Tue, 12 Apr 2022, Chuck Lever wrote:
>> To enable NFSv4 to work correctly, NFSv4 client identifiers have
>> to be globally unique and persistent over client reboots. We
>> believe that in many cases, a good default identifier can be
>> chosen and set when a client system is imaged.
>> 
>> Because there are many different ways a system can be imaged,
>> provide an explanation of how NFSv4 client identifiers and
>> principals can be set by install scripts and imaging tools.
>> 
>> Additional cases, such as NFSv4 clients running in containers, also
>> need unique and persistent identifiers. The Linux NFS community
>> sets forth this explanation to aid those who create and manage
>> container environments.
>> 
>> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
>> ---
>> .../filesystems/nfs/client-identifier.rst          |  212 ++++++++++++++++++++
>> Documentation/filesystems/nfs/index.rst            |    2 
>> 2 files changed, 214 insertions(+)
>> create mode 100644 Documentation/filesystems/nfs/client-identifier.rst
>> 
>> diff --git a/Documentation/filesystems/nfs/client-identifier.rst b/Documentation/filesystems/nfs/client-identifier.rst
>> new file mode 100644
>> index 000000000000..5d056145833f
>> --- /dev/null
>> +++ b/Documentation/filesystems/nfs/client-identifier.rst
>> @@ -0,0 +1,212 @@
>> +=======================
>> +NFSv4 client identifier
>> +=======================
>> +
>> +This document explains how the NFSv4 protocol identifies client
>> +instances in order to maintain file open and lock state during
>> +system restarts. A special identifier and principal are maintained
>> +on each client. These can be set by administrators, scripts
>> +provided by site administrators, or tools provided by Linux
>> +distributors.
>> +
>> +There are risks if a client's NFSv4 identifier and its principal
>> +are not chosen carefully.
>> +
>> +
>> +Introduction
>> +------------
>> +
>> +The NFSv4 protocol uses "lease-based file locking". Leases help
>> +NFSv4 servers provide file lock guarantees and manage their
>> +resources.
>> +
>> +Simply put, an NFSv4 server creates a lease for each NFSv4 client.
>> +The server collects each client's file open and lock state under
>> +the lease for that client.
>> +
>> +The client is responsible for periodically renewing its leases.
>> +While a lease remains valid, the server holding that lease
>> +guarantees the file locks the client has created remain in place.
>> +
>> +If a client stops renewing its lease (for example, if it crashes),
>> +the NFSv4 protocol allows the server to remove the client's open
>> +and lock state after a certain period of time. When a client
>> +restarts, it indicates to servers that open and lock state
>> +associated with its previous leases is no longer valid.
> 
> Add "and can be removed immediately". This makes it clear how the two
> sentences in the para relate.
> 
>> +
>> +In addition, each NFSv4 server manages a persistent list of client
>> +leases. When the server restarts, it uses this list to distinguish
>> +between requests from clients that held state before the server
>> +restarted and from clients that did not. This enables file locks to
>> +persist safely across server restarts.
> 
> I still think this is a bit misleading.  It distinguishes between
> clients, not between requests.  I would prefer:
> 
>  When the server restarts, clients will attempt to recover their state.
>  The server uses this list to distinguish between clients with state
>  that can still be recovered and clients that don't - possibly because
>  their state expired before the server restarted.
> 
> 
>> +
>> +NFSv4 client identifiers
>> +------------------------
>> +
>> +Each NFSv4 client presents an identifier to NFSv4 servers so that
>> +they can associate the client with its lease. Each client's
>> +identifier consists of two elements:
>> +
>> +  - co_ownerid: An arbitrary but fixed string.
>> +
>> +  - boot verifier: A 64-bit incarnation verifier that enables a
>> +    server to distinguish successive boot epochs of the same client.
>> +
>> +The NFSv4.0 specification refers to these two items as an
>> +"nfs_client_id4". The NFSv4.1 specification refers to these two
>> +items as a "client_owner4".
>> +
>> +NFSv4 servers tie this identifier to the principal and security
>> +flavor that the client used when presenting it. Servers use this
>> +principal to authorize subsequent lease modification operations
>> +sent by the client. Effectively this principal is a third element of
>> +the identifier.
>> +
>> +As part of the identity presented to servers, a good
>> +"co_ownerid" string has several important properties:
>> +
>> +  - The "co_ownerid" string identifies the client during reboot
>> +    recovery, therefore the string is persistent across client
>> +    reboots.
>> +  - The "co_ownerid" string helps servers distinguish the client
>> +    from others, therefore the string is globally unique. Note
>> +    that there is no central authority that assigns "co_ownerid"
>> +    strings.
>> +  - Because it often appears on the network in the clear, the
>> +    "co_ownerid" string does not reveal private information about
>> +    the client itself.
>> +  - The content of the "co_ownerid" string is set and unchanging
>> +    before the client attempts NFSv4 mounts after a restart.
>> +  - The NFSv4 protocol does not place a limit on the size of the
>> +    "co_ownerid" string, but most NFSv4 implementations do not
>> +    tolerate excessively long "co_ownerid" strings.
> 
> RFC5661 declares:
>   struct client_owner4 {
>           verifier4       co_verifier;
>           opaque          co_ownerid<NFS4_OPAQUE_LIMIT>;
>   };
> and
>   const NFS4_OPAQUE_LIMIT         = 1024;
> 
> so I think there is a clear limit that must be honoured by both sides.
> 
>> +
>> +Protecting NFSv4 lease state
>> +----------------------------
>> +
>> +NFSv4 servers utilize the "client_owner4" as described above to
>> +assign a unique lease to each client. Under this scheme, there are
>> +circumstances where clients can interfere with each other. This is
>> +referred to as "lease stealing".
>> +
>> +If distinct clients present the same "co_ownerid" string and use
>> +the same principal (for example, AUTH_SYS and UID 0), a server is
>> +unable to tell that the clients are not the same. Each distinct
>> +client presents a different boot verifier, so it appears to the
>> +server as if there is one client that is rebooting frequently.
>> +Neither client can maintain open or lock state in this scenario.
>> +
>> +If distinct clients present the same "co_ownerid" string and use
>> +distinct principals, the server is likely to allow the first client
>> +to operate normally but reject subsequent clients with the same
>> +"co_ownerid" string.
>> +
>> +If a client's "co_ownerid" string or principal are not stable,
>> +state recovery after a server or client reboot is not guaranteed.
>> +If a client unexpectedly restarts but presents a different
>> +"co_ownerid" string or principal to the server, the server orphans
>> +the client's previous open and lock state. This blocks access to
>> +locked files until the server removes the orphaned state.
>> +
>> +If the server restarts and a client presents a changed "co_ownerid"
>> +string or principal to the server, the server will not allow the
>> +client to reclaim its open and lock state, and may give those locks
>> +to other clients in the meantime. This is referred to as "lock
>> +stealing".
> 
> This is not a possible scenario with Linux NFS client.  The client
> assembles the string once from various sources, then uses it
> consistently at least until unmount or reboot.  Is it worth mentioning?

Neil, thanks for the eyes-on. I've integrated the other suggestions
in your reply. However there are some corner cases here that I'd
like to consider before proceeding.

Generally, preserving the cl_owner_id string is good defense against
lock stealing. Looks like the Linux NFS client didn't do that before
ceb3a16c070c ("NFSv4: Cache the NFSv4/v4.1 client owner_id in the
struct nfs_client").

If a server filesystem is migrated to a server that the client hasn't
contacted before, and the client's uniquifier or hostname has changed
since the client established its lease with the first server, there
is the possibility of lock stealing during transparent state migration.

I'm also not certain how the Linux NFS client preserves the principal
that was used when a lease is first established. It's going to use
Kerberos if possible, but what if the kernel's cred cache expires and
the keytab has been altered in the meantime? I haven't walked through
that code carefully enough to understand whether there is still a
vulnerability.
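
While walking through those corner cases, it can help to see exactly
what identity material a client is currently advertising. A quick
sketch (the sysfs paths exist only on recent kernels, so the loop
guards for their absence):

```shell
# Sketch: show the identity sources the Linux client folds into its
# co_ownerid string. Guard each path since older kernels lack them.
for f in /sys/module/nfs/parameters/nfs4_unique_id \
         /sys/fs/nfs/client/net/identifier; do
    [ -r "$f" ] && printf '%s: %s\n' "$f" "$(cat "$f")"
done
uname -n    # the UTS node name used in the default co_ownerid string
```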


>> +
>> +Lease stealing and lock stealing increase the potential for denial
>> +of service and in rare cases even data corruption.
>> +
>> +Selecting an appropriate client identifier
>> +------------------------------------------
>> +
>> +By default, the Linux NFSv4 client implementation constructs its
>> +"co_ownerid" string starting with the words "Linux NFS" followed by
>> +the client's UTS node name (the same node name, incidentally, that
>> +is used as the "machine name" in an AUTH_SYS credential). In small
>> +deployments, this construction is usually adequate. Often, however,
>> +the node name by itself is not adequately unique, and can change
>> +unexpectedly. Problematic situations include:
>> +
>> +  - NFS-root (diskless) clients, where the local DHCP server (or
>> +    equivalent) does not provide a unique host name.
>> +
>> +  - "Containers" within a single Linux host.  If each container has
>> +    a separate network namespace, but does not use the UTS namespace
>> +    to provide a unique host name, then there can be multiple NFS
>> +    client instances with the same host name.
>> +
>> +  - Clients across multiple administrative domains that access a
>> +    common NFS server. If hostnames are not assigned centrally
>> +    then uniqueness cannot be guaranteed unless a domain name is
>> +    included in the hostname.
>> +
>> +Linux provides two mechanisms to add uniqueness to its "co_ownerid"
>> +string:
>> +
>> +    nfs.nfs4_unique_id
>> +      This module parameter can set an arbitrary uniquifier string
>> +      via the kernel command line, or when the "nfs" module is
>> +      loaded.
>> +
>> +    /sys/fs/nfs/client/net/identifier
>> +      This virtual file, available since Linux 5.3, is local to the
>> +      network namespace in which it is accessed and so can provide
>> +      distinction between network namespaces (containers) when the
>> +      hostname remains uniform.
>> +
>> +Note that this file is empty on namespace creation. If the
>> +container system has access to some sort of per-container identity
>> +then that uniquifier can be used. For example, a uniquifier might
>> +be formed at boot using the container's internal identifier::
>> +
>> +    sha256sum /etc/machine-id | awk '{print $1}' \
>> +        > /sys/fs/nfs/client/net/identifier
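
As an aside, the same derivation works for the module-parameter route
on hosts that prefer it. A sketch, with "example-machine-id" standing
in for the contents of /etc/machine-id (the modprobe snippet path is
an assumption; adapt to your distro):

```shell
# Sketch: derive a stable 64-hex-char uniquifier from a per-host
# identity, suitable for the nfs.nfs4_unique_id module parameter.
machine_id="example-machine-id"   # stands in for /etc/machine-id
uniquifier=$(printf '%s' "$machine_id" | sha256sum | awk '{print $1}')
echo "$uniquifier"

# On a real client, persist it so it applies the next time nfs.ko
# loads (hypothetical file name):
#   echo "options nfs nfs4_unique_id=$uniquifier" \
#       > /etc/modprobe.d/nfs4-id.conf
```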
>> +
>> +Security considerations
>> +-----------------------
>> +
>> +The use of cryptographic security for lease management operations
>> +is strongly encouraged.
>> +
>> +If NFS with Kerberos is not configured, a Linux NFSv4 client uses
>> +AUTH_SYS and UID 0 as the principal part of its client identity.
>> +This configuration is not only insecure, it increases the risk of
>> +lease and lock stealing. However, it might be the only choice for
>> +client configurations that have no local persistent storage.
>> +"co_ownerid" string uniqueness and persistence is critical in this
>> +case.
>> +
>> +When a Kerberos keytab is present on a Linux NFS client, the client
>> +attempts to use one of the principals in that keytab when
>> +identifying itself to servers. Alternately, a single-user client
>> +with a Kerberos principal can use that principal in place of the
>> +client's host principal.
> 
> I think this happens even when "-o sec=krb?" isn't requested.  Is that
> correct?  Is it worth stating that here?  I guess the next paragraph
> suggests it, but making it more explicit could help.
> 
>> +
>> +Using Kerberos for this purpose enables the client and server to
>> +use the same lease for operations covered by all "sec=" settings.
>> +Additionally, the Linux NFS client uses the RPCSEC_GSS security
>> +flavor with Kerberos and the integrity QOS to prevent in-transit
>> +modification of lease modification requests.
>> +
>> +Additional notes
>> +----------------
>> +
>> +The Linux NFSv4 client establishes a single lease on each NFSv4
>> +server it accesses. NFSv4 mounts from a Linux NFSv4 client of a
>> +particular server then share that lease.
>> +
>> +Once a client establishes open and lock state, the NFSv4 protocol
>> +enables lease state to transition to other servers, following data
>> +that has been migrated. This hides data migration completely from
>> +running applications. The Linux NFSv4 client facilitates state
>> +migration by presenting the same "client_owner4" to all servers it
>> +encounters.
>> +
>> +See Also
>> +--------
>> +
>> +  - nfs(5)
>> +  - kerberos(7)
>> +  - RFC 7530 for the NFSv4.0 specification
>> +  - RFC 8881 for the NFSv4.1 specification.
>> diff --git a/Documentation/filesystems/nfs/index.rst b/Documentation/filesystems/nfs/index.rst
>> index 288d8ddb2bc6..8536134f31fd 100644
>> --- a/Documentation/filesystems/nfs/index.rst
>> +++ b/Documentation/filesystems/nfs/index.rst
>> @@ -6,6 +6,8 @@ NFS
>> .. toctree::
>>    :maxdepth: 1
>> 
>> +   client-identifier
>> +   exporting
>>    pnfs
>>    rpc-cache
>>    rpc-server-gss
>> 
>> 
>> 
> 
> 
> Generally good - just a few suggestions to consider.
> 
> Thanks,
> NeilBrown

--
Chuck Lever




