From: "J. Bruce Fields" <bfields@fieldses.org>
To: "crispyduck@outlook.at" <crispyduck@outlook.at>
Cc: Chuck Lever III <chuck.lever@oracle.com>,
	Linux NFS Mailing List <linux-nfs@vger.kernel.org>
Subject: Re: Problems with NFS4.1 on ESXi
Date: Thu, 21 Apr 2022 14:54:23 -0400
Message-ID: <20220421185423.GD18620@fieldses.org>
In-Reply-To: <20220421164049.GB18620@fieldses.org>

On Thu, Apr 21, 2022 at 12:40:49PM -0400, bfields wrote:
> On Thu, Apr 21, 2022 at 03:30:19PM +0000, crispyduck@outlook.at wrote:
> > Thanks. Nobody on the VMware side will help here, as this setup is not supported. They support NFS4.1, but officially only with storage from certain vendors.
> > 
> > I had it running in the past on FreeBSD, where I also had some problems in the beginning (RECLAIM_COMPLETE); Rick Macklem helped figure out the problem and fixed it with some patches that should now be part of FreeBSD.
> > 
> > I plan to use it with ZFS, but I also tested it on ext4 with exactly the same behavior.
> > 
> > NFS3 works fine, and NFS4.1 seems to work fine except for the described problems.
> > 
> > The reason for NFS4.1 is session trunking, which gives really good speeds when using multiple NICs/subnets, comparable to iSCSI. The goal is an NFS4.1-based storage backend for ESXi and other hypervisors.
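> > 
> > For comparison, on a Linux client session trunking can be requested
> > roughly like this (hypothetical addresses and export path; assumes a
> > recent client kernel that supports the max_connect mount option; ESXi
> > configures trunking through its own datastore settings instead):
> > 
> >   # first mount establishes the NFSv4.1 session over the first address
> >   mount -t nfs -o vers=4.1,max_connect=2 192.168.10.1:/export/ds /mnt/ds
> >   # mounting the same export via the server's second address lets the
> >   # client add a second transport (trunk) to the existing session
> >   mount -t nfs -o vers=4.1,max_connect=2 192.168.20.1:/export/ds /mnt/ds2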
> > 
> > The test was also done without session trunking.
> > 
> > This needs NFS expertise; I have no idea where else I could ask for someone to have a look at the traces.
> 
> Stale filehandles aren't normal, and suggest some bug or
> misconfiguration on the server side, either in NFS or the exported
> filesystem.

Actually, I should take that back: if one client removes files while a
second client is using them, it'd be normal for applications on that
second client to see ESTALE.
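
For illustration (hypothetical paths), that scenario looks roughly like
this:

  # client A: keep a file in use
  tail -f /mnt/datastore/template.vmdk &

  # client B: remove the same file
  rm /mnt/datastore/template.vmdk

  # client A's next operation on the file it still holds open can
  # then fail with ESTALE ("Stale file handle"), because the object
  # its filehandle refers to no longer exists on the server.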

So it might be interesting to know what actually happens when VM
templates are imported.

I suppose you could also try NFSv4.0 or try varying kernel versions to
try to narrow down the problem.
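
For instance (export path made up for the example), a Linux test client
could exercise both versions against the same export and re-run an
import-like workload, to see whether the failure is specific to 4.1:

  mount -t nfs -o vers=4.0 sepp-sto-01:/export/datastore /mnt/test
  # reproduce the workload, unmount, then repeat with:
  mount -t nfs -o vers=4.1 sepp-sto-01:/export/datastore /mnt/test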

No easy ideas off the top of my head, sorry.

--b.

> Figuring out more than that would require more
> investigation.
> 
> --b.
> 
> > 
> > Br,
> > Andi
> > 
> > 
> > 
> > 
> > 
> > 
> > From: Chuck Lever III <chuck.lever@oracle.com>
> > Sent: Thursday, 21 April 2022 16:58
> > To: Andreas Nagy <crispyduck@outlook.at>
> > Cc: Linux NFS Mailing List <linux-nfs@vger.kernel.org>
> > Subject: Re: Problems with NFS4.1 on ESXi
> >  
> > Hi Andreas-
> > 
> > > On Apr 21, 2022, at 12:55 AM, Andreas Nagy <crispyduck@outlook.at> wrote:
> > > 
> > > Hi,
> > > 
> > > I hope this mailing list is the right place to discuss some problems with nfs4.1.
> > 
> > Well, yes and no. This is an upstream developer mailing list,
> > not really for user support.
> > 
> > You seem to be asking about products that are currently supported,
> > and I'm not sure if the Debian kernel is stock upstream 5.13 or
> > something else. ZFS is not an upstream Linux filesystem and the
> > ESXi NFS client is something we have little to no experience with.
> > 
> > I recommend contacting the support desk for your products. If
> > they find a specific problem with the Linux NFS server's
> > implementation of the NFSv4.1 protocol, then come back here.
> > 
> > 
> > > After switching from a FreeBSD host as NFS server to a Proxmox environment also serving NFS, I see some strange issues in combination with VMware ESXi.
> > > 
> > > After first thinking it worked fine, I started to realize that there are problems with ESXi datastores on NFS4.1 when trying to import VMs (OVF).
> > > 
> > > Importing ESXi OVF VM templates fails nearly every time with the ESXi error message "postNFCData failed: Not Found". With NFS3 it works fine.
> > > 
> > > NFS server is running on a Proxmox host:
> > > 
> > >  root@sepp-sto-01:~# hostnamectl
> > >  Static hostname: sepp-sto-01
> > >  Icon name: computer-server
> > >  Chassis: server
> > >  Machine ID: 028da2386e514db19a3793d876fadf12
> > >  Boot ID: c5130c8524c64bc38994f6cdd170d9fd
> > >  Operating System: Debian GNU/Linux 11 (bullseye)
> > >  Kernel: Linux 5.13.19-4-pve
> > >  Architecture: x86-64
> > > 
> > > 
> > > The file system is ZFS, but I also tried it with others and it is the same behaviour.
> > > 
> > > 
> > > ESXi version 7.2U3
> > > 
> > > ESXi vmkernel.log:
> > > 2022-04-19T17:46:38.933Z cpu0:262261)cswitch: L2Sec_EnforcePortCompliance:209: [nsx@6876 comp="nsx-esx" subcomp="vswitch"]client vmk1 requested promiscuous mode on port 0x4000010, disallowed by vswitch policy
> > > 2022-04-19T17:46:40.897Z cpu10:266351 opID=936118c3)World: 12075: VC opID esxui-d6ab-f678 maps to vmkernel opID 936118c3
> > > 2022-04-19T17:46:40.897Z cpu10:266351 opID=936118c3)WARNING: NFS41: NFS41FileDoCloseFile:3128: file handle close on obj 0x4303fce02850 failed: Stale file handle
> > > 2022-04-19T17:46:40.897Z cpu10:266351 opID=936118c3)WARNING: NFS41: NFS41FileOpCloseFile:3718: NFS41FileCloseFile failed: Stale file handle
> > > 2022-04-19T17:46:41.164Z cpu4:266351 opID=936118c3)WARNING: NFS41: NFS41FileDoCloseFile:3128: file handle close on obj 0x4303fcdaa000 failed: Stale file handle
> > > 2022-04-19T17:46:41.164Z cpu4:266351 opID=936118c3)WARNING: NFS41: NFS41FileOpCloseFile:3718: NFS41FileCloseFile failed: Stale file handle
> > > 2022-04-19T17:47:25.166Z cpu18:262376)ScsiVmas: 1074: Inquiry for VPD page 00 to device mpx.vmhba32:C0:T0:L0 failed with error Not supported
> > > 2022-04-19T17:47:25.167Z cpu18:262375)StorageDevice: 7059: End path evaluation for device mpx.vmhba32:C0:T0:L0
> > > 2022-04-19T17:47:30.645Z cpu4:264565 opID=9529ace7)World: 12075: VC opID esxui-6787-f694 maps to vmkernel opID 9529ace7
> > > 2022-04-19T17:47:30.645Z cpu4:264565 opID=9529ace7)VmMemXfer: vm 264565: 2465: Evicting VM with path:/vmfs/volumes/9f10677f-697882ed-0000-000000000000/test-ovf/test-ovf.vmx
> > > 2022-04-19T17:47:30.645Z cpu4:264565 opID=9529ace7)VmMemXfer: 209: Creating crypto hash
> > > 2022-04-19T17:47:30.645Z cpu4:264565 opID=9529ace7)VmMemXfer: vm 264565: 2479: Could not find MemXferFS region for /vmfs/volumes/9f10677f-697882ed-0000-000000000000/test-ovf/test-ovf.vmx
> > > 2022-04-19T17:47:30.693Z cpu4:264565 opID=9529ace7)VmMemXfer: vm 264565: 2465: Evicting VM with path:/vmfs/volumes/9f10677f-697882ed-0000-000000000000/test-ovf/test-ovf.vmx
> > > 2022-04-19T17:47:30.693Z cpu4:264565 opID=9529ace7)VmMemXfer: 209: Creating crypto hash
> > > 2022-04-19T17:47:30.693Z cpu4:264565 opID=9529ace7)VmMemXfer: vm 264565: 2479: Could not find MemXferFS region for /vmfs/volumes/9f10677f-697882ed-0000-000000000000/test-ovf/test-ovf.vmx
> > > 
> > > A tcpdump taken on the ESXi host, filtered on the NFS server IP, is available here:
> > > https://easyupload.io/xvtpt1
> > > 
> > > I tried to analyze it, but I have no idea what exactly the problem is. Maybe it is some issue with the VMware implementation?
> > > It would be nice if someone with better NFS knowledge could have a look at the traces.
> > > 
> > > Best regards,
> > > cd
> > 
> > --
> > Chuck Lever
> > 

